From YouTube: Grafana Tempo Community call 2022-08-11
Description
- Tempo 1.5 overview
- AMA (Discussion of tuning at scale and more)
- Funny hats
Join the next community call here:
https://docs.google.com/document/d/1yGsI6ywU-PxZBjmq3p3vAXr9g5yBXSDk4NU8LGo8qeY/edit
Learn more at https://grafana.com and if all of this looks like fun, feel invited to see if there’s a role that fits you at https://grafana.com/about/careers/
A: All right, welcome to the August edition of the Tempo community call, the turkey edition, somehow not November.
A: We're going to enjoy this. We'll start with some quick news, beginning with team member news. The biggest team member update is that Annanay, who has been with the project since the beginning, is moving on: he's going back to school for a master's degree, so he's taking a break. He'll remain a maintainer. He's done amazing work on the project, and I think he'll probably stay involved in some ways, but Tempo will no longer be his focus in life, and we all wish him well.
A: So if you want to drop him a line in Slack or tweet at him or whatever, I think he'd appreciate it. He's been a huge part of this project and he'll still be involved, but I think his contributions will come down a bit, and I really wish he were here. I guess he can watch the recording too.
A: In other news (and Jenny's not here either), we recently added a new developer in the past couple of weeks, Jenny Lam, and she's been a great addition; she's just getting started. Suraj as well. So we have some new additions to the team, neither of whom could show up, but they can watch the recording too, I guess. So there you go.
A: That's it for team member news, a little bit of a change-up there, but we're going to keep moving forward, of course. We have a strong team and I'm excited to see what happens next. Tempo 1.5 has been cut: two release candidates have been made, one today and one last week. I think there might be one more commit that we want to sneak in, but it's for the most part ready to go. I'll link you to RC zero.
A: We are running it right now in our operational cluster. I just looked, and it was actually larger than I realized: it's over 300 megs a second (I had thought more like 250), and around 2 million spans a second. So we're running that now. We are having some small issues, small stability issues, but we're trying to tighten up the ship.
A: The biggest problem really is that compaction just takes way longer with the Parquet blocks, and we're going to continue to work on that as we go forward. But at our scale we are definitely seeing the performance of the compactors reduced quite a bit, and we had to scale our compactors up. That's the major caveat if you want to run the Parquet backend: watch your compactors, and you might need to scale those up.
A: We are aware of a panic that only happens in the single binary, but we'll definitely look to fixing that up before 2.0. We've never seen it in our distributed setup with the various components, but it's fairly easy to reproduce with the single binary just by spinning up the docker-compose, and there's a helpful member of the community who has posted an issue on that with a lot of detail. So it's something to do with Parquet.
A: It does a lot of unsafe pointer handling for performance, and maybe they juggled a pointer poorly; there appears to be some kind of pointer issue there. But we'll file an issue upstream, we'll get some good examples for them, and we'll try to get that ironed out, certainly before 2.0.
A: So if you have Kafka or a RabbitMQ kind of situation, a queueing thing, or you're using any kind of database, the metrics generator code now looks for basically all of the OpenTelemetry standard semantic tagging, and if it sees those, it will create additional nodes and decorate them with that information. You can get your databases and your queues on your service graph now, which is cool.
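(For illustration, the tags in question are the standard OpenTelemetry semantic-convention attributes on client spans. The exact attribute set Tempo inspects isn't listed on the call, so the names below are just representative examples taken from the OTel conventions.)

```python
# Representative OpenTelemetry semantic-convention attributes of the kind the
# metrics generator looks for. The exact set Tempo inspects is an assumption
# here; the attribute names themselves come from the OTel semantic conventions.
database_client_span_attrs = {
    "db.system": "postgresql",  # marks the span as a database client call
    "db.name": "orders",        # the database that shows up as a peer node
}

messaging_producer_span_attrs = {
    "messaging.system": "rabbitmq",       # marks the span as a queue interaction
    "messaging.destination": "checkout",  # the queue/topic shown on the service graph
}
```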
A: The final thing is usage stats. We did add usage stats, which is similar in implementation to Loki's. The blog post will have details about what is shared with Grafana; it's all, of course, anonymous. It's just details about configuration, to help us know what people are using and to help us target our efforts in terms of improving the product. The blog post will detail what's sent, and it'll also detail how to disable it; it's a simple config setting to disable.
A: So all of the information will be there, and that's 1.5 in a nutshell. There are, of course, a bunch of features and additional stuff, as well as some small breaking changes, nothing major, and some bug fixes. It's a pretty standard release, with Parquet, I would say, being the highlight if people want to experiment with it. Cool, so I have a turkey head on and it's time to do an AMA.
A: If anyone would, yeah, feel free to ask anything you want. You can unmute, or you can just drop it in chat here, or even put it in the doc for the team. This is not a "me" AMA, I guess; if you do ask a question about me, I'll maybe answer it, but it's more like a Tempo team AMA. But if you want to ask what my favorite pizza is or something, I'll let you know.
A: I'm going to ask a question, Joe: what's your favorite pizza? Man, I like a lot of pizza. I'm pretty much a veggie guy, really, so I like the olives; green olives and black olives are my favorites, peppers too, get those on there.
A: Banana peppers are delicious. So I'd say a good veggie pizza with a lot of options, that's probably my favorite pizza.
A: That is a question I won't answer. I feel that's a little too personal, and I don't think we can really do that in a public forum.
D: Hey, I haven't been here in a few, but I'm back, I'm back with the performance and scaling questions, I guess. Since the last time I was here, we had a pretty significant total traffic increase; I would say we're doing about 250 million spans per minute in a single tenant per cluster right now.
D: So the ingest path is mostly fine; that's been scaling up pretty well. I think what we notice is that query times are becoming increasingly long because of increasing block list size, so we've been increasing max block size progressively, making bigger and bigger blocks to help with the block list.
D: But I don't know if there are guidelines on how to tweak query performance. There's stuff that we kind of stumbled upon a little bit, like we run a 100-gigabyte memcached instance and that helps for cache hits, but there's an upper limit to that. We could run more queriers, like way more, but we're still seeing, we'd like it to be even faster, I guess, the trace lookups.
A: Okay, I'll also point to this PR that came in. Oh, you've got to tell me how many bytes per second you're doing too, but this PR was, when was this? I think it was even in the previous release, 1.4. With this PR, Tempo will take start and end parameters on trace ID search and only search the blocks that are within those ranges.
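(As a sketch of what that looks like from the client side, the trace-by-ID endpoint simply takes extra query parameters. The host, port, and exact parameter handling below are assumptions, so check the Tempo API docs for your version.)

```python
import time

import requests

# Minimal sketch: look up a trace by ID and hint Tempo at a time range so it
# only searches blocks overlapping that window. The endpoint, port, and
# parameter names are assumptions based on the discussion above.
TEMPO_URL = "http://localhost:3200"            # hypothetical query-frontend address
trace_id = "2f3e0cee77ae5dc9c17ade3689eb2e54"  # hypothetical trace ID

end = int(time.time())
start = end - 2 * 3600  # only consider blocks from the last two hours

resp = requests.get(
    f"{TEMPO_URL}/api/traces/{trace_id}",
    params={"start": start, "end": end},  # unix epoch seconds
)
resp.raise_for_status()
print(resp.json())
```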
D: I know in the past we mentioned that we had super long-running traces; we mostly got rid of those, so that's not a problem anymore. We replaced them with links between traces, so we can exclude those. We have short traces now, you know, a few seconds, so that's, yeah.
A: Cool. Now, Grafana does not support this yet, although I think there was somebody who opened a PR and it kind of died. I wish we had Joey, or excuse me, maybe Connor, on to comment. But let's see if I can at least find the PR. Here we go, here's the PR; it's still unmerged, but I'll put it in the community call doc here.
A: Oops, I didn't mean to do that. Yeah, so there's a PR for Grafana. Did I erase the other link somehow? I may have done that, but anyway, this is somebody who was attempting it; it's actually the same person who added the parameter on our end, and it just kind of died. But I know the team has expressed interest in this; there was some back-and-forth chatter and then the developer just kind of disappeared, I guess. So, you know, extending Grafana to use this.
A: I think the only ask they had was: can we make it configurable? They had already put the parameter in; they just wanted there to be a little configuration option, so it was disabled by default, and you could say, you know, go an hour outside of the range, or two hours or whatever, a configurable option there.
A: So if someone sees that PR through, or if you just jump in and push it and ask if there's interest or whatever, I think you could probably get some momentum on that. Then you would really be able to chop down the percentage of the block list you were searching for trace IDs.
D: Okay, yeah, that's something I can take a look at. I'm also wondering if there's an obvious performance hit from having really large blocks. Like, let's say my blocks are 50 gigs on average; they're not, I'm just wondering how much I can stretch it. I think the average block is like 10 to 15 gigs right now, really.
D: We have a lot of really small traces. Basically, our biggest volume producer has special instrumentation that collapses all the spans together, except for a really small percentage. So we have a lot of, you know, two- or three-span traces.
A: I don't think there really is an upper bound. I mean, at some point your compactor spends a long time taking eight blocks and creating one block, where it could have done a whole lot of smaller blocks to reduce your block list. So it's kind of an opportunity cost, I suppose, to making bigger and bigger blocks, but if that's just necessary in order to make your block list short enough, that's fine.
A: The other thing is, I think there's an upper limit on the total size that we'll devote to the bloom filters. Is that true? It's been a while since I've looked at that code. If that's true, then at some point your bloom filter false positive rate will go up and you'll be checking more blocks than you have to, and with that many traces of that small a size, you may be running into that. I'd have to go look at that code again.
A: No, but if you just check some of your, oh, is it in the meta.json? Is it? No, I don't think it is. Darn, no, I don't think it is.
A: The false positive rate is for the blocks that are being created. You could do a really basic bloom filter estimate: just take the total size dedicated to bloom filters and the total number of objects (trace IDs), and that will give you a rough estimate of the false positive rate. Okay, let me make a note to find some of that detail for you. Could you share your bytes per second? I'd love to hear what that is at your 250 million spans per minute.
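(The back-of-the-envelope estimate mentioned above is just the standard Bloom filter false-positive formula. A minimal sketch, where the bloom size and object count would come from a block's metadata and the hash-function count is an assumed value.)

```python
import math

def bloom_false_positive_rate(bloom_bytes: int, num_objects: int, num_hashes: int = 7) -> float:
    """Standard Bloom filter estimate: p ~= (1 - e^(-k*n/m))^k,
    where m is bits in the filter, n is inserted items, k is hash functions."""
    m = bloom_bytes * 8
    return (1.0 - math.exp(-num_hashes * num_objects / m)) ** num_hashes

# Example: a block devoting 100 MB to bloom filters and holding 50 million trace IDs.
# The hash count of 7 is an assumption; Tempo's actual default may differ.
print(f"{bloom_false_positive_rate(100 * 1024 * 1024, 50_000_000):.6f}")
```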
A: Okay, no worries. I might bug you for that one in Slack when I find this false positive stuff for you.
C: Yes, so yeah, I have a question. Some time ago I saw a PR in the Tempo Helm chart about adding an autoscaler for all the components, and I saw that you added a comment there saying, hey, I actually don't recommend applying the autoscaler to the ingesters.
C: So I wonder, then, how do you do it at Grafana? How do you provision your ingesters if it's not with an autoscaler, or is there a technique maybe we can use? And another question after this one: is it better to have a bunch of smaller ingesters, or a few very beefy ingesters?
A: Sure, and I'd like to hear what other team members think on this too. First of all, we don't autoscale ingesters; they're statically provisioned. The problem is that downscaling is a manual process right now. You could maybe put some upward automatic scaling in place, but I would never automate scaling down. You have to hit a flush endpoint, and then it will force the ingester to remove itself from the ring and flush.
A: All the data, so there's kind of a manual thing there. Distributors, queriers, compactors, those can all be autoscaled via CPU or requests or any kind of normal thing you might want to scale by, but ingesters are stateful and cannot really be downscaled without some kind of manual process right now. We really should remove that from the Helm chart; not the autoscaling in general, that's fine, but that one piece needs to be removed. And then, in terms of, I need to drop.
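(The flush step described here can be scripted as part of a manual scale-down. A minimal sketch, where the pod name, port, path, and HTTP method are all assumptions, so verify the ingester's flush endpoint against the Tempo docs for your version.)

```python
import requests

# Minimal sketch of the manual ingester scale-down step described above: ask the
# ingester to flush its in-memory data to the backend before the pod is removed.
# The hostname, port, path, and HTTP method are assumptions.
ingester = "http://tempo-ingester-5:3200"  # hypothetical pod address

resp = requests.post(f"{ingester}/flush")
resp.raise_for_status()
print("flush triggered; wait for it to complete before terminating the ingester")
```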
E: Yeah, see, I was thinking the other way: fewer, beefier ingesters would let you create larger blocks from the get-go, so there's a lot less work for the compactor to have to come back and clean up, right? I mean, yeah, there's a lower bound, like you have to think about blast radius and stuff like that, but it also accommodates larger traces better.
A: Our ingesters, after the replication factor, so this is times three, I think our ingesters take about 30 megs a second each would be my guess. Maybe if we measured that by ingestion rate, that might be a good way to provide guidelines, but I think, right, we receive, like I said, 300-something megs a second.
A: We have about 30 ingesters. Well, we did, we had roughly the same number of ingesters as we had tens of megabytes per second, I guess, and then times three, so 30.
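(As a rough worked example of that sizing rule: received bytes per second times the replication factor, divided by a per-ingester budget, gives an ingester count. The numbers below are the ones quoted on the call.)

```python
# Rough ingester-count arithmetic from the numbers above: distributors replicate
# each span to `replication_factor` ingesters, and each ingester comfortably
# handles roughly 30 MB/s in this setup.
receive_mb_per_sec = 300      # bytes/sec arriving at the distributors
replication_factor = 3
per_ingester_mb_per_sec = 30  # rough per-ingester budget quoted on the call

ingesters_needed = receive_mb_per_sec * replication_factor / per_ingester_mb_per_sec
print(ingesters_needed)  # -> 30.0
```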
A: Ooh, Ryan is curious: has anyone from this group ever run across this, or a request to visualize traces as flame graphs?
B: Yeah, I was just curious. It was not something that I would have thought of myself, but it was an issue in the Jaeger repo, and there are actually a couple of different ones where people asked this on separate occasions, so apparently some people found it useful. I was curious.
A: That'd be kind of interesting, right? Would you lose anything in a flame graph?
B: I think you kind of lose the sequential aspect of it; I guess you get more of an aggregate. It seems like the typical situation where someone would request it is when there's a ton of really small spans all on top of each other that make the trace view super long vertically or something, I don't know personally. That's kind of what I'm gathering, I guess.
A: Yeah, if everything was synchronous, I think a flame graph would work really well. I think the problem is when you have concurrency, right, because then how would you represent multiple stacks? If you had some kind of depth to your flame graph, maybe that'd be kind of cool, when a bunch of things happen at once.
A: I don't know, I'm not sure how you would pull that off, right? It'd be kind of neat if left to right was the timeline, and then you could see what you're spending time on, because flame graphs are just brilliant ways to visualize that kind of data.
B: Cool, thanks. Yeah, somewhat related, I think internally we also discussed changing the trace view to make it stack to the left, so it wouldn't be stretched out, but it would be easier to compare durations because all the smaller spans start on the left column. It feels kind of similar, but I'm not sure if the Grafana team has anything planned around this.
A: Serverless, good question: will serverless search work with Parquet in 1.5? The answer is yes, although we've just really struggled with serverless. It's kind of a black box, it's really hard to see what's happening in there, and it just adds a lot of latency and confusion generally. In fact, our best timings on Parquet have been when we removed serverless and just relied on our queriers only. So I think there'll be more guidance as 2.0 comes out; it does work with serverless and we'll keep that option.
A: I think where we left it off last was that our queriers were doing most of the load, but for the absolute biggest queries we were letting it spill over into serverless, and that was kind of a good balance. So if you queried huge time ranges that required additional resources, we'd spill over to serverless, but we'd like to keep most queries inside the queriers, and I think we found that was a better balance.
A: So I don't know for sure, but it should continue to work. Basically, I don't see any reason why it wouldn't.
A: Yeah, Cloud Run, we found we like Cloud Run better. I should have started there, but whatever. The ability to choose the Go version, there are a lot more tunables in terms of how it scales and resources and all of this, so we've had a lot more success on Cloud Run than on Cloud Functions, but it's still just, sometimes.
A: Sometimes you execute a query that should take 100 milliseconds and it takes 10 seconds, and it happens a lot. It's like with GCS: every so often you're hitting the five-nines percentile, and so you sometimes get a couple of seconds on a query that should be 50 milliseconds, and that's fine, Tempo is built to deal with that. But it's a ton.
A: A huge percentage of the queries to the serverless infrastructure seem to just take a long time, and we really have not been able to pin it down. The other thing is, I don't really want to spend a ton of my time debugging cloud-specific issues and getting some perfectly tuned Google Cloud thing, and then having little experience on AWS or, you know, another cloud, Azure or whatever.
A
So
we're
going
to
probably
re-coordinate
around
the
queriers
only
as
much
as
possible
and
maybe
look
at
maybe
look
at
the
serverless
functions
for
longer
queries
things.
Maybe
we
consider
more
like
a
batch
query
and
we
might
have
to
make
that
distinction
in
tempo
depending
on
you
know
the
amount
of
data
that's
being
queried.
A: I agree, it does feel wasteful to have a bunch of queriers doing nothing. Now, in our cloud offering, that's what we're going to do, because we want to hit our SLOs; that's important and it's worth it. But in terms of tuning something internal and making decisions about how many queriers do I have versus how much I'm going to rely on serverless, I don't know, that's going to be a tough call.
A: We deal with that now, and I think when 2.0 comes out we'll have more guidance, because we're going to have a lot more experience by that time getting these configurables, the tunables, set up. When we released serverless, we had a page that documents our settings and why we chose them; we'll do a similar thing when we do 2.0 Parquet serverless.
A: The big features would be Parquet GA, so the Parquet backend being the default backend for Tempo and GA'd, as well as what we're calling phase one of TraceQL, which Marty is working on now, and it's going to be the basic conditions. Marty, if you want to address that at all, feel free to jump in, and the kinds of things you're looking for for this 2.0.
E: Yeah, sure. Koenraad and I are working on TraceQL now. It's too big of a language to implement in one go, and we don't want to do that anyway; we want to get parts of it out there and see what works and what doesn't. So we've internally evaluated what we consider the core of the language, and that's what we're focusing on and what we hope to have ready. It would just be basic span set selection logic; things you can't do in the current search will be included. So I think there's still a lot of interesting stuff there, but it won't be the full language.
A: Maybe we should, as part of the next community call, highlight some of that. I think we have a doc somewhere, so we can just take the details out of there and talk about what we want in each phase, and once we get rolling with it, we could get some momentum behind it. I think Marty's going to do a lot of groundwork for a bit here to set up the structures necessary to really blast it out, and that'll be done in a couple of weeks.
D: It's a bit higher than what we had; before we started tweaking, it was like 120,000 or something like that. Yeah, we have.
D: What we see is, no, there's no code customization, it's just config changes. What we saw is that with our previous settings we would only do pretty much level-one compaction with the heavy volume. When we started getting a lot more, we would do mostly level-one compaction and a little bit of level two, so we added a lot more compactors.
D: I think we're running like 150 replicas on the busy cluster, and we just regularly cycle through, like use tempo-cli to see what the trending traces per block is, and just increase that. But the time to actually see a real decrease, it takes a few days; it's not like we can be very reactive on that.
D: If the block list does increase, we're going to have to live with the increased block list for, you know, sometimes up to our retention time, which is just seven days. So that's been fine. I think when this happens we have to mitigate the query times in some way, so that users are still able to get to the traces they need.
E: Yeah, compactors, I will say, are not very dense: each pod really only does one compaction at a time, and we've kind of never changed that. I'm not sure, but 150 sounds about right for the scale you were mentioning.
D: Actually, I think what you were discussing earlier about having big ingesters versus small ingesters gives me an idea. I was just thinking, because we had to do this, this is some insight into what's happening on my team.
D: We had to make infrastructure changes to keep traffic in certain zones, basically just to cut cost, so right now we're moving off of Pub/Sub and onto Pub/Sub Lite internally, for moving our trace data from the collectors to our downstream systems, to Tempo and our other storage. And so we could already.
D: We already have a plan; Pub/Sub Lite allows us to partition traces in a certain way, so we can have all the spans for a trace going to a certain place. I think maybe we could leverage that to some extent, if we're already routing the data that's close together to the same ingesters, or the same distributors, and then between the distributors and the ingesters.
D: I don't know how, and actually whether it stays bunched up or not, but maybe this is what I've been thinking about for part of the meeting. Also because, like you said, if the traces stay together, then they won't need to be compacted again later, right, and then we would save on compaction.
A: Yeah, that's kind of what we're seeing while we're dealing with Parquet: just upping the size of the blocks cut by the ingesters is a huge relief to the compactors; you're putting so much less pressure on them.
A: So you are kind of playing a game of how long do I want to keep data in the ingesters, but if you can afford more there and cut fewer blocks, it does significantly reduce the work your compactors have to do.
D: Yeah, I think the balancing game is that our average usage is low, but when we get big spikes of traffic, resource usage just goes way up. So we can't put too much pressure on the ingesters, because then what we see is some OOMing, and that quickly degrades the amount of traces Tempo can receive.
D: If we have two ingesters that are down, we drop a lot of traces, so we don't want to put too much pressure on the ingesters, because we want to make sure we drop as few traces as possible when there are spikes in traffic, and it still kind of happens even though our daily average is kind of high.
A: Cool. What do I feed my hat? I feed my hat smaller hats.
A: Appreciate that. We've worked hard on this thing, and it's awesome to hear people hitting, honestly, multiple times our scale and having that kind of success with Tempo.
A
So
yeah
keep
keep
talking
to
us
and
let
you
know
let
us
know
what
you
need
and
we're
going
to
keep
making
this
thing,
hoping
for
a
really
cool
parquet
launch
soon
next
couple
months,
and
I
would
encourage
everyone
to
kind
of
experiment
with
it-
maybe
like
in
a
dev
cluster
or
like
a
staging
cluster
and
give
us
some
feedback.
And
let's
improve
this
and
make
this
kind
of
the
next
step
for
tempo.
C: Yeah, one more question. I think I saw it at GrafanaCON, I just want to confirm for the next version, 2.0: are we going to have a way to do aggregations within Tempo? So then, if I say I would like to check all traces and aggregate, given that this service has 500s, give me these tags, what are the common values, so then I can know what is causing the issue.
A: Aggregation, like counting spans or doing averages of durations over sets of spans, will be in two point something, something after 2.0.
C: Yeah, so at least the issue that I sometimes see is when there is high cardinality, so I cannot use the normal metrics scenario. So let's say that.
A: All right, well, I think it's been a really good call. Thanks, everyone, for showing up, and apparently we need to do AMAs more often, because this was pretty successful. Next time I expect better questions, though, so, you know, see what you've got. All right, everyone, take care, and we will see you, hopefully, in a month at the next community call, and if not, I'll see you before then. Take care.