From YouTube: Tempo Community Call 2021-10-14
Description
- Upcoming highlights for 1.2 release
- Live search demo and discussion of search progress
- Snazzy space backgrounds
A
We are recording. All right, hi everybody, welcome to another Tempo community call. It's been about a month since the last one; we do these every month. I'm not sure how well it's been shared in the chat, but there is a link shared for the Tempo community call public document, and we'll just start covering things. If anybody has anything they want to talk about, chime in at any time or add it to the agenda.
A
Okay, so the first thing we want to talk about is that we have ObservabilityCON coming up next month, November 8th through the 10th, and there will be a session on getting started with Tempo, so don't miss that. Hopefully there will be some cool things we talk about there, and there are links to those in the community document.
A
Cool. The next big thing is that we have Tempo 1.2 upcoming. We're hoping to release that in the next couple of weeks, by the end of the month, and there are some really cool features in it, so we'll just go through and talk about them and see where we go. The first one is ingester search: being able to search recent traces. This is something we've been working on for a while.
A
We've gone through it on previous community calls, and of course we've talked about it in Slack and things like that, but I can go ahead and start sharing a window and click around, and we can show you where we're going with it and talk about all that kind of stuff. That sounds good, cool, yeah.
A
Yeah, so can everybody read that? Does it look like it's a good size? We'll go ahead and resize it a little bit. Beforehand we've only shared screenshots of this, but this is just running locally, so we'll go through it. What we've added is a tag-based ability to search recent traces: these are traces that are in the ingester.
A
If you know Tempo's internal structure, the ingester is what receives the data. It buffers it for a little period of time, flushes it to disk, and is responsible for flushing it to the backend. In Grafana we've added a new tab for this; it's an experimental Grafana UI. If you've used Jaeger, this should be familiar.
A
It has some dropdowns here for the top-level service name and operation, and then some tags and some other abilities to filter, so we'll go through that. What I'm running here is the TNS demo; it might actually be good to click on that. It's just a little toy three-tier application with some different latency and some other services: there's a load balancer, an application, and a database. Cool.
A
So, just so we know what we're working with here: if I run the query with no filters, this is just a default search. It will start grabbing the most recent traces out of memory, and we can click on some of these; each of them is a link to open the trace in the Explore window.
A
There's some summary information here: when it started, the duration, and the top-level service name. The first step down here is to filter by a service name, and this will find any trace with that service in it. I can run that now, but for this TNS demo pretty much all the traces have that service, so there's not really anything interesting going on there. There is a little bit of different stuff we can do, though.
A
The second dropdown is the span name. Jaeger calls this "operation", but in OpenTelemetry it's actually just the span name attribute. We can click on some of these and find any trace with that data, and of course they pretty much all have the same names. But there are a couple of different things here, so there's one we can maybe look at: a different kind of request that's going through the system, a polling metrics request, and the span name is right here, http.metrics.
A
What we're looking through here is really any of these attributes. There are two levels in the OpenTelemetry standard. One is the span level: these are tags (they're called attributes, but we're calling them tags) and they are unique to each span. Then there are process-level attributes, which are more part of the pipeline or the application itself. Service name is a common one, set in your application, and other things can come from the pipeline: if you're using the Grafana Agent or another collector, it can automatically add tags like the pod name, maybe the cluster, and the Kubernetes namespace. I don't have that here; this is just a Docker Compose setup. Cool, yeah. We have a question here: are the span names and other searchable attributes filtered off the service name or each other?
A
No, they're not. This will find any trace with any of these values anywhere, so they're not actually tied together; it's not looking at the span level. That is something I think we're going to look at, because it would be really useful, and long term we will definitely be taking this more to the span level, whereas right now it's just per trace.
A
I think a lot of that will be coming later, and we can talk about that a little bit. So, other things we can look for: let's see.
A
The tags box is more free-form (I'll get to that question in just a second). The tags are our ability to search all the attributes that are in here, and it's a little bit more free-form. So we could do hostname, and there's some autocomplete here, and we can combine attributes that we want to look up. We can also click on some of these other things, like minimum duration.
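These same tag and duration filters are exposed over HTTP once ingester search is enabled. As a rough sketch (the `/api/search` endpoint shape matches the 1.2 search feature, but the host, port, and tag values here are assumptions for a local single-binary setup):

```shell
# Build a search request against Tempo's experimental search API.
# Tags are URL-encoded key=value pairs; minDuration and limit narrow the results.
TEMPO="http://localhost:3200"
URL="$TEMPO/api/search?tags=hostname%3Dapp&minDuration=2s&limit=20"
echo "$URL"
# Against a running Tempo with search enabled you would fetch it with:
#   curl -s "$URL"
```
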
A
error=true actually does not work, so we need to fix that. Jaeger uses an error tag, just a plain tag, but in OpenTelemetry it's a span status, and we're not actually capturing that yet. Maybe we could create a virtual error tag out of it; that would probably be good. Let's make a note of that.
A
Let's see, let's go back. The other question was: are the suggestions in the dropdowns driven by data in the ingesters as well? Yes, these dropdowns are actually coming from the data in the ingesters. They are inspecting all the unique values coming out of the data as it's received. The values are stored for a certain period of time, and a tag that stops being present once its data is flushed from the ingester will no longer be in the list.
A
So this list is not exactly what's stored in the ingesters, but it will be recent; it's close. And this is totally dynamic: there's no need to configure it, if that makes sense. So this is really a first step.
A
We've shown this before, but this is the approach we're taking for this feature. There are three phases. Phase one is what we're talking about now: the API with the ability to search the ingesters, plus this experimental UI. The next phase would fall back and search the backend, and that's something we're looking at; we might need to change direction a little bit.
A
We've learned a lot with this ingester search. We looked at a certain data format, FlatBuffers, which has no penalty for decoding: the format on disk is the same as in memory, so it's very fast and good for brute-force search. But for the backend we think it still may not be enough, so there are some things we want to look at there. And then, at the same time, we're developing a real query language.
A
We really want to do something that's flexible and robust, kind of like PromQL or Loki's LogQL, and that's in progress; when that comes we'll have the full feature set. But what we didn't want to do is put a lot of that in here now and maybe set expectations for what the language could do, because we want to make sure that what we come up with can be implemented and is going to work well. So I think this is basic and experimental. I think it's useful, but yeah, it's still experimental. Cool. Any other questions or thoughts on this?
A
It will be in Tempo 1.2 as an experimental feature, so you'll have to enable it. There will be two ways to enable it: either a YAML config option or a command-line option, and the release notes will have some more information. Certain components of Tempo, if you're running in distributed mode, have to have it enabled, and for other ones it just doesn't matter; the compactor doesn't do anything with it.
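As a sketch of what enabling it might look like (the `search_enabled` option name is from the 1.2 release material; treat it as an assumption and check your version's docs):

```yaml
# Tempo 1.2: opt in to the experimental ingester search.
# The same setting can also be passed on the command line.
search_enabled: true
```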
A
Now, for the Grafana side of this feature: there is a feature toggle to turn on this tab, and it matters which image it's in. I'm actually running the latest 8.3 pre-release down here (I'm not sure if you can see this); that's probably the best thing to use, because it has the most recent fixes and other things. But I understand that an 8.3 pre-release might be a hard sell or hard to use in your environment. It's very, very new.
A
No, I don't think so; I think this discovers basically everything. And godzilla also has some interesting questions. Yeah, the TNS demo data is not particularly exciting, but it has a little bit. We are running this in our own internal cluster, which is one and a half to two million spans per second, and the data there is a lot more interesting. We were hoping to use that for the demo, but there's a little bit too much stuff in there. So, there are a couple of other little search features that we could mention. There is the ability to control which of these tags are actually recorded in the search data.
A
If you have certain tags that are low-information, they can be ignored, and then they won't be searchable: like this one, the exporter version, and maybe the client UUID. I might not have this set up for my local demo, but tags like that may not be useful, so you can actually save a little bit of resources by ignoring them.
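A sketch of what that deny list can look like (the `search_tags_deny_list` key is my best recollection of the 1.2 distributor config, and the tag names are examples; treat all of it as an assumption to verify against your version's docs):

```yaml
distributor:
  # Tags listed here are not recorded in the search data, so they
  # stop being searchable and save a little ingester memory.
  search_tags_deny_list:
    - "opencensus.exporterversion"
    - "client-uuid"
```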
A
All right, cool. The next thing we want to talk about in 1.2 is a new mode for Tempo. This is something that a lot of people have asked for, and it's really cool. I guess Zach isn't here; he would know more about this. But there are two ways to run Tempo, and soon there will be three. The first way is the single binary, which means it's just a single instance of Tempo that runs everything.
A
It has all the endpoints; it does all the compaction and the ingestion and everything. But it doesn't scale: it's just a single pod by itself, and it doesn't get into a ring and network with any other pods. That's what I was using here for this demo. The other way is fully distributed, where you're running separate deployments for the queriers, the query frontends, the ingesters, and the distributors, and that way you can individually scale each layer of Tempo.
A
Well, there's been a lot of interest in this third mode, and it will be available in 1.2. It's called the scalable single binary.
A
It's like the single binary in that each Tempo instance runs all components, except it's a single deployment where every pod still runs all components, but they talk in a ring. That should be easier to deploy than the distributed mode, so it's really good for people who maybe have outgrown the single binary but don't want to take on all the different stuff for the distributed mode, because that's probably ten times more work. This mode is kind of in the middle.
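As a sketch, the new mode is selected with Tempo's target flag (the `scalable-single-binary` target name is from the 1.2 work; the config path here is an illustrative assumption):

```shell
# Every replica runs all components but joins the same ring, so you
# scale by adding replicas of this one deployment.
tempo -target=scalable-single-binary -config.file=/etc/tempo/tempo.yaml
```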
A
I think that's really cool. Now, we're not using it ourselves internally, so if you give it a shot, we would be interested to hear about it. It's a little interesting, because when traffic comes in for a pod, say the distributor, it will actually be talking to the ingesters; or, say, the query frontend will be talking to a querier, which may be itself, which is kind of interesting, or it could be any of the rest of the pods.
A
So the next thing we wanted to talk about: currently, if you run a query in Tempo and it hits an error on a block, it will actually just fail the whole query. So if you're trying to look up a trace but you have a corrupted block in your backend, it will fail the whole query, and we know that's come up.
A
It's happened for certain people that are using Tempo, and it's probably gotten everybody to the point where you have to go and figure out what's wrong with the block, and probably just delete it. So what we're doing is partial block results. It's a way to smooth over this edge case: if there's a corrupted block, we'll go ahead and return the results that we can.
A
Tempo already handles a trace that is spread out across multiple blocks: it combines and unifies the trace and shows you what it can load. This is the same thing, but it can tolerate some block failures. And, hey Mario, we want to show something in the Grafana UI when that happens, right? Correct?
C
Yeah, I don't think it's merged yet, but we will be working with Grafana so that when we communicate that some block failed and the response may contain partial results, we notify the user that what they're reading is possibly incomplete.
A
Cool, yeah. Do you remember if there are any config options for this, or is it just always on?
C
Yeah, it's a new flag, well, a new config param on the query frontend. I think it's called tolerate_failed_blocks. By default it's set to zero, so the same behavior is maintained as before: if any block fails, it will just fail the entire query. But you can set it as high as you want.
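In config terms, a minimal sketch (the placement on the query frontend follows the discussion above; double-check the exact key against the 1.2 docs):

```yaml
query_frontend:
  # Number of failed blocks a query will tolerate before erroring.
  # 0 (the default) keeps the old behavior: any block failure
  # fails the whole query.
  tolerate_failed_blocks: 2
```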
A
Good, yeah. It's probably most relevant for larger installations, maybe with thousands or tens of thousands of blocks, just because the chance of a failure in one of them increases. Yeah, cool, okay.
A
Okay, all right. The next thing: we have a PR linked here for the search backend work, if you want to take a look at that and see what we're doing. We're going down an alternate path for backend search: we did the ingester search and that works well, but we want to try something a little bit different for the backend, and that's linked here. I'm not sure what else we want to go through on the PR, but we thought it would be interesting to link if you want to look through it. So what we'll do is work independently on backend search and then pair up the two efforts in the middle.
A
All right, cool. The last feature we want to talk about in 1.2 is a new tempo-cli command. I'm not sure if you've used tempo-cli, but it has a variety of debugging and other commands, and what makes it different is that it has a lot of things that work directly against your backend. So if you're storing your blocks in S3 or GCS, you can run the CLI directly against that and do different things.
A
Instead of calling the Tempo API, you can actually just point directly at the bucket, and it will perform all the same activity of finding that trace in the backend: downloading the bloom filters, going through the indexes, and reading the trace out of the backend blocks. It will tell you all the blocks the trace is in, along with the final output. That can be useful for debugging, or, I don't know, maybe there's a cool data processing thing you can do with it. All right, cool.
A
That's,
I
think.
That's
really.
All
we
kind
of
came
came
with
on
the
agenda
to
talk
about.
If
there's
any
other
questions-
or
we
could
talk
about
anything,
is
anybody
everybody's
tempo
install
until
install
is
going
well,
we
can
troubleshoot
some
issues
if
you
have
some
or
yeah
or
just
hang
out
for
a
minute.
B
I've been hitting, even with the latest version of Tempo, the ring problem where the compactor is unhealthy frequently. But I suspect that may just be due to using DNS as the ring on AWS.
B
Yeah, well, it's memberlist, but it's being driven by A records within AWS, so I suspect it's probably related to latency there. But boy is it frustrating, because you'll remove the unhealthy ones, and the new ones will come up and catch some remnant of the DNS record or something before the TTL drops off, and then it just perpetuates.
B
They just kind of randomly get sick and stop responding or something, and then the whole ring goes unhealthy. The one on my Kubernetes deployment is going well, so we're just waiting to shift to that.

A
Oh, I see. Is this on Fargate, or on the other ones?
B
Yeah, and with the way the ring works there, I'm really looking forward to getting off of that deployment.
D
Sorry, this was kind of tying into what you were discussing: we have the same issue with the compactors on deployment. Sorry, the same issue as you, Marty, I believe, where after a deployment old compactors are still in the ring marked as unhealthy, and then we have to go and forget them manually. So we're building automation to kind of do that automatically.
D
But you said that you fixed this issue? We're on a pre-production setup; we're building our own image off of the main branch right now to try to get some of the early search features, but we're still having the issue.
A
Yeah, so I think we had settings that were working well for us internally, and I think we made those the memberlist defaults. Out of the box there are certain things in the memberlist code we're using that aren't normally enabled, but they work well for us, and I think we had Tempo set those by default. Maybe what we can do is look those up and share them in Slack; there are different settings for gossip and propagation and other timeouts and things.
A
Okay, well, hey, here's this link here. Thanks. The defaults are there, and that documentation should be generated from the code, like the manifest is, so hopefully it'll be correct. But maybe take a look at that, and we can dig into it; if you want to reach out to us, we'll keep digging. Yeah.
A
It was all in 1.1. There was an actual bug in the memberlist code that had to do with tombstone propagation, where if the tombstone came along with an update, it would just keep propagating it. But I think that was fixed in 1.1, and I think that's the latest; I don't think we have anything beyond that.
D
Distributors are okay, but ingesters were having a weird issue where, really infrequently, we'll have an ingester that gets stuck in crash loops forever, and the only way to get it fixed is to delete the PVC; then it comes back correctly. It's kind of disruptive when it happens: two ingesters will get into that state, and the volume we're able to accept drops significantly at that point, and it requires manual intervention right now. So we're also building automation to kind of help with that. I don't know if we have anything on that.
A
Well, I'd be interested to know if the crash loop is maybe a bug or something. But there are other things that will look like a crash loop: a bad configuration will cause the pod to exit and kind of look like a crash loop.
A
Yeah, ingesters can use a lot of memory. They store all the traces in memory for a certain period of time, and I think ours have 10 or 15 gigabyte pod limits, which is pretty hefty. But there are some settings you could maybe tweak if your workload is a little different, like flushing traces from memory a little bit faster if you want to use less memory; it would create more, smaller blocks to compact, but that might be an option.
A
Yeah, okay. So Edinardo says they randomly see the following error; let's see.
A
"Unable to find meta during compaction." What that normally means is that a different compactor got to that block first. What happened is: it found the block, it was a live block, and it was up for compaction, so it was going to compact it, but by the time it got to it, the metadata was gone. So either something else compacted it already, or maybe the block was deleted out of the bucket.
A
If the block was deleted out of the bucket, it's possible that the retention period for the bucket is conflicting with Tempo's retention. I think we recommend padding it out by a day: you can have an S3 lifecycle policy to clean up and make sure the bucket doesn't have any stuff left over, but if you set Tempo's retention to two weeks, set the bucket to something like 15 days, so there's a little bit of a buffer.
A
Otherwise, I would look in the compactor ring and see if there are any unhealthy entries, or just make sure everything else looks good, and see if there are maybe two compactors that both think they own the same part of the ring. That would be something to look into.
A
As long as compaction seems like it's working, it's probably okay. The compactor will try again: every 30 seconds or so it looks at the block list and finds more opportunities to compact, so if it misses a block once, it'll catch it on the next pass.
A
It's okay, but there is something else in 1.2: there are a few changes in this area that make compaction a little bit more critical. If you're running in distributed mode, the compactor will also be building the tenant index; that was actually in 1.1, but then in 1.2 we changed a little bit more, where we're sharding it across compactors. So it's important to have healthy compactors. All right.
A
Yeah, we can do that. All right, thank you everyone for coming. This was good; these questions are great. I love hearing all the ways Tempo fails: let's make it a better thing, so let's keep working on it. That's great, and if we can't fix it, we'll keep trying. So, just to recap: 1.2 we're targeting by the end of the month, so that's in a couple of weeks, and it'll have all these things we talked about. Cool, and ObservabilityCON is next month.