From YouTube: Tempo Community Call 2021-11-11
Description
Overview of ObservabilityCON 2021 Announcements and Tempo 1.2 release.
A: This is the November Tempo community call, maybe the fifth or sixth we've done, which is kind of cool. I think I was supposed to have dyed hair at this one, or maybe the last one, but I did not do that because the vote was rigged. I don't know if you want to review the previous month's community call, but basically Ritchie cheated to try to get me to dye my hair, so I did not honor that vote. Anyways, today we're coming off some cool stuff: the ObservabilityCON session, where we talked about some future stuff. In fact, all of the presenters except for Jen are here, Koenraad and Mario, so we can dig deep into search, or service graphs, or some of the feature plans. We have a lot of the people who were instrumental in getting that together here.
A: So ask if you have questions about that. We also released 1.2, and we can get into some of the details there as well: talk about whatever, you know, the release notes, or questions about what's going on, or how to enable search. We also have a known issue, and we'll review that.
A: So there's an attendees list; check yourself in there if you don't mind. And there's also an agenda; feel free to add items there, and at some point we'll just open it up and say, you know, feel free to ask anything you'd like. There's no scripted presentation here, so this is conversational. Feel free to chat in the Slack, or unmute and chat if you're feeling brave, whatever's fine, and we're here to answer your questions. This is meant to be kind of like an office-hours kind of session. Hey, Andre.
A: Meant to be kind of an office-hours session, so it's meant to be casual more than formal. All right, I'm Joe, and I think we have a couple of other maintainers here: Rod and Mario, and also Daniel, who really likes board games. At least that's the only thing I know about him.
A: But I guess I don't know anyone else. William, you've done a couple of PRs into Tempo, yeah. I think you've contributed quite a bit.
B: And I just started with Shopify on observability stuff, so I just thought I'd drop by and see what's up, yeah.
A: Cool, yeah. I remember you were talking about, yeah, you tried to do the uncompressed block size / page size change so that we could pre-allocate the buffer correctly, yeah.
A: No doubt, that would have been a hard one. I don't even know how I'd do it immediately; it would take me deep-diving the code and thinking about a couple of pieces. But it was cool that you dug into those pieces, you had some good questions, and it was cool to see somebody else even looking at that code. I think myself, maybe Annanay and Marty, and yourself, William, are the only ones who have really gone line by line through that piece of the code base. But cool, all right.
A: Let's get into the stuff a little bit here: ObservabilityCON. I guess we won't go back through the whole thing, but why don't we connect with our various presenters here, without doing a huge presentation or worrying about slides?
A: Just give a kind of summary of your part of the talk; that way people can get up to date on some of the stuff we announced there. Do you want to start? We'll go in the order we did the presentation. I'm kind of throwing you under the bus here, or putting you on the spot, I suppose.
D: No, sure, yeah, it's fine. So yeah, I started the presentation during ObservabilityCON, and I presented Tempo search, which is, you know, one of the newest features we're releasing with Tempo 1.2.
D: So we added the ability to Tempo to search for recent traces using tags, and during the presentation I also did a demo, so you can see a bit how it works there. But basically it's possible now to search, like, the last 15 or 30 minutes of traces that have been ingested, and you can search on them using tags that are present somewhere in the trace's spans. And yeah, it's available in 1.2, so you can try it out today.
A: I'm going to put the 1.2 blog post and the docs in here.
A: In the docs it actually has a link, where we mention there are some flags you have to set. We still kind of consider these experimental features, honestly; once we get the full backend search complete, I think we'll go ahead and drop that.
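(For reference, a minimal sketch of what those flags can look like. The option and toggle names below are assumptions drawn from the Tempo 1.2 era docs, not from this call; check the linked docs for the authoritative names.)

```yaml
# tempo.yaml -- hedged sketch: enable the experimental search feature (Tempo 1.2)
# The key below is assumed from the docs of that era; verify before relying on it.
search_enabled: true

# And the matching Grafana feature toggle, e.g. as a container env var:
#   GF_FEATURE_TOGGLES_ENABLE=tempoSearch
```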
D: It's marked as experimental, which, you know, doesn't mean it's unstable: we use it ourselves all the time, and at scale, so you know it works. But we expect to see a lot of changes there. We will be adding a full backend search, and we're also working on a native query language, TraceQL or something like that. So what we have today is, you know, not the final goal; everything will change.
A: Yeah, no doubt, we'll definitely break some stuff along the way. I did link the blog post there, under the Tempo 1.2 item, so go to the agenda doc, the meeting doc, and you'll see it. If you look in there, we talk about recent trace search, and there are some links that'll show you how to arrange the flags. Another link I guess I should share; I kind of forgot. I think I did this. Somebody did this.
A: If I didn't do it, somebody did, I guess. But we also have a docker-compose example, and it sets all the flags as well, in Grafana as well as in Tempo. Did I do this? Who did this?
D: It shows the Grafana config and the Tempo config, so it's a good example. You can just spin it up and try it out.
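(A hedged sketch of the rough shape of that compose example; the image tags, ports, and file names here are illustrative assumptions, not copied from the linked repo.)

```yaml
# docker-compose.yaml -- illustrative sketch of a Tempo + Grafana search demo
version: "3"
services:
  tempo:
    image: grafana/tempo:1.2.0
    command: ["-config.file=/etc/tempo.yaml"]
    volumes:
      - ./tempo.yaml:/etc/tempo.yaml   # tempo.yaml sets search_enabled: true
    ports:
      - "3200:3200"                    # Tempo HTTP API
  grafana:
    image: grafana/grafana:8.2.0
    environment:
      - GF_FEATURE_TOGGLES_ENABLE=tempoSearch   # turn on the search UI
    ports:
      - "3000:3000"
```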
A: Man, docker compose, of course, is a great way to just, like, super fast get a couple of services up and get an example going. So hopefully, between the docs and that compose example, if you want to fool with this, you'll have the ability to.
A: And yeah, give us some feedback. We're marking it experimental for a number of reasons; one is so we can be a little flexible for the next couple of months, you know, while we get full backend search together. So if you do spend some time with it, if you do experiment with it, please give us feedback to help us improve it and make it better. As long as it's experimental we have a lot of flexibility, so this is the time, basically, to get some feedback in and help us make it what you need. And yeah, we're actively building the full backend search now, basically. Cool. Mario, can you give us the service graph thing, just kind of a short rundown of our work there and the future work?
F: Sure. Yeah, so during our session I went a bit over service graphs, which is a feature we've been working on. It currently resides in the agent, and essentially it's a visual representation of a distributed system, all its services, and how they relate to each other. In essence, what we do is derive metadata from the traces as we collect them, and we build metrics with which we construct a graph.
F: So with it you can infer the topology of the system, get like a high-level overview of the health of all the services, and kind of see a historic view of the topology and the health of the system as it has evolved over time. Yeah, it's currently in active development; we're polishing the visualization a bit, mainly. Recently we introduced some performance improvements which had successful gains in reducing CPU usage and memory usage, which was very cool.
F: Yeah, so it's hidden behind two feature flags, kind of. One is in Grafana: you have to enable a feature toggle called tempoServiceGraph. And then, to run it, to generate the data that composes the service graphs, you'll have to enable the feature in the agent, which is where all the processing happens.
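(As a hedged sketch of those two toggles: the agent key names below are assumptions based on the Grafana Agent docs of that era, and the emitted metric name in the comment is likewise an assumption.)

```yaml
# grafana-agent.yaml -- illustrative sketch: service graph generation in the agent
traces:
  configs:
    - name: default
      receivers:
        otlp:
          protocols:
            grpc:
      remote_write:
        - endpoint: tempo:4317   # wherever your Tempo distributor listens
          insecure: true
      service_graphs:
        enabled: true            # derives edges from spans and emits series
                                 # such as traces_service_graph_request_total

# Pair it with the Grafana UI toggle, e.g. as a container env var:
#   GF_FEATURE_TOGGLES_ENABLE=tempoServiceGraph
```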
A: Do we have, do you have good docs on that?
A: Apparently not? Okay, we should turn that on. And if we're going to release it in the agent, let's turn it on in Grafana Cloud; let's work with the team on that. Okay, so, let's see. I think we have a little bit of work to do on the front end. Our primary front-end engineer, Grafana engineer, is on paternity leave, so it might take us a little bit to get all the changes we wanted in before we release it. But we will get that together in Grafana Cloud, Mario; let's just talk offline and figure out the best way to roll that out. Which also, to take a step back to search: we are rolling that out on Grafana Cloud as well, and search will roll out with 8.3.
A: Oh okay, that's farther off than I thought. Okay. But as 8.3 rolls out, search will be available in Grafana Cloud, and so you'll be able to kind of mess with this recent trace search stuff we've been working on.
A: That's true. So if anyone on this call wants to experiment with it now, keep in mind you're upgrading to a beta, basically. But if that's acceptable to you and your org, or you have a personal account you want to fool with it on, you know, feel free to reach out. I think this is a small enough group; I don't mind flipping a couple of feature flags to get you all experimenting with it earlier. Cool. Oh, to the service graph news: that work is done in the agent right now. So the agent generates these metrics, it then pushes them up to Grafana Cloud, and then we build our service graphs off those metrics. We do have some future work there as well.
F: Right, yeah, of course. So, as you just said, the processing now happens in the agent, and this has a couple of, let's say, issues, which is that the setup is not as easy as it would be if it happened in the backend. So what we want to do is move this processing to Tempo, or the backend in general, and make this feature an integration in Grafana Cloud, so you don't have to configure the agent, the Prometheus backend, Grafana, Tempo, all of that.
F: It would just kind of automatically happen on our end, in the cloud. And this will be a component with other features that we have in the agent, such as span metrics, which is an OTel processor that generates metrics and, again, derives metadata from the ingested spans. Yeah, and there's other future work we might decide to move as well.
A: Yeah, I think it's also pretty cool that all this work is going into open-source Tempo, so this will just be PRs. So if you're using that, it'll all be available for you, and then we're going to roll it up to Grafana Cloud, so you'll just kind of start magically getting these service graphs and span metrics and some of these other data points, which hopefully will be a nice addition to our cloud offering.
A: Cool. Anything else from ObservabilityCON worth discussing?
D: You can talk about GET (Grafana Enterprise Traces), or is that for another call?
A: I don't know, GET, sure. GET is our enterprise Tempo. So if anyone here represents, like, a massive bank, and you want to spend millions of dollars on enterprise Tempo, then feel free to reach out. I don't think that's really who we are here on this call, but if it is, we have an enterprise version of Tempo with support and some other nice features, like federated querying, basically, so you can push to a bunch of small Tempos and query across them. I don't think that's what this call is about, but if any of you all do have that need, we have the product for you.
A: I do want to talk about the performance, though, because that was kind of a cool story for 1.2; I personally have been somewhat obsessed with it. I'm going to put this graph here, because I spent, and I'm not a graphics person or graphic designer, I spent way too long making this graph, basically. So this was kind of our performance across the life cycle of 1.2.
A
It
started
off
with
1
1
and
that's
the
number
of
spans
we
can
process
divided
by
cpu.
So
it's
like
a
measure
of
performance
and
1
1.
We
had
this
baseline
level
as
we
were
building
search
it
all
tanked,
because
you
know
we're
adding
features,
we're
doing
way
more
work.
We're
you
know,
writing
inefficient
code,
just
get
a
feature
in
and
then
kind
of.
I
know
improving
on
the
other
side
and
then
kind
of
towards
the
very
end
things
really
wrapped
up
and
came
together
and
1.2.
A
A
search
on
is
more
performant
than
1.1,
which
I
think
is
amazing,
because
we're
actually
doing
some
double
work
and
we're
doing
that
on
purpose.
We're
kind
of
like
doubling
up
our
rights
a
little
bit
to
create
two
different
formats,
one's
the
old
trace
bid
format
and
the
other
is
this
new
searchable
format
and
it's
kind
of
giving
us
this
ability
to
experiment
with
a
searchable
format,
while
keeping
the
old
code
paths
stable
and
then
1.2.
A
A
search
off
is
like
double,
basically
the
efficiency
of
1.1,
which
is
amazing
to
me,
because
I
thought
1.1
is
actually
pretty
good,
but
apparently
we
had.
We
had
a
lot
of
room
to
improve,
but
anyways
search.
I
think,
as
we
improve
search,
I
think
I'm
not
going
to
say
it's
going
to
be
in
1.3
because
we
might
choose
to
cut
1.3
earlier,
just
to
get
some
bug,
fixes
and
smaller
features
in
and
it
might
end
up
in
1.4.
A: That's the search, with, you know, the new search and the old trace-by-ID lookup search.
A: I also think our latencies have gotten really good as well. At 1.1, I think it was, or 1.0, our p50 was like 2.5 seconds, which is slow; you know, that's noticeable: you hit a query and then you're watching for two and a half seconds until it comes back. Our p50s now are like seven or eight hundred milliseconds, which I think is great for finding a trace among the 1.2 billion traces in our backend.
A
So
I
think,
I'm
quite
proud
of
the
performance
both
in
ingestion
as
well
as
our
like
query
performance.
I
think
it's
a
great
trace
base
id
engine
now
cool.
So
that's
roughly
observabilitycon.
A: Yeah, that's fair, and I think that's kind of what the internal team has voiced as well. It adds a little bit of overhead to cut a release, but it's not terrible. You know, we make a blog post and we do some release notes, and it's like half automated and half manual, basically.
A: So maybe we could tighten up what we can tighten up, and then kind of improve the release cycle. I think we're at roughly every two months, although this one, 1.1 to 1.2, was more like two and a half months rather than two.
A: So if we did cut it down, it'd probably be about once a month that we'd have to get our release cycle down to, or once every five to six weeks, maybe, would be a goal. Let's chat about that as a team and see what we think. It's also kind of a weird period of the year, because, you know, you're going into the holidays.
A: It makes it a little bit harder to get everybody coordinated and cut a release, since everybody's out, basically. But we'll figure that out, and thanks for the feedback, Andre; we'll see if a tighter release schedule is in the cards.
A: Cool. 1.2: we've got the blog post there, and it covers a lot of what we talked about. The one thing maybe that's worth adding is this kind of new operational mode, which is a scalable, like, single binary. We have always deployed it as four separate components that are all independently scalable; my bad, five separate components.
A
So
we've
introduced
this
new
method
of
deploying
that
loses
some
of
that,
but
is
more
operationally
simple,
it's
kind
of
an
in-between
option
and
basically
it's
the
ability
to
scale
a
single
binary,
a
single
like
process
horizontally
and
the
single
processes.
All
those
pieces
in
at
the
query
front
end
the
query,
the
distributor
and
the
adjuster
and
compactor.
So
all
five
pieces
are
in
one
process,
and
you
can
just
say
I
want
six
of
these
or
two
of
these,
or
you
know,
scale
up
and
down
does
make
sense.
A
For
you
know
your
load
basically
there's
a
link
in
in
the
blog
post.
If
you
go
check
that
out,
there's
a
link
there
and
the
link
kind
of
brings
you
to
some
of
the
docs
and
docker
compose.
Also,
of
course,
because
that's
how
we
communicate
config
I'll,
in
fact
I'll
put
the
docker
compose
in
here
directly.
A
This
is,
let's
see,
search,
docker,
compose
scalable,
another
just
kind
of
good
example
on
how
to
how
to
kind
of
configure
this,
and
this
might
be
like
a
good
mix
for
you
and
like
your
needs
again,
if
you're
doing
like
a
multi-million
spans
per
second
enormous
production
cluster,
my
recommendation
would
be
to
stick
with
the
fully
distributed
mode.
But
if
you're
somewhere
in
between
this
might
make
sense-
and
this
might
be
good
stepping
stone
to
get
to
the
to
get
to
the
more
complex
operational
mode.
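(A hedged sketch of the idea: the -target=scalable-single-binary flag name is assumed from the 1.2 era docs, and the replica mechanism shown is illustrative; the linked example is the authoritative version.)

```yaml
# docker-compose.yaml -- illustrative sketch of the scalable single-binary mode
version: "3"
services:
  tempo:
    image: grafana/tempo:1.2.0
    command:
      - "-target=scalable-single-binary"   # all five components in one process
      - "-config.file=/etc/tempo.yaml"
    volumes:
      - ./tempo.yaml:/etc/tempo.yaml
    deploy:
      replicas: 3   # scale the whole binary horizontally instead of per component
```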
A: Doing well; just going over ObservabilityCON, basically, and 1.2. Anything you want to add, any announcements, any big changes from you?
A: Sure, yeah, let's get to that. There's one thing we should add before we move on to that, which is kind of the plans for 1.3, Tempo 1.3. Hey Mario, take care bud.
A: So we don't know why this is happening, but we will release a 1.2.1, hopefully soon, that has at least a patch to get past it, if not a root-cause kind of fix. Marty is on that, and that's kind of what I was hoping he could give us an update on on this call, but no big deal.
A
If
you
do
see
this,
please
let
us
know
because
then,
if
you
can
share
some
information,
we
might
help
us
it
might
help
us
track
it
down.
Like
I
said,
we
have
an
enormous
different
ranges
of
trace
sizes
and
attribute
types,
and
I
really
just
I
don't
know
zack
basically
stumbled
across
the
magic
combination,
that's
creating
this
issue
and
nobody
else
can
seem
to
do
it
so
we'll
at
least
get
it
patched
up
for
one
two,
and
if
you
do
see
it
yeah
please
share
any
information
you
have.
E
There's
one
more
bug
the
bug
fix
was
just
merged.
Today,
it's
the
she
was
like.
We
have
more
bugs,
but
yeah.
It
was
the
incorrect
initialization
of
max
bytes
per
trace
limit
and
instead
of
setting
that
to
the
max
search
bytes
per
trace,
I
have
set
it
to
max
bytes
per
trace,
so
the
bug
fix
was
just
merged,
but
if
you've
upgraded
to
one
two
already,
you
might
see
some
error
logs
in
the
ingester
that
say
trace
too
large,
because
the
limit
dropped
from.
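(For context, a hedged sketch of the two per-tenant limits that were mixed up; the key names are assumptions based on Tempo's overrides docs, and the values are purely illustrative.)

```yaml
# overrides.yaml -- illustrative sketch; key names assumed, values made up
overrides:
  "*":
    max_bytes_per_trace: 5000000        # limit on a whole trace (~5 MB)
    max_search_bytes_per_trace: 5000    # much smaller limit on per-trace search
                                        # data; initializing this from the wrong
                                        # field made ingesters log "trace too large"
```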
A: Let's see, 1.3. What we should do here is share our milestones. We've been trying to use milestones, and I think we've done a pretty good job, as a way to kind of communicate what we're working on now, what we consider priorities, and what we want in the next release. So here's the current 1.3 milestone.
A: There's not a ton in here right now, and honestly, what we need to do is make a backend search issue and put it in here, and kind of do the same thing we did with the ingester search readiness issue. I'll make a note for myself to do that.
A
If
I
can
search
issue,
it'll
kind
of
be
like
a
meta
issue
with,
like
you
know
this,
these
are
the
five
or
ten
things
we
want
to
get
done
before
we
release
fullback
and
search
in
one
three
like
I
said
that
one
might
slip,
depending
on
our
desire
to
just
cut
a
one
three
early
and
get
a
couple
bug
fixes
in
maybe
a
feature
or
two
or
we
might
just
hold
one
through
a
little
bit
longer
and
make
sure
we
get
that
right.
A
I'm
not
gonna
commit
to
either
way
right
now,
but
that
is
kind
of
the
focus
of
the
team
is
getting
this
fullback
and
search
together
over
the
next
quarter
is
kind
of
our
internal
goal,
but
other
things
that
are
worth
mentioning
this
unhealthy
compactors
that
do
not
leave
ring.
This
is
another
one
we
just
don't
see,
but
has
a
fair
amount
of
traction
from
a
few
people.
A
If
anyone
has
information
that
to
help
us
reproduce,
it
would
be
great
to
hear
that
I
I've
collected
a
lot
of
information
in
this
issue
thread
and
there's
just
still
no
like
smoking
gun,
like
obvious
reason.
Additionally,
this
is
something
else.
We
don't
see.
We
roll
our
compactors,
probably
multiple
times
a
week.
A
We
have
seven
or
eight
environments.
We're
constantly
deploying
weekly
to
every
single
one,
and
our
ops
environment
in
particular
has
a
lot
of
churn,
and
this
just
doesn't
happen,
which
is
just
really
weird,
that
we
we're
struggling
to,
pin
it
down
so
so
we,
what
we're
going
to
do
at
least
is
release
the
move.
A
The
compactors
to
a
new
ring
in
cortex
and
the
new
ring
will
allow
automatically
dropping
an
unhealthy
compactor
where,
as
the
current
ring
does
not
so
this
will
require
some
internal
code
changes,
and
it
has
this
new
feature
and
it's
just
like
if
a
compactor's
been
holding
for
five
minutes,
just
everybody
forget
that
it
exists
forever
and
it
will
at
least
kind
of
band-aid.
This.
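(A hedged sketch of where the compactor ring lives in the config; the key names are assumptions from Tempo's docs, and the auto-forget behavior described above ships with the new ring code rather than a user-facing flag.)

```yaml
# tempo.yaml -- illustrative sketch; key names assumed from the docs
compactor:
  ring:
    kvstore:
      store: memberlist   # compactors register themselves here; the new ring
                          # can auto-forget members that stay unhealthy
```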
A
So
that
the
people
who
are
seeing
it
will
have
this
five-minute
period,
where
there'll
be
this
compactor,
that's
unhealthy
and
it'll
go
away
right
now
you
have
to
kind
of
manually,
go
and
forget
it,
which
is
a
little
frustrating
I
think
operationally,
but
we
will
get
that
fixed
up
and,
and
then
at
least
you'll
have
this
like
auto,
forget
path
and
it'll
ease
your
operations,
even
if
we
can't
figure
out
exactly
why
it's
happening
in
some
environments
and
not
ours,.
A: Yeah, honestly, that's the only thing listed in here that's worth discussing; there are some smaller things here. We should probably put more in 1.3, guys. What do you all say?
A: Cut it now, send it; that's good. Search we're going to try to get into 1.3. I don't want to commit the service graph server-side metrics stuff; I think we need to look a little bit more at what that timeline might look like. For me, the bigger emphasis is search. But Mario's gone, so Koenraad, do you want to weigh in on service graphs, and maybe whether we're going to think about 1.4 for that, or if 1.3 is an option?
D
Yeah
we're
currently
starting
with
the
design
of
this,
so
we're
thinking
about
how
we
can
integrate
this
within
tempo.
You
know
in
a
smart
way
that
doesn't
you
know
destabilize
the
other
components
so
result
the
design
phase
and,
depending
on
you
know
which
design
we
pick
it
yeah
some
parts
might
end
up
in
1.3
or
later
yeah.
That's
a
bit
unclear
at
the
moment.
To
be
honest,
yeah
we'll
have
an
updated
next
community
called
probably.
A: Still in the design phase, so it's hard to commit to 1.3; that's a good point. Whereas search, I wouldn't say it was heavily designed, but at least the path forward is clear. With service graphs I think there are still a lot of open questions, so we'll definitely not put an issue in here for that, perhaps parts of it. At the very least there'll be a design doc within the next, you know, month or so, for sure. But let's do an update at the next community call.
D
But
yeah
the
other
goal
would
be
to
to
you
know,
be
able
to
generate
server-side
metrics,
like
you
know
this
quarter,
so
it
could
be.
I
guess
in
two
releases
then,
hopefully.
A: Well, I think I've roughly covered everything I thought we should. Does anybody else have any ideas, maybe other content, other things that are worth talking about, or questions about the product or plans, or really anything you want? You know, what my favorite color is, how many dogs live in this house... how many dogs live in this house? Zero, I have no dogs.
E: I was just going to say: Tempo search is in Grafana Cloud now, so do check... did we mention that already? Okay.
D
Yeah,
so
search
will
be
in
grafana,
we'll
be
rolling
out
with
grafana
8.3,
which
is
like
somewhere
in
december,
but
we
are
running
search
in
the
tempo
clusters.
So
if
you
know
which
endpoints
are
it,
you
can
already
use
search
or
you
can
use
your
own
graphic
instances.
Instance
in
that
case
yeah.
So
I
guess.
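(As a hedged sketch, this is roughly what pointing your own Grafana at a cloud Tempo stack looks like via data source provisioning; the URL and credentials below are placeholders, not real endpoints.)

```yaml
# provisioning/datasources/tempo.yaml -- illustrative sketch, placeholder values
apiVersion: 1
datasources:
  - name: Tempo Cloud
    type: tempo
    url: https://<your-tempo-endpoint>.grafana.net   # placeholder
    basicAuth: true
    basicAuthUser: "<instance-id>"
    secureJsonData:
      basicAuthPassword: "<api-key>"
```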
A: There is an open offer, certainly, to anybody on this call, right: if you do want it enabled, we can turn that on in your instance. We don't want to, like, I don't want to broadcast that in a giant blog post that everybody sees, or post it publicly, because I don't want a billion requests to turn on a feature flag that is kind of experimental. But certainly anyone who's this tight with Tempo and is working with the product and wants to have some fun with it.
A: If you can just give me your URL, it'll be like something.grafana.net; if you can give that to me, I can get that set up for you.
A: Oh okay, not GA yet, my bad. All right, I will set this up later today. If you're on our public Slack, feel free to just DM me if you don't see this in a day or so, and I'll make sure it gets together. But I'm going to set it up this afternoon, and then, yeah, if you don't see it, just ping me and say "Joe, you promised me to do this, where are you?" and I'll get it together. Cool, yeah, thank you.
A
All
right:
well,
it's
been
a
fun
community
call
come
back
in
next
month
and
we'll
be
looking
at
yeah,
probably
a
little
bit
more
a
detailer
on
one
three,
and
what
we're
trying
to
release
in
that
search
should
have
moved
some
as
well
and
we'll
have
maybe
some
more
full
back-end
search
may
have
moved
some
and
we'll
make
up
some
more
updates
on
that,
as
well
as
the
a3
release
of
grafana,
oh
cool,
all
right,
you
all
take
care,
and
I
will
see
you
in
about
a
month.