From YouTube: Grafana Tempo Community Call 2023-05-11
Description
Join our next Tempo community call: https://docs.google.com/document/d/1yGsI6ywU-PxZBjmq3p3vAXr9g5yBXSDk4NU8LGo8qeY/edit
What was discussed:
- Tempo 2.1.1
- Tempo Operator moved to github.com/grafana/tempo-operator
- Quarterly plans
- Dynamic Metrics from spans!
- vParquet3 performance improvements
- Streaming search
A: Welcome to the Tempo community call, May 2023 edition. It's almost summer, well, for the Northern Hemisphere, I suppose; maybe almost winter for some of you. Today we've got a somewhat short agenda: we're going to talk real quick about 2.1.1, which came out shortly after 2.1 and fixes a pretty critical bug, so I'd recommend being on 2.1.1 if you can. The Tempo operator recently moved, and I see we have one of the maintainers of the operator here, if they want to say anything; we moved it just in the past couple of weeks. And we'll talk about some of the work our team is going to be focusing on this quarter. Let's talk about that a little bit, I guess. Cool, and I think I might be volunteered to do everything.
A: An opportunity? Not at all? All right, all right, I can handle it, if that's the way it's going to be. Thanks, Sutter. Cool, so 2.1.1 came out. Let's get a link in there; can somebody throw a link in that doc to the 2.1.1 release notes? 2.1.1 is one commit after 2.1. Somebody in the community noticed an issue where, occasionally, Tempo at storage time would flip a boolean value from false to true.
A: We tracked this down to our upstream dependency, the parquet-go library. It reuses buffers, for obvious reasons, so it doesn't constantly allocate memory, and it was not correctly overwriting the buffer when it got a new vector of booleans to write to the backend. So we discovered that, tracked it down, fixed it upstream, got it into Tempo, and then got it into 2.1.1. So I heavily recommend you use that, especially if you have any boolean attributes on your spans, because otherwise they might sometimes get flipped from false to true.
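The bug class described here (a reused buffer that isn't fully overwritten, so stale values from the previous batch leak into the next one) can be sketched in Go. This is an illustrative pattern only, not the actual parquet-go code; the `encoder` type and method names are made up for the example:

```go
package main

import "fmt"

// encoder reuses an internal buffer between writes to avoid allocations,
// mimicking the class of bug described above (not the real parquet-go code).
type encoder struct{ buf []bool }

// writeBuggy copies the new values in but keeps the buffer's old length,
// so when the new batch is shorter, stale values from the previous batch
// leak into the output (a batch of falses can grow trailing trues).
func (e *encoder) writeBuggy(vals []bool) []bool {
	if cap(e.buf) < len(vals) {
		e.buf = make([]bool, len(vals))
	}
	copy(e.buf, vals)
	return e.buf // bug: length never truncated to len(vals)
}

// writeFixed re-slices the buffer to exactly the new batch's length first.
func (e *encoder) writeFixed(vals []bool) []bool {
	if cap(e.buf) < len(vals) {
		e.buf = make([]bool, len(vals))
	}
	e.buf = e.buf[:len(vals)]
	copy(e.buf, vals)
	return e.buf
}

func main() {
	e := &encoder{}
	e.writeBuggy([]bool{true, true, true})
	// Second, shorter batch: a stale true leaks in.
	fmt.Println(e.writeBuggy([]bool{false, false})) // [false false true]
}
```

The fix is the one-line re-slice: always reset the buffer's length to the incoming batch before copying.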
A: In other news, the operator was also moved. Thank you to Red Hat, as well as the various maintainers there. Andreas, is that how you pronounce your name? Andreas? Yep, cool. And Pavol. Both have worked on this quite a bit and donated it over to Grafana. We appreciate that, and they will remain maintainers on it.
B: Nothing in particular, except I hope next week I'll get time to start working on a blog post, which, yeah, hopefully can introduce the Tempo operator to the community, and it would be awesome if we could publish it on the Grafana blog, maybe.
A: Yeah, I think we can. I will talk to the blog people, but they're always excited about, you know, anything about our products, of course.
A: The Tempo operator, we'll get that on the blog. Honestly, where was I... somewhere on Reddit? And someone... it was just a thread about tracing and Tempo. Oh no, it was on Hacker News, but they were talking about how difficult it can be to deploy some of these databases when you don't have a lot of experience, like, developing them, and somebody mentioned that they were using the Tempo operator and it worked really well. So I thought that was a cool random callout for the operator, that it had made their life really easy deploying Tempo.
A: Finally, and we're just flying through these, I don't know if this is going to be a particularly long one. So, we did some quarterly planning. Our quarters, I think, are a bit offset, by like a month, from normal human people's quarters, but we talked about the work we want to do this quarter and I thought I'd pass it along to this group here. Of course, performance is a big focus...
A: ...this quarter, as well as some extensions to the metrics generator, which we've kind of teased, and I'll have a kind of short demo here today of a way to do dynamic grouping and dynamic percentiles on your spans, which is really cool. Performance-wise, we're going to cut a vParquet3, and what this is going to do is... I think, people, you've even been...
C: Okay, yeah, so, okay, so vParquet3. This is still in the design phase, so there aren't too many specifics yet, but essentially, right now with Parquet, we need a schema that predefines, well, basically how data is going to be laid out. So we have a combination of dedicated columns and then an array of key-values. Some dedicated columns are straightforward, which are the intrinsics, like name and so on, and then we have a set of dedicated columns of attributes that we...
C: ...initially deemed more important than others, basically because we think they are the most common, the ones that are going to be searched the most, like service name and namespace. But everything else is sent to an array of key-values. So any time you need to search by a specific attribute that is not part of these dedicated columns, you're going to go through the entire array searching for that attribute, which is way slower than just pulling one column.
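The tradeoff being described can be sketched in Go with two toy layouts: one where an attribute lives in its own contiguous column, and one where every attribute of every span is flattened into shared key/value columns. This is illustrative only; it is not Tempo's actual Parquet schema, and the field names are invented for the example:

```go
package main

import "fmt"

// genericBlock flattens every attribute of every span into shared key/value
// columns, so finding one attribute means scanning all pairs of all spans.
type genericBlock struct {
	spanOf []int // spanOf[i]: which span key/value pair i belongs to
	keys   []string
	vals   []string
}

// dedicatedBlock gives one "important" attribute its own column:
// httpTarget[i] is span i's value ("" if unset).
type dedicatedBlock struct {
	httpTarget []string
}

// searchGeneric returns the span IDs whose attribute `key` equals `val`.
// Every key/value pair in the block must be visited.
func searchGeneric(b genericBlock, key, val string) []int {
	var out []int
	for i := range b.keys {
		if b.keys[i] == key && b.vals[i] == val {
			out = append(out, b.spanOf[i])
		}
	}
	return out
}

// searchDedicated scans a single column: one comparison per span.
func searchDedicated(b dedicatedBlock, val string) []int {
	var out []int
	for span, v := range b.httpTarget {
		if v == val {
			out = append(out, span)
		}
	}
	return out
}

func main() {
	g := genericBlock{
		spanOf: []int{0, 0, 1, 1},
		keys:   []string{"http.target", "db.statement", "http.target", "db.statement"},
		vals:   []string{"/metrics", "SELECT 1", "/api", "SELECT 2"},
	}
	d := dedicatedBlock{httpTarget: []string{"/metrics", "/api"}}
	fmt.Println(searchGeneric(g, "http.target", "/api")) // [1]
	fmt.Println(searchDedicated(d, "/api"))              // [1]
}
```

Both searches return the same spans, but the generic one touches every attribute of every span, while the dedicated one touches one value per span; with real blocks holding large key/value arrays, that difference dominates query time.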
C: So what we want to do, essentially, is make this dynamic, and what we're now designing is how dynamic that will be and how it is going to work, because in the end we're still going to need to have a schema for writing and reading.
C: So most likely the dynamic part is going to be configured per tenant, based on the different usage patterns of the different users. But yeah, I think as soon as we have more concrete news and a design, we plan on sharing this in the design docs, or design proposals I think they're called in Tempo, the same as we did for Parquet initially, for TraceQL, and for the metrics generator, so we can have an open discussion. Right now I think Adrian and myself are leading the efforts, but yeah, we hope to have this as an open discussion.
C: Yes, that's the current orientation. Okay.
A: Maybe... no, did you mention it and we totally missed it? And I just totally missed it?
C: Oh yeah, yeah, yeah, we intend on publishing a design proposal.
A: We found that they often... sorry, they often have giant tags, like SQL queries or things just shoved into tags, and pulling those out massively reduces the length of that attributes column, and improves the ability to, sorry, improves the ability to get your searches done quickly across all attributes. You can not only pull out the ones you want to query on; you can pull out the ones that just make the column really big.
C: Yes, yeah, that was a pretty interesting find. By reducing the length of the attributes column, even if you don't query by the newly dedicated columns, it is a massive improvement. And yes, so far we have ad-hoc analysis of some simple data blocks, and we're trying to figure out the best way of presenting data about what traces look like, like which are the top attributes...
C: ...so we can better suggest to users which attributes to promote, because, as you say, they don't necessarily need to be the ones you're searching by, but most likely a combination of a few: the ones you're very frequently searching by, and just the biggest ones.
A: Yeah, cool, well, we're excited for that. We think it's going to have a massive performance impact for very large clients; for us, very large, like our own clusters internally, are not hitting the speeds we want for search. We think this is going to be a big, a big move in the right direction, and we think some of the advanced operators, who I assume are people on this call, will definitely be able to make use of it as well.
A: Cool. Other news in performance: we have already merged, and Grafana is currently working on, and I'll do a quick demo of this real fast here, a streaming endpoint. So there's no nice Grafana UI yet, but... can you all even read that? Is that readable?
A: I should have done that before this call, because I don't know how to increase the font size in my terminal. One day I'll learn that... oh, look at this, yeah, okay, plus, maybe... oh my God, I didn't mean... it is Command-plus. Thank you all. Anyways, so the streaming endpoint will connect over gRPC. It's a unidirectional streaming RPC call: you pass your query and you get back a series of updates over the course of time.
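The shape of that interaction, a server streaming a series of progress-plus-partial-results updates for one query, can be sketched in Go with a channel standing in for the gRPC stream. The `SearchUpdate` type and field names here are illustrative, not Tempo's actual proto messages:

```go
package main

import "fmt"

// SearchUpdate mirrors the kind of message the streaming endpoint sends back:
// progress (jobs completed vs. total) plus results accumulated so far.
// Field names are illustrative, not Tempo's actual proto types.
type SearchUpdate struct {
	JobsDone, JobsTotal int
	TraceIDs            []string
}

// streamSearch simulates a server-streaming RPC: each backend job that
// completes produces one update, so the caller can draw a progress bar and
// show intermediate results instead of waiting for one final reply.
func streamSearch(jobs [][]string) <-chan SearchUpdate {
	ch := make(chan SearchUpdate)
	go func() {
		defer close(ch)
		var found []string
		for i, hits := range jobs {
			found = append(found, hits...)
			ch <- SearchUpdate{
				JobsDone:  i + 1,
				JobsTotal: len(jobs),
				TraceIDs:  append([]string(nil), found...), // snapshot
			}
		}
	}()
	return ch
}

func main() {
	// Three backend jobs; the second and third each find a trace.
	jobs := [][]string{{}, {"2f6a3"}, {"91c0d"}}
	for u := range streamSearch(jobs) {
		fmt.Printf("progress %d/%d, results so far: %v\n", u.JobsDone, u.JobsTotal, u.TraceIDs)
	}
}
```

A client consuming the real stream would do the same thing: receive until the stream closes, updating the progress indicator and result list on every message.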
A: Let's do something that might take a little bit longer to serve: this resource.service.name equals, that type of query, and duration greater than five seconds.
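The query being typed appears to be of this shape in TraceQL; the exact service name isn't audible, so the value below is a placeholder:

```
{ resource.service.name = "<service>" && duration > 5s }
```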
A: There it is. Well, it won't find anything, but you can still see, across the course of the streaming, or as it's being streamed back, we can see how many jobs are done and how many there are in total. So we can nicely, hopefully, draw, like, a progress bar, or give some good indication about, you know, how far along the query is, and I think that'll be really important for understanding if you have written yourself a very heavy query that's going to take forever, or you wrote yourself a nice, easy query.
A: So you have an immediate understanding of how long it's going to take. And hopefully this will actually return something over the course of the search... maybe...
A: No... oh, there you go. So you can get intermediate results, kind of, as the search is going along, so Grafana can display them as soon as they're hit, instead of after, you know, 20 seconds or 30 seconds or something like that. So for bigger queries, you're going to, again, immediately have good instincts on how long the thing you wrote is going to take, and then you'll also be getting these temporary...
A: ...or these, like, intermediate results back, so you can be clicking on trace IDs, exploring traces, having an understanding of what's occurring, you know, before the full query completes. And so this isn't necessarily going to improve Tempo performance, but it's definitely going to improve the experience.
A: ...in a very powerful way, I think. So larger queries are more obvious, and you get your results even faster, even if the query took the same amount of time it took before. So, a lot of excitement about that here.
A: I think that's going to be a great improvement to Tempo and how we interact with it. And then the final thing is these updates to the metrics generator. So the metrics generator generates Prometheus metrics, right, but they're these, like, canned, pre-configured metrics, and the thing that we've been working on recently, for the past month or so, is an API endpoint to dynamically slice and dice your spans, to get histograms based on, you know... or, sorry, quantiles of your different span durations.
A: Does that make sense, or am I just totally mangling things? So, anyways, I'm going to query this API metrics endpoint. I'm going to pass it Prom... I'm sorry, a TraceQL query, and then I'm going to tell it what to group by. So I want to group by service name, and I want to take everything; this is an empty query, right? And it's going to give me back, hopefully, right...
A: It's going to say how many spans it counted up, and it's going to give me back all of the different buckets for a histogram, and it did the grouping on service name; mythical-server is the one that it found. I'm using a repo that we use internally for fake data, basically, called intro...
A: But anyways, so with this endpoint I can do different kinds of queries, and I can group by whatever I want. Instead of going for a full, like, complete, parsable PromQL-TraceQL mix, because that would have been a whole lot more work, we wanted to do this first, because we thought it would be a quick move toward dynamic metrics from traces that would generate a ton of value without having to go through the full...
A: ...like, query language, you know, PromQL-TraceQL mashup kind of thing. And Grafana will be building a UI on top of this as well, so you should be able to, like, select your service name, select your namespaces, select your clusters, and immediately get back, kind of, like, your histogram values. So, you know, this is kind of boring because there's only one resource.service.name, so we can do, like, span.http.target maybe, and we can see now we're broken down by target. So we see Albert...
A: These are all names that Ed Simons came up with: aloe vera, manticore, illithid, beholder. We kind of look through... we even see /metrics, maybe something we don't care about, and we see things maybe we do care about. But of course, since I'm writing TraceQL and it's responsive, I can just say span.http.target does not equal /metrics, and I'm now going to select only certain spans, and I get back a different set of buckets, the ones I'm more interested in. So I excluded my /metrics... I had sized my things down again.
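The filter just demonstrated, written out as TraceQL:

```
{ span.http.target != "/metrics" }
```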
A: Let's do that bigger. So I'm writing TraceQL; I am then able to do these, like, group-bys. I can specify any attribute on my spans or my resource, and I can quickly generate my histograms here. So this is another one we are super interested in, or super excited about, in particular, kind of, the Grafana UI integration. Certainly you all could probably write some kind of clever UI on top of this, but we want to...
A: ...you know, provide that as well, in open source, so we're excited about that too. So this should be pretty slick, and that's roughly what I've got to talk about today. So we've got dynamic metrics coming up, and we have two big performance improvements that we're excited about.
A: I think that'll be a game-changing, like, feature for Tempo, and generally for open source tracing. With tracing generally, I've felt, you know, there's a lot of these proprietary, you know, vendor lock-in products that do a lot of nice things with traces, but there's not been a really solid open source...
A: Questions? Thoughts? Concerns? If not, I'm roughly done here.
A: Cool. All right, well, you all have an excellent day, and we'll see you in about a month. GrafanaCON is coming up, so expect some things at GrafanaCON, and we'll definitely have a Tempo session, so try to show up for that. And we may or may not have a community call; we might cancel it, depending on the conflict, we'll see, but we'll communicate that. So everyone take care, and I will see you when I see you.