From YouTube: Grafana Tempo Community Call 2022-12-08
Description
Agenda
- Tempo 2.0
- Projected release date
- Resource requirements
- TraceQL roadmap
A: This is the Tempo Community Call, December 2022 edition, the last of the year and the best of the year. We're going to talk about 2.0, of course, as we have for the past four or five months. And maybe share, we talked last time about sharing kind of what we have internally for the phases of TraceQL.
A: So we thought we'd talk about that. And I put "other stuff?" on the agenda. Hey Nanya, so good to see you, man. I put "other stuff?" in case anybody else has thoughts, questions, whatever, because I do think, if we just do our content, it won't be super long. You're welcome to ask whatever, Alton, yeah.
A: We're going to get back from the break. There have been a lot of conferences and then, of course, vacation at the end of the year, which has strung things out a bit, and we want to make sure that we deliver a good Tempo 2.0. If I'm being honest, I think we may have bit off a bit too much. Maybe we should have done Parquet as a 1.6 only, and then TraceQL as a 2.0. Putting both of those features in at the same time has really stretched the team. It's really two huge changes at once, and I think the cost of that has been a little more time than we expected for this release. So end of January is the goal: we're going to get back from our break, be feature complete, and just be hammering on it for a month to improve config and to improve the experience.
A: In terms of Grafana, the UI, as well as, of course, the Tempo backend. So that's where we are on 2.0, and hopefully that's all right.

I also want to talk about resources a little bit, and I want to get the feeling from this internal group, because I kind of consider all of you the inner circle of the community, the people who are most plugged in, who are talking to us the most about needs. I want to get your feeling for this, because I feel like 2.0, when it comes out, is going to require, in the default config, roughly 2x the resources of 1.5, the reason being, of course, the Parquet change. So this will be like a total cluster TCO increase of two times: you're going to want to scale up your ingesters and compactors by about double as soon as you install it. How does that sound to you as a member of the community?

Internally, this is not a big deal; we're well below margins, and it's still a very cheap database to run on our side. But I am concerned about, I guess, community reaction when we say something like "two times the resources" in the blog post for 2.0. I would appreciate it if anybody has some feedback on how that hits. You're running this internally: how do you feel about a 2x resource increase?
D: Go ahead? Yeah, so, I think, at least for us, like you mentioned, it performs super fast anyway, and we are using other tools that use like four or five times the resources for observability. So doubling this is still fine, okay, at least for us, yeah.
A
That's
good
to
hear
if
anybody
else
has
some
feedback
we'd
like
to
hear
it
either
way
negative
or
positive.
Of
course.
To
me,
that's
also
not
a
concern.
Tempos
stands
is
pretty
slim
to
some
of
the
other
tools.
Okay,
we
feel
the
same
way
compared
to
our
other
back
ends.
Tempo
is
the
cheapest
to
run
it
Remains
the
cheapest
to
run
with
this
2x
increase.
A: So if others are having that same experience, that's great to hear. I would have been concerned about that announcement externally. Like I said, internally the right choice is just to, you know, spend the money, move forward, and add the features, but I really want the community to feel like we're still delivering a high-performance database, so hearing that is positive. I will say: the v2 block format will remain supported as a trace-by-ID lookup option for the foreseeable future.
A: We do think we can kind of continue whittling this down; there are definitely some things we can improve. But at release I do expect a 2x TCO increase, and I've been kind of nervous about announcing that. So it's great to hear some positive feedback: despite that sounding very bad to me in my head, you all have experiences similar to ours internally, where it's cheap to run, it's cheaper than other options, and doubling it is not going to be super concerning, I suppose.
B: Hey, actually, if I could take a quick poll: is anyone here using Parquet already for the backend? Yeah, this 2x is really more about the default configuration, with just swapping it in. So if you're already using Parquet on the backend, then the increase won't be quite 2x; it'll be less than that. I mean, maybe just as a reference, it's like 50%, probably, yeah.
A: Cool, all right. I do have a TraceQL roadmap to share. Marty, do you want to do that? I did jot a bunch of notes down that I was going to say, so I can do it; I'm basically throwing you on the spot here after I kind of prepared.
A: Right, so we've broken up the TraceQL phases internally. I'm just going to dump all this stuff in here and then we'll talk through it. We've broken up, oh well, of course this is going to format horribly. We've broken up the phases internally, and we'll just share that with you, basically raw as it is. How can I do this where it doesn't... I don't want a bullet point. Somebody who's way better with Google Docs, fix that for me, please. So, TraceQL at release: this is what is in Tempo.
A
Now
we
use
it.
It's
set
up
in
our
internal
Ops
cluster
right
now,
and
so
this
will
be
available
at
2.0
release.
Type
awareness
was
one
of
the
big.
Of
course.
You
know
goals
of
this
new
back
end,
so
you
can
look
for
ranges
on
status
code,
I,
love.
This
feature
right,
Boolean
logic
and
I
have
a
simple
and
here
so
status
code,
greater
than
200
less
than
equal
to
300.
Let's
look
for
a
range
of
status
codes.
You
know.
A: We all know that HTTP status code ranges have meaning, and that will all be available, including more advanced things: you can put those in parentheses and "or" that with another condition. Basically, the normal kind of Boolean logic you're used to is going to be 100% available here in these conditions.
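As a rough sketch of the range and Boolean conditions being described, in TraceQL syntax; the attribute name `http.status_code` is the OpenTelemetry semantic-convention spelling and may differ from what's in your data:

```traceql
{ span.http.status_code >= 200 && span.http.status_code < 300 }

{ (span.http.status_code >= 500 && span.http.status_code < 600) || status = error }
```

The first selects spans with a 2xx status; the second combines a parenthesized range with another condition via `||`.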
A: So if you know, like, resource.environment, if you know it's on your resource instead of on your span attributes, this will speed up your queries immensely, and you can specify that in TraceQL; you can't in the current search. And then we have two basic aggregations available, so, like: I want to look for all traces where the count of a certain kind of span is greater than 10, or the average of a field.
A: I put duration, but you can average any field. So in this case, I'm looking for, let's say, slower database queries: show me all the traces where the average duration of the database spans is greater than 100 milliseconds across all the matching spans. These are the kinds of things that will be available at release, and we'll have some documentation and blog posts and stuff that detail all this, to give people some help. Yeah. Can you do that, Marty?
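A sketch of the scoped lookup and the two aggregations just described; `resource.environment` comes straight from the discussion, while `span.db.system` is an illustrative attribute, not something named on the call:

```traceql
{ resource.environment = "production" }

{ span.db.system = "postgres" } | count() > 10

{ span.db.system = "postgres" } | avg(duration) > 100ms
```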
A: So you can do that too. I do think, and we were talking about how we're getting back in January and working hard to improve this, one of the things we really need to hammer on is the error returned when you type an invalid query, because it's pretty rough. It's the normal kind of "unexpected token at position" message, and that's not a great experience, right? So I think that's going to be one of the things we try to improve: when you do type a bad query, the error message will hopefully help you out a little bit better.
Like I said, at release, the old search will be available as well as TraceQL, for users who still want the old search, and we will have a way where we kind of phase this all forward into TraceQL. I've been pushing the Grafana team to keep that as a query builder, even after we deprecate the old search and only use TraceQL.
A: So if some of this seems intimidating to you or your users, everything will remain the same. They'll be able to do their current stuff exactly like they do it, and maybe they can start exploring this more advanced language as they want to.
A: Okay, so phase two. We're going to add some more aggregations, at least min and max, maybe some more, I'm not sure. And then grouping is a really cool feature in phase two, where you can specify some kind of conditions and then you can group, or I think we put "by", actually: "by" and then, let's say, namespace. So I want to group by namespace, or environment, or application name, or whatever you want to do.
A
You
can
Group
by
some
kind
of
thing,
and
then
you
can
like
Marty,
showing
maybe
look
for
account,
for
instance.
So
you
want
to
see
a
condition.
You
want
to
group
it
by
a
certain
value
and
then
only
find
ones
where
you
know
it's
over
10
in
a
specific
namespace,
perhaps
or
any
namespace.
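A rough sketch of what that phase-two grouping could look like; `by()` had not shipped at the time of this call, so treat the syntax as provisional, and the namespace attribute name is illustrative:

```traceql
{ status = error } | by(resource.namespace) | count() > 10
```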
This one I kind of broke out to phase three. Pipeline comparisons are kind of interesting; this is where we kind of went off the rails designing this language, but I love it.
A: You could say, for instance, http.status equals 200, piped to count, and that, this works in the language, is greater than http.status equals, let's say, just 300, piped to count. I'm making up a fake example here, right? So: traces where one status is more common than a second status. Thank you for adding the code; that's the actual semantic convention name there.
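Spelling out that fake example as a pipeline comparison, using `http.status_code` as the semantic-convention name mentioned; this is phase-three material, so the form is provisional:

```traceql
{ span.http.status_code = 200 } | count() > { span.http.status_code = 300 } | count()
```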
So pipeline comparisons are in there, and then, finally, I think, the structural operators, which is looking for parents, descendants, or siblings of spans.
A: So we have this greater-than sign, which is just parent: I want a set of spans where there's a parent relationship, or a set of spans where there's a descendant relationship, or a set of spans with a sibling relationship.
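A sketch of the structural operators as described: `>` for the direct parent/child relationship mentioned, and, as assumptions on my part for the other two, `>>` for descendants and `~` for siblings:

```traceql
{ resource.service.name = "frontend" } > { span.db.system = "postgres" }

{ resource.service.name = "frontend" } >> { status = error }

{ name = "authenticate" } ~ { name = "authorize" }
```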
That's kind of where we're aiming for. And then the undefined phase, the future phase, of metrics; I think perhaps this will be post-2.0.
A
This
makes
very
much
sense
to
me
to
continue
forward
with
this
phase
two,
but
I
do
wonder
if
we
should
talk
to
the
community.
Some
you
all
about
these
next
three
steps
and
what
we
kind
of
value
as
a
community
from
this
language,
because
you
know
it's
quite
a
bit
of
cost
to
implement
these.
A
Do
we
want
metrics
more
than
pipeline
comparisons?
Pipeline
comparisons
is
pretty
Advanced
I,
don't
think
a
lot
of
people
are
going
to
want
use
that,
whereas
metrics
might
immediately
be
useful
or
the
structural
operators
I
think
maybe
more
advantageous
than
the
pipeline
comparisons,
and
so
these
kind
of
like
input
from
the
community
after
2.0,
we
want
no
input
before
2.0.
A: We just want to cut this release. So after 2.0, we'll kind of settle some, and we're going to get focused on this phase-three thing. Let's bring this back to you all: maybe, if you have some thoughts, you could talk to your own users and get some thoughts together about what you all would value next out of this language, and we can use that input to focus our efforts going forward.
C: Yeah, you mentioned metrics. Can you give some examples of what that might look like? I'm trying to see what that phase might bring.
A: Okay, so this is what I want and nobody else wants. Or, I wouldn't say that; I have some people who want this style. The basic idea would be to, sorry, use the same language, basically, to select a set of spans, which is what this is all doing, we're selecting span sets, and then pass that set of spans into some function, like rate, or count over time, or quantile, you know, p99 of a field over time, something like that.
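Mock-ups of the two candidate styles under debate; neither was final at the time of this call, so both lines are illustrative of shape, not shipped syntax:

```traceql
{ span.http.status_code >= 500 } | rate()

rate({ span.http.status_code >= 500 })
```

The first keeps chaining through pipes; the second wraps the span selection in a PromQL/LogQL-style function.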
A: The current debate is pipes versus Prometheus-style functions. This is what I like, because I really like sticking with the pipes, but the big argument against this is that we have a whole lot of different things that use this Prometheus style: Loki's LogQL does, and of course PromQL does. So I would prefer sticking with the pipes, but in terms of adopting users and giving them something they're already familiar with, most people want this more functional style here at the bottom.
A: So, basically, right now, when you select a span set, if that span set returns, you're kind of selecting that trace. When you're doing metrics, that span set kind of turns into a stream of spans across all traces, and then you are generating, like, a rate or whatever: a time series, just like you would get out of Prometheus, or a time series like you generate out of Loki, when you add the metrics functions.
C: Does this require the metrics generator to get this data?
A: No, this would be calculated live off of your traces. Basically, we intend to keep the metrics generator, because, especially at very high volume, Prometheus, or whatever your metrics backend that accepts Prometheus remote write is, is always going to be more efficient at storing and retrieving metrics than Tempo, right?
A: So for those who want to continue to use the metrics generator for these canned metrics, we will keep that feature. And then we have also discussed moving the metrics generator into kind of the role of a ruler, where you could write your own queries that are executed.
You know, as rules, basically, like Prometheus: you can write Prometheus rules, and they're executed once every 15 seconds, and that single data point is stored, so you can more efficiently query it. So that's a future thought for the generator, a way for it to go, but we're not currently working on that. We would need the metrics language first, basically.
A
All
right,
I
have
other
stuff
question
mark.
So
that's
a
pretty
broad
topic.
A
I,
don't
have
anything
in
particular
for
other
stuff,
but
if
anyone
here
does
feel
free
to
ask
it's
kind
of
an
open
Forum
at
the
moment,
you
could
type
it
in
the
doc
or
chat
or
you
could
put
it
wherever
keeping
us
close
to
prompt
q.
Log
ql
is
easy
for
me
to
drive
adoption
yeah,
that's
what
everybody
says
and
it's
I've
gotten
that
feedback.
So
much
I'm
gonna
give
up
I.
A
I
gave
up
a
long
ago,
party,
I,
love
the
pipe
style
and
I
really
want
to
keep
it,
because
once
you're
kind
of
like
building
this
chain
of
pipes,
I
want
to
kind
of
I
feel
like
you're
in
this
mode,
where
you
just
keep
typing
on
the
end,
but
the
argument
for
keeping
that
functional
style
or
switching
to
that
functional
style
for
metrics.
A
And
then
you
know,
if
you
already
know
log
ql
or
you
already
know,
prompt
ql
suddenly
like
a
lot
of
this
knowledge
transfers
over
and
what
Jared
is
saying
is
what
everyone
has
told
me,
which
is
hey,
we're
already
using
Loki
we're
already
using
prompt.
We
really
just
want
this
thing
to
work.
Similarly,
so
I've
been
told
this
enough,
that
I
think
the
right
choice
is
going
to
be
a
Prometheus
Style
foreign.
B: Well, I did have one thing to mention, to think about, like another discussion topic: the structural operators and things like that. The sibling, grandparent, or grandchild kind of stuff really requires new columns in the Parquet format, so we're kind of going to align that work with the Parquet 2 format in the future. And there are a lot of other things we'd like to do with the new format as well; like, this one is great, but we already know there are more things we can do.
B: That would be things like custom columns, right, like the ability to more dynamically control the columns that are created. Right now it's this fixed schema that is kind of a balancing act between different things we thought would be useful. So that would be a really cool thing to have in Parquet 2, and while we're there, we would add the columns for these structural things.
A: Yeah, Adrian is currently specking that out from our side, and that's been a request, I believe, even from you all; we've already had that request internally. Right now we have those blessed columns, and we either want to do fully dynamic columns, which I would call unlikely, or the kind of compromise: in your settings, you choose the columns that are broken out from the main attributes column, which will allow you to do two important things.
A
One
is
the
columns
you
query
on
all
the
time
you
can
make
their
own
columns,
which
will
massively
increase
your
efficiency
of
querying
and
the
other
thing
is.
You
can
pull
columns
out
of
that
main
attribute
column
that
are
just
way
too
big
like
enormous
SQL,
queries
or
whatever
something
in
your
org.
That
is
just
cluttering
that
attributes
column
and
is
making
it
harder
to
query.
This
will
be
kind
of
like
an
advanced
user
kind
of
case,
but
we'll
definitely
include
some
details
and
that's
the
parquet2.
A: Yeah, that's right. Cool. It'll be out soon after 2.0; no promises about it being 2.1. Well, we should get back to a regular cadence. Like I said, we just bit off more than we should have. There was no scope creep; the initial scope was just enormous, and we didn't quite realize that until we were into the mix. Parquet and TraceQL at the same time has been a huge amount of work. The team has really put in a ton of effort, and I think we've achieved something pretty cool by getting this together.
A: The experience is rough, and that's what we're going to really focus that last month on: cleaning that experience up. But if you use tip-of-main Grafana and Tempo, and then you set traceqlEditor, I think that's the Grafana feature flag, and then you have to do, what is it, Marty? Oh, it's the same config as it is; the block type is vParquet. Okay, so you can fool around with it.
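For reference, enabling this as described would look roughly like the following; the exact keys (`traceqlEditor` in Grafana's feature toggles, `vParquet` as the Tempo block version) are my best reading of what was said and should be checked against the release docs. Grafana side (grafana.ini):

```ini
[feature_toggles]
enable = traceqlEditor
```

Tempo side:

```yaml
storage:
  trace:
    block:
      version: vParquet
```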
A: You can try it now if you want, but like I said, the experience is rough, and our goal is to get that smoothed out and ready to go for 2.0.
A: We will be ready to go in the new year with 2.0 and all the new features. I appreciate all of your involvement and patience on this one. Thank you.