From YouTube: Grafana Tempo Community Call 2022-10-13
Description
Join our next Tempo community call: https://docs.google.com/document/d/1yGsI6ywU-PxZBjmq3p3vAXr9g5yBXSDk4NU8LGo8qeY/edit
What was discussed:
- Discussion of progress on some difficult parquet issues.
- TraceQL Demo! (kind of)
- AMA!
- Welcome new team member
A: We probably have a demo of TraceQL for you, but Conrad is currently juggling things a bit. I think we sprang it on him a little late this morning, so he's going to do his best. If it's a little janky or he doesn't quite get it together, we're all going to forgive him, and maybe we can post a video in the channel later today or tomorrow so you can all see it. But we do have some basic interaction with Grafana: a query in a text box. You can type in the query, hit enter, and get your traces back. So hopefully we can get something together on the call; if not, I promise we'll have something available fairly soon, and you can see a little bit of the language itself in action in Grafana. But on to the actual content of this meeting.
A: First of all, we have a new team member who you might never see: Adrian Stower. He's really sharp, and he lives in Australia, so I've only seen him twice now. He's a good guy, but he's a little hard to coordinate with, because he's on the other side of the planet from a lot of us.

A: You should see some PRs from him in the repo, and I think he's going to be very impactful for the project. I actually linked one there; he's already started to get some work in. I really wish he were here to introduce himself. We normally try to do that with new members, but sadly this time we cannot. So: welcome, Adrian.
A: In other work news, we continue to move forward with Parquet. This past month has been something of a struggle with the library we're using. It's a very high-performance library: they use a lot of unsafe pointers, a lot of buffer pooling, and they drop to assembly quite a bit. It is a library where they cut no corners, or I guess sacrifice everything for optimization.

A: A lot of memory improvements in the past week or so have been important, and we've also hit some garbage collection panics I've never even seen in a Go program. Basically the garbage collector panics because there's a pointer it has no clue about: "this is an invalid pointer in the heap, what is this?" That's due to some of the unsafe pointer handling. So we see some light at the end of the tunnel. It's been kind of a tough month because of that, because we've been focused on this instead of pushing the project forward like we'd like, but stability of course is important, and we'd rather deliver the next version of Tempo stable and ready to go than throw something out there.
A: Something that's not quite ready, that is. Yeah, I would love to share that. So the library in question: I'll put the link in the doc where we can all see it.

A: The link... oops, I'm apparently pasting it eight times, way down at the bottom there. There it is, right there. We were actually looking at Parquet before this library was public, and there wasn't really a library that was performant enough for what we wanted. This one is significantly well optimized. Like I said, there are some small issues, and we are also really pushing the bounds of this library.
A: I would say more so than even Segment themselves are doing with it, so we are providing a lot of feedback. There are actually a number of different teams that have now become invested in this, including the folks who do FrostDB, and one of the profiling groups, Pyroscope or the other one (there are two with similar names), but anyway, there's a lot of interest in this repo.

A: It's getting a lot of traction, and all of its different users are really pushing it to its limits, and I think we're all working to improve it. So it's a very cool library. It enables us to do what we want in Tempo, but we have had some issues with it as well. I do hope to see some improvements, or rather I hope we can move past these soon. The owners of the repo are extremely responsive and open to improvements.
A: They have a monthly community call themselves, and they work hard to resolve these issues. So it is well supported, and we have confidence it's going to do what we need it to do, but it has been something of a challenge and something of a frustration. I'm proud to say we're close to the end of it, I hope. I'm not going to talk about the TraceQL demo yet; I'm going to slyly move it to the end of the list, and note there's an AMA in there as well.
A: So get a bunch of very long-winded questions together for you all; we have a little bit of time set aside for kicking things around. Also on the list: ObservabilityCON is coming up next month, about three or four weeks from now; links are there. I guess I could just click it and see what the dates are, but I will be at ObservabilityCON personally; it's in New York.
A: If anyone else is attending, find me. I'll be at a booth somewhere, I'm sure, talking about Tempo, so come find me and say hello. I'll also be at KubeCon at the end of this month, so if you're there, come find me; I'll be at a booth somewhere. We'll say hey; I'd love to hear what you're doing with Tempo and just talk shop a little bit. But we will have our session at ObservabilityCON.

A: We will, of course, demo a lot of the things we're talking about, as well as give some better dates and times for this next 2.0. We really hope to have a clear vision of that in the next three or four weeks. So get excited about that. If you can attend virtually, please do; if you can attend in person, I'd love to see you, and like I said, I'll be at KubeCon too.
C: Hey, yeah, I was wondering.
C: So we've had some changes internally; I don't know if Tanner has chatted with you. We now run a modified version of the distributors that reads directly from Pub/Sub Lite. On our end, we've had a little bit of a challenge with the version of the Collector that was being used by the distributors.
C: I don't know if that's still on your radar. It was quite old, and our internal collectors are pretty close to what's on the open source project. Of course they release every two weeks, so there's a limit to how close, but we try to stay pretty close. Yeah, I don't know if...
D: Yeah, we updated. I was checking what the version was; it was like 0.57 or 0.58. Essentially, we bumped up the collector version, which was very old. We were also using an internal fork that we have now dropped. We had forked it because we were relying on some metrics that were made internal a while ago, but we think it's a decent compromise to just not use the fork anymore.
D: We updated to that version because there is a breaking change in the trace protobuf format: the names of the structs were renamed from "instrumentation library" to "scope". So that was the newest version that still supports both formats, the old one and the new one, and we didn't want to bump it too fast. So that's the situation. We also updated the OpenTelemetry proto.
D: Yes, it will be able to ingest both, although it doesn't have backwards compatibility when you query: the output will be in the new format.
A: Let's get up to date completely, because then we'll have at least one release where we tell people about the problem if they're using HTTP push, and we can also give them a heads-up about the change on the pull side, the query side. Okay, I think that should be fine. Is that all right with you, Gabe? Is that new enough, that 0.57.2?
C: Yeah, I think that's okay. The amount of changes we'd have to make to our code is minimal on those versions. So, okay.
C: We've tried upstreaming to the Collector repo our Pub/Sub and Pub/Sub Lite receivers and exporters, and it hasn't been extremely well received. So, I don't know; I think there's a lot of complexity around Pub/Sub Lite.
A: Next critical question: where's the turkey hat? It is likely in the basement, in my children's box of costumes; that's the most likely location for it. So: turkey hat, costume box, downstairs.
C: I had a second question, not turkey-hat related. I saw in the OTel contrib that the service graph processor is there as a component, and I was wondering: as you add features in Tempo, are you going to keep adding those features to the OTel contrib as well?
D: I think the intention is that we benefit both: anything that we do in Tempo we contribute back to OTel, and vice versa, also because the integration right now with the UI is limited to Grafana. So yeah, I think it makes sense to keep both versions very close, if not identical. Another goal of this change was to vendor contrib back into the Grafana Agent, as we use the OpenTelemetry components in the agent; there is some rework and a bit of noise happening in that area.
A: Cool, yeah. Like Mario said, Grafana supports these metrics, so we thought it was cool that the collector could generate them. So if you're using Jaeger or some other open source thing, or even a vendor, you could generate these same metrics if you wanted, and you could do Grafana things with them. I thought it was cool we got that back upstream too. Cool.
E: I have a question about Parquet. My knowledge of the whole format is very limited, but as I see it, there are some columns the Tempo team decided to add so that searching will be faster; I think it's service name and some other attributes.
E: Will it be possible, then, for regular people to also add columns there? Like, if you feel, "I normally do a lot of searches using this specific attribute, so I would like to improve the performance there."
A: So at release, probably not. We did pick some blessed columns. Service name itself is required by the OTel spec, so that one's pretty safe, but we also included things like cluster, namespace, and pod, and then some very similar ones from the semantic conventions, like k8s.cluster or k8s.pod, something like that. We tried to include the ones we thought would be most likely decorated, which would then speed up search when you query on them. We have already talked internally about the change you're suggesting.
A: It would not be in this very first version, but that's a great feature request. I'm going to bring up at our internal weekly chat that we got some external feedback on it, because I want that feature as well. I want it per tenant, because on our side we have customers of various sizes, and sometimes we have very large customers who don't push those special columns and don't query on those special columns.
A: So if we notice that they're querying on particular columns, we can separate those out and speed up their queries. The other advantage to this feature you suggested is that sometimes people have very large attributes, like a SQL query. These columns end up huge, because SQL queries are sometimes very long and they're just thrown into a tag on the trace.
A: If you can pull those out into their own column, maybe you're not searching on them, but now you've reduced the size of the main column that you are searching regularly. It's another tool to let operators improve search speed on their setup. So I do think that's on the list somewhere, but it won't be in the 2.0 release.
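The size argument above can be made concrete with a rough back-of-the-envelope sketch. This is only an illustration of the idea, not Tempo's actual storage layout; the attribute sizes and span count are made-up assumptions:

```python
# Rough illustration: moving one large attribute (e.g. a SQL statement)
# out of a generic key/value column into its own dedicated column.
# A column store only reads the columns a query touches, so shrinking
# the generic column shrinks every scan that touches it.

spans = 1_000_000
avg_small_attr_bytes = 200      # typical key/value payload per span (assumed)
avg_sql_stmt_bytes = 2_000      # a long SQL statement per span (assumed)

# Layout 1: everything lives in one generic attributes column.
generic_only = spans * (avg_small_attr_bytes + avg_sql_stmt_bytes)

# Layout 2: the SQL statement gets a dedicated column of its own.
generic_split = spans * avg_small_attr_bytes

# A search over "normal" attributes scans the whole generic column.
scan_before = generic_only
scan_after = generic_split

print(f"bytes scanned before: {scan_before:,}")
print(f"bytes scanned after:  {scan_after:,}")
print(f"reduction: {1 - scan_after / scan_before:.0%}")
```

Even when the dedicated column is never queried, every search that touches the generic attributes column gets cheaper, which is the operator-facing win being described.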
A: That's a great feature request; we have discussed it already, and I think it's definitely going to happen, because I think we'll need it too. It's just not going to be immediate.
B: I can talk about some of the TraceQL stuff. I don't have a full end-to-end demo going, unfortunately, but I can show some screenshots from the Grafana UI and some API responses.
A: Okay, show us what you can, and then, I guess, we have a meeting on Monday, me and Conrad and some of the developers, to look at the state of things and what we want to get together for ObservabilityCON. Maybe coming out of that I'll be able to put together some basic demo and we can do a video for people. I do want to share this, because it's somewhat working, but maybe it's just too hard to cobble together in the half hour we gave Conrad.
B: Yeah, wait, where's Tempo?
B: Okay, let's see. So we have TraceQL, and the way you query Tempo changed a little bit. Up until now you could query using tags, and we'd respond with a list of traces that match. The response structure was the trace search metadata. We're expanding on this: we'll also store the span set that matched the original query and pass it onto the response object.
B: So I just want to show how the definition changes: you have the original fields here, and we'll add a span set, which is just an object containing a list of spans, and then for every span we'll also have the attributes and stuff like that.
B: So let me see; I'm struggling with my windows. Okay, so I'm running Tempo locally right now. Unfortunately I couldn't get a full demo going from Grafana querying Tempo and showing it in Grafana; it should work, I'm just trying to get it running right now. But you can query Tempo directly using the API search endpoint. Just like the current search implementation, you can pass start and end time and limits.
B: You can also pass the full TraceQL query using the q parameter, so you just pass it to Tempo. In this case I'm filtering on all traces that contain a span set, and in that span set there should be a span with service name equal to tempo-query, and also the cluster attribute should be equal to this thing here.
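A request like the one being described can be sketched as follows. The port, the time range, and the exact TraceQL syntax here are assumptions for illustration; the endpoint path and the `q`, `start`, `end`, and `limit` parameters are as described above:

```python
# Sketch of calling Tempo's search endpoint with a TraceQL query in the
# `q` parameter. Host, values, and attribute names are illustrative.
from urllib.parse import urlencode

base = "http://localhost:3200/api/search"

params = {
    # TraceQL: traces containing a span whose service name is
    # "tempo-query" and whose "cluster" attribute matches a value.
    "q": '{ .service.name = "tempo-query" && .cluster = "k3d-tempo" }',
    "start": 1665600000,   # unix seconds (assumed range)
    "end": 1665603600,
    "limit": 20,
}

url = f"{base}?{urlencode(params)}"
print(url)
```

Fetching that URL (with `curl` or any HTTP client) would return the JSON response structure described next.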
B: The k3d one, there. Then if you query this, you'll get a response back, which is an object with traces: just a list of all the traces that matched your query.
B: This is basically the object I showed: you have the trace ID and the root trace name, so the same information you can see in the current search interface, and the new field is the span set. The span set will say how many spans matched your query; so, for instance, for this trace, five spans matched this TraceQL query, and we also currently return three spans that are part of the span set.
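A minimal sketch of walking a response in that shape. The field names used here (`traces`, `spanSet`, `matched`, `spans`, and the attribute encoding) are assumptions based on what was shown on screen, not a stable API contract, and the IDs are made up:

```python
import json

# An abbreviated, made-up response in the shape described above.
payload = json.loads("""
{
  "traces": [
    {
      "traceID": "2f3e0cee77ae5dc9c17ade3689eb2e54",
      "rootTraceName": "HTTP GET /api/search",
      "spanSet": {
        "matched": 5,
        "spans": [
          {"spanID": "563d623c76514f8e",
           "attributes": [{"key": "service.name",
                           "value": {"stringValue": "tempo-query"}}]},
          {"spanID": "6a7c2d1b4e0f9a33", "attributes": []},
          {"spanID": "90ab12cd34ef56aa", "attributes": []}
        ]
      }
    }
  ]
}
""")

for trace in payload["traces"]:
    span_set = trace["spanSet"]
    # "matched" counts all matching spans in the trace; "spans" holds
    # only the subset of matching spans returned in the response.
    print(trace["traceID"], span_set["matched"], len(span_set["spans"]))
```

Note the distinction the speaker draws: `matched` can be larger than the number of spans actually included in the span set.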
B: The service name is part of it. Normally the service name should also be within this attributes list, but we discovered a bug, which I introduced today, I guess. So there should be multiple attributes here. And if you modify your query, if you add more filters, for instance, you're also going to say: I'm looking at the attribute http.path; this is an attribute set on my spans somewhere.
B: You can also add an extra filter on that; for instance, it has to start with "/api" and then something else as a wildcard.
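That kind of filter can be tried out locally. The sketch below applies the same "starts with /api, then anything" pattern to some sample `http.path` values; the attribute name comes from the discussion above, while the `=~` regex operator and the example query are assumptions about TraceQL's syntax:

```python
import re

# A TraceQL-style span-set expression with a regex condition (illustrative):
query = '{ .service.name = "tempo-query" && .http.path =~ "/api/.*" }'

# Applying just the regex part to some sample attribute values:
pattern = re.compile(r"/api/.*")
paths = ["/api/search", "/api/traces/123", "/metrics", "/status"]

matches = [p for p in paths if pattern.fullmatch(p)]
print(matches)  # only the two /api/ paths survive the filter
```

The same idea extends to any attribute: a prefix plus wildcard selects spans whose value you only partially know.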
B: I need to correct that, and then it will filter it for you. Normally you'll get multiple attributes here, but that's not working right now, so we'll try to record a better demo.
B: I can also show a screenshot of the UI over here. I was trying to get this working, but it didn't want to start up on my machine. What we're doing in the Grafana UI: currently we have a separate TraceQL tab, but we're planning to merge it with the search tab. It's still in progress; we're kind of researching what the best UX will be here. Basically, you have just one big text box in which you can put your query.
B: As you type the query there's autocomplete: we will fetch the tags from Tempo and be able to propose stuff, like, hey, you're typing "s", so it will propose service name, or it will propose different tags present on your spans, and it will also help with making sure the query is valid. It's kind of similar to the Prometheus autocomplete.
B: We aren't as strong yet as Prometheus's autocomplete, because there's a lot more work in there, but maybe we want to get to that point. The response right now is a table in which you'll see different columns, the trace ID and the span ID. Every block here is one trace that matched, and the three rows you can see under it are spans that match the original query. This is part of the span set; in the JSON response I showed, there were also three spans.
B: They were part of the span set; those are these three spans here. And then we're just showing information in the table. For instance, this query queried the service name and the HTTP method, so we will fetch these attributes out of the backend, populate them on the spans, and hand them all the way to the front end. In this case it's just "service name must equal any wildcard", so any value is valid, and you can see there will be multiple values here. So it can be a quick way, when you're searching for a specific span where an attribute starts with something but you don't know the exact value, or you want to see all the different values in your backend: you can just use the regex with a wildcard, and it will show the actual values in here. Besides that, these columns will be dynamic.
B: Based upon your query, the whole query path is optimized, in that if you query your spans for, for instance, HTTP method and service name, those attributes will be passed all the way down to the Parquet level, and we will only search through the columns corresponding to those attributes. So we don't search everything, only those columns, so it's going to be much faster than the current search.
B: Yeah, so on making learning easier: the way TraceQL is designed is, I guess, mostly down to the fact that the domain we're working with, traces, is different from metric series and logs, so TraceQL is also different in concept because of that. An important concept in TraceQL is these span sets, which are these curly braces: everything within them is a condition that defines a span set, which is a totally different concept than what you have when you're dealing with time series like Prometheus and Loki.
B: So while we're trying to design TraceQL in a similar way to PromQL and LogQL, so it has the same kind of syntax with an operator and the value within quote marks, it just won't be possible to build a language like PromQL that works on traces; traces are just too different for that.
A: I've personally never liked the comma in PromQL, because it's ambiguous what you're actually doing. I do agree with what you're saying here: any attempt to make the learning curve easier would be nice, but we wanted to support more complex expressions, like OR, which PromQL can't do. You could ask for service name equals A or service name equals B, perhaps combined with a different condition.
A: So we wanted a more expressive language than PromQL, because the objects you're selecting are far more complicated than a stream of metrics. A stream of metrics is a set of labels with data associated with it, whereas a trace is an extremely complex object, so we wanted a language that could express that.
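The OR example above can be sketched as a predicate over spans. This is a toy illustration of the boolean semantics being described, not TraceQL's implementation; the span fields, sample data, and example query syntax are made up:

```python
# Toy spans; in TraceQL the selection might read something like
#   { (.service.name = "a" || .service.name = "b") && duration > 100ms }
# (illustrative syntax). PromQL label matchers inside {} can only be
# ANDed together, so an OR across values needs regex workarounds.
spans = [
    {"service": "a", "duration_ms": 250},
    {"service": "b", "duration_ms": 50},
    {"service": "c", "duration_ms": 300},
    {"service": "b", "duration_ms": 120},
]

def matches(span):
    # service name equals "a" OR "b", ANDed with a duration condition
    return span["service"] in ("a", "b") and span["duration_ms"] > 100

selected = [s for s in spans if matches(s)]
print(selected)
```

Arbitrary nesting of AND/OR like this is the extra expressiveness the speaker is pointing at.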
A: But I do appreciate the feedback, and it does make me wonder if there's some middle ground. I would say that's not an immediate goal, but I agree: PromQL is great, it helps onboard folks, and the languages work similarly for the relatively simple use cases.
A: We're going to do what we had planned first. Then, if we start getting a lot of feedback, if a lot of people want a comma to mean AND, and they want both, and we start hearing that, then we can devote some resources to expanding the parser and getting that to work. That has been a concern from the beginning: how similar to PromQL do we want to be? How much do we need to do our own thing? Commas basically meaning AND, right. And both PromQL and LogQL have very similar data models, because you're selecting streams of data: in one case it's a bunch of floats, and in the other case it's a bunch of strings, but you are selecting something very similar, which is why it works so cleanly between LogQL and PromQL. Whereas in traces there's no real stream of data, or at least we didn't design Tempo that way.
B: At the same time, we're also looking at designing the UI in such a way as to make it as easy as possible to create TraceQL queries: make simple queries as simple as possible, but also allow people to make more advanced queries if they want to.
A: Oh, to your point there, Lucas: the goal in this very first iteration is to keep the current UI exactly as it is, so users who don't want to learn a new query language can continue to use the UI as it is. They'll select some labels, they'll pick a service name.
A: Maybe they'll add some duration and such, and it will write a query for you in the text box above, and you can choose to just hit run and take it as it is, or you can start learning the TraceQL language through that. So we will not get rid of the current UI; all of the existing search ability will be there. We see power users and people who understand their trace structure moving to TraceQL more quickly than the average user.
A: The user who just wants "show me the service, show me something less than a second or greater than a second", that kind of user can continue to use the UI; it'll still be there. And then the user who is more interested in tightly creating a very precise set of conditions that returns a very nuanced, well-filtered set of traces, that ability will also be there.
A: Cool. I appreciate Conrad doing his best here and cobbling that together; just seeing the curl requests was pretty cool. Definitely keep an eye out at ObservabilityCON, and hopefully, since I have to get a demo together in the next couple of weeks for ObservabilityCON, as soon as I have it I'll give you all a sneak peek in the Slack channel. I'll drop the video in there, and you can see some of what we've done already. Thanks.
B: We should be close to having a PR in Tempo, and there are also a couple of PRs already in the Grafana repository, so it's all coming together.
A: All right, I'll give everyone a few seconds left here for questions; feel free to chuck it in chat, or if you want to unmute and ask, whatever's fine. I do appreciate everyone coming; I see we have all our usuals here. I appreciate all your support for Tempo and using the project. Please keep letting us know what you need, and please keep up with the group, and we will keep you up to date on what's going on. We're doing our best here.