From YouTube: Grafana Tempo Community Call 2022-07-14
Description
Tempo Community Call
- New Helm chart maintainers
- Parquet in Ops! Metrics shared
- Queues/DBs in service graphs
- APMish table demo
Join our next Tempo community call:
https://docs.google.com/document/d/1yGsI6ywU-PxZBjmq3p3vAXr9g5yBXSDk4NU8LGo8qeY
Learn more at https://grafana.com and if all of this looks like fun, why not see if there's a role that fits you at https://grafana.com/about/careers/ ?
A: We are. We have a decent presentation for you today, actually better than I thought coming into today. A lot of stuff was thrown at me; I didn't realize how much we had gotten done in the past month. Let me present this here. It is showing, hopefully? Yep. So, stuff to do today, stuff to cover.
We have some new Helm chart maintainers. I wanted to highlight that; Anya reached out to a couple of people who had been doing good work in the community, and we brought two more maintainers in for the Helm chart, which is awesome, because it's often very difficult to keep up with that project, and seeing more people involved in there is very cool. We have Parquet in our Ops cluster, which is really cool. It's a very large cluster, we have it stable there, and we're excited to share some metrics and stats on that, Parquet being the upcoming format for the Tempo backend. There are a lot of really neat improvements there and a lot of neat capability given back to our users. And then we have our queues and databases in the service graphs.
It's a cool extension of our existing service graph features that Koenraad is going to talk to us about. And our guest presenter Joey, give me the last name, Joey. How do you say it?
A: Done, thanks man. Okay, Joey will be showing us some of the cool work he's been doing on this APM-ish table, as we call it: an aggregate table built from the metrics we produce, to give a high-level overview of endpoints in your various environments and your applications, to help quickly diagnose issues. So we have a pretty cool agenda today. Do not hesitate to ask questions, put things in the agenda doc, or interact however you'd like. We'll keep an eye on all the different ways to talk to us and try to answer questions, or if you just want to bring up topics of conversation or whatever, it's certainly not limited to the four things on the slide.
A: We'll start with our new Helm chart maintainers, one of whom is currently on the call: Martin and Fausto. Like I said, both of these engineers have been doing great work in our Slack channel, helping other users out, which is awesome, love to see that, and helping on the Helm charts. Any help we can get there is fantastic, and I'm so happy that these two engineers jumped in and have been doing good work for us. It's going to help the team a lot; Helm charts are not something we use internally.
A: Cool. If you'd like to say something, you're welcome to. If not, if you'd like to accept your award and thank your parents and, you know, your best friends and all that, you're more than welcome.
C: I think it's very cool, yeah. I really like the products, and yeah, I hope I will be able to help some people, so yeah.
A: Cool, thank you so much. I have definitely seen a lot of activity in Slack and a lot of help from you, so I appreciate all that effort. Absolutely.
A: Cool. Parquet is in Ops. So we tried this about two weeks back.
A: Okay, so first of all, Ops is a fairly large cluster: 1.5 million spans per second on average; it peaks to about 2 million and dips to maybe 1.2 million overnight, and we take in about 250 megabytes a second on average. The previous attempt went very poorly. This was about two weeks back: we installed it, we stabilized it, we thought things were looking pretty good, but there was a bug in the Parquet library we're using.
A: It was dropping something like 20% of our total traces on compaction, which is not really acceptable for a backend, I would say. So you can see here, this is the vulture. It's a small process that sits alongside Tempo; it pushes traces into Tempo and then queries them back repeatedly and makes sure they come back intact. It's a durability or accuracy kind of tester, and it was very unhappy during the period in which we installed it.
A: You can see 20-ish percent of traces were just missing, and the incorrect-result rate was around maybe three to five percent: traces that were just wrong, that came back with unexpected missing spans or incorrect structure or whatever else. So it was grim, and we removed it from Ops after a couple of days. But we installed it again just a couple of days back, and vulture is now happy.
A: So this is our current look right now over the past few days. You can see it says three or four traces not found on a search, so we're looking at something like a thousandth of a percent of failed searches. We can dig into that; these are numbers we can handle. It may have been a bug in any number of different components. These are the kind of transient errors that we can move forward with; the 20% meant something horrible was happening.
A: This could mean any number of different tiny issues, and we will continue to dig on that. So things are looking great right now with Parquet in Ops, after kind of a frustrating first attempt. We put it in there, we had all these high hopes, we stabilized it, and then the vulture metrics just kept getting worse and worse. But today vulture is happy, and that makes us happy. It means compaction's working, it means ingestion's working, queries are working, and everything's heading in the right direction now for this new backend.
A: So let's talk about the resource usage. It's going to cost more resources to use this backend. The trade-off here is that we do more on the front end: we have to do more work when we receive the data, so that we store it in a format that's more easily accessible for queries and for reading. That's the goal here.
A: So it is, without a doubt, going to cost us a little bit more. I've pegged this at 1.5x, and that's roughly across the entire cluster, all resources. I'd say it's up about one and a half right now. We can improve on that, but I would be shocked if we could ever get it all the way back down to the proto format.
A: What our old format did was about as dumb and easy a thing as you could possibly do, but it made the search capabilities we're trying to build almost impossible, because we had to consume every single byte that was sent to us in order to execute any query. From the simplest query to the most complex, we had to pull every single byte that was sent to the cluster. So this more advanced format is costing us, like I said, about 1.5x. This is CPU.
A: We can see that ingesters in particular are up; they look to be up almost 2x over their previous usage. Compactors are up as well; we've also scaled compactors quite a bit. Compaction is a bit slower, and we've had to scale up our compactors to try to keep our blocklists under control. So part of that is just compactors doing more work, and part of that is us scaling up as well. Queriers, query frontend, distributors: those components were basically not impacted.
A: If anything, it looks like distributors are down a bit, and I couldn't tell you why. I don't know if anybody has any guesses, but distributors are maybe down by a couple of percent, ten or something; I'm not sure what would have caused that. Memory? Definitely an increase, in compactors in particular. I think Annanay, one of our longtime engineers, is taking a deep dive into that and trying to figure out what's going on there. Ingester memory went up a lot at first, but really it's not that much larger, maybe only 10 or 20 percent over what it was using before, so ingesters are looking good overall. And then, again, distributor usage is down. I don't know why; I don't think there's anything we changed that should have impacted distributors, but distributor usage is down. Maybe we can dig into that a bit.
A: Queriers are about the same, and the query frontend is about the same. So those are all of the major components that would be impacted by a change in the backend; I left out the metrics-generator just because it feels like it would be unimpacted. We're seeing increased resource usage, as expected. Like I said, we're doing more on the front end to make our lives happier on the read path, on the backend.
A: Okay, so, oh, I thought I had a slide before that. No, I guess I don't, okay. So what's the impact of this? What's the point of all this? I think there are a couple of really good things to share, but we're going to start with this query performance. This particular query is looking for a cluster label, like cluster=prod or cluster=us-east-1, or whatever might make sense in your environment.
A: Right, previously we had to pull every single byte from the backend, basically, to answer this question, so it took about 40 seconds for one hour of data, which was terrible. Nearly 10,000 queries a second, about eight gigs a second pulled in total across the entire query; we were looking at something like 40 gigs a second as the rate at which we were processing our data, which is just not good. It was barely hanging on. We just never really liked it, which is why we embarked on building this new backend format at all. So this query now, against this newer data, this Parquet backend, is only four seconds or so.
A: The number of requests per second is down 10x, which is huge. The Parquet backend is really allowing us to be very specific about the data we're requesting. Before, we pulled everything; now we can really home in on a particular piece of data and pull only that, and so we are pulling way fewer gigs per second from GCS with way fewer requests per second, and our total effective query rate is something like 140 gigs a second now. That's a little bit of a made-up number, because it makes less sense now; I'm trying to pick a number in terms of the old math, but the new math is very different, because we are pulling such a smaller percentage of the block.
A: I'll also say I'm being a little cheeky when I pick cluster equals, because cluster is a special column that's broken out and requires us to pull less data. It's kind of a blessed column: we believe people use cluster and namespace and some other standard columns a lot, so we split those out. But I also wanted to pick it because it highlights an important part of this schema, this new format, which is that we can impact the rate at which you can pull data by improving our format; we can break out other columns to make them pull less data. It also means your users can learn how to shape searches to reduce the amount of data they're pulling. If you search on a resource label, if you only look for something that shows up on the resource, it's going to be a lot faster than if you look at every span, and this makes sense: there are a lot more spans than there are unique resources.
A: If you look for a trace-level thing like a duration or a root span name, that's even faster, because there's one per trace, versus a bunch of clusters per trace or a whole bunch of span tags per trace. So the point being made here is that there are actually faster queries than this one, and there are slower queries than this one. Before, there was no such thing: every query was the same, pull all the data and chug through it the best you can, and it was terrible.
A: Now we have a format that we can iterate on. We have a format that your queries kind of make sense on: if I'm asking for a small piece of data, I should expect a faster query, and that's going to be true with this new backend format. So overall this is extremely exciting. This is exactly what we were looking for, and everything's heading in the right direction. We are really hoping to cut a 1.5 in the next couple of weeks, with this as an experimental backend format that you can turn on.
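As a rough sketch of what opting in might look like once 1.5 ships (the exact key and block-version name should be checked against the Tempo 1.5 docs, so treat these as assumptions), the experimental format is selected in the storage configuration:

```yaml
# tempo.yaml (sketch, assumed keys): opt in to the experimental Parquet block format.
storage:
  trace:
    block:
      version: vParquet   # leave unset to keep the current default block format
```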
A: Awesome performance on a really high-volume cluster. I want to thank Marty and Annanay for about six months of work on this, and probably about two months of work on a different format before this that really fell on its face. They've put an enormous amount of effort in, and I think we're seeing the results of that just in the past few days. They have done great work, both of those engineers.
A: If you want to unmute and thank your parents, you also can. Yeah, yeah, I mean, we still have the time to do so; really excited. Cool, absolutely, absolutely. All right, that might be my last one. Yep, okay, cool. Let me hand the mic off to Koenraad, I believe, who's going to talk about some recent improvements we're making to the service graphs.
D: Yeah, so, for something totally different: with the metrics-generator we can generate an overview of your service topology using service graph metrics. Up until now, we would only look at direct connections between different services.
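For context, and not something walked through on the call: service graph metrics come out of Tempo's metrics-generator, which has to be enabled and pointed at a Prometheus-compatible remote-write endpoint. A minimal sketch, with an assumed Prometheus URL and WAL path:

```yaml
# tempo.yaml (sketch): enable the metrics-generator and its processors.
metrics_generator:
  storage:
    path: /var/tempo/generator/wal                 # assumed local WAL path
    remote_write:
      - url: http://prometheus:9090/api/v1/write   # assumed Prometheus endpoint
overrides:
  # enable processors per tenant; service-graphs feeds the topology view discussed here
  metrics_generator_processors: [service-graphs, span-metrics]
```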
D: Yeah, it's this one, yeah, you should be able to see this. So this demo is based on the intro-to-mlt repository we have. It's just a demo application with a couple of small microservices talking to each other, and I've extended it to also include a messaging system between two services, so you can just run it in Docker Compose and then start using it. The version of Tempo running in this does not add the queues yet, but Tempo 1.5 should include this change.
D: If you go to Grafana, I can just look up a couple of traces, which are still being generated. So you can very quickly see what the services are doing. We have three services involved in this system: a requester just generating random requests, doing a request to the server; the server does some work, and here is a span which is a database call.
D: The reason you can see this is partly based on the name, but we can also see that there are database attributes present. This is defined by the OpenTelemetry semantic conventions: if you're doing a call to a database, they recommend setting these attributes, like the database name, the statement, maybe the system, the user. Most of these are optional; some are required or recommended. It's not enforced in any way, but it's a convention that's agreed upon, and we are piggybacking on that convention.
D: So we rely on the OpenTelemetry semantic conventions to detect these special spans, but basically, if you're using an OTel SDK, this should just work out of the box: if it's instrumenting your database calls, it will do this by default, because all the SDKs implement this.
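To make that concrete, here is a sketch rather than anything from the demo, assuming the Python OpenTelemetry SDK and made-up service and database names. A manually created database client span carrying the db.* attributes the processor looks for would look roughly like this; with auto-instrumentation these attributes are set for you:

```python
# Sketch: a database client span with db.* semantic-convention attributes.
from opentelemetry import trace
from opentelemetry.trace import SpanKind

tracer = trace.get_tracer("demo.server")  # hypothetical instrumentation name

def fetch_order(order_id: int):
    # Span kind CLIENT plus db.* attributes is what marks this as a database call.
    with tracer.start_as_current_span(
        "SELECT orders",
        kind=SpanKind.CLIENT,
        attributes={
            "db.system": "postgresql",
            "db.name": "orders",      # hypothetical database name
            "db.statement": "SELECT * FROM orders WHERE id = ?",
            "db.user": "app",
        },
    ):
        pass  # execute the actual query here
```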
D: So you can see the server, there's a call to the database here, and then a bit later the requester will publish a message on the queue, which is RabbitMQ in this case, and the recorder will consume it. The recorder is kind of like an archiving process: it consumes messages and then just writes them to disk. Don't think too much about it, it doesn't really matter.
D: What you can see here is two spans from two services across a messaging system, and the way you can recognize this, again, is through the semantic conventions. They say that if you're doing a request across a messaging system, a queue, you should include the messaging attributes, for instance messaging.protocol, the protocol version, the system, stuff like that, and the publishing span should have span kind producer, while the receiving or consuming one should have span kind consumer.
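Again as a sketch rather than something shown on the call, with the same assumed Python SDK and hypothetical queue and operation names: the producer and consumer sides of a queue hop look roughly like this, and it is the producer/consumer span kinds plus the messaging.* attributes that let the processor pair them up across the queue:

```python
# Sketch: producer and consumer spans across a message queue.
from opentelemetry import trace
from opentelemetry.trace import SpanKind

tracer = trace.get_tracer("demo.messaging")  # hypothetical instrumentation name

MESSAGING_ATTRS = {
    "messaging.system": "rabbitmq",
    "messaging.protocol": "AMQP",
    "messaging.protocol_version": "0.9.1",
    "messaging.destination": "orders",       # hypothetical queue name
}

def publish(payload: bytes):
    # Emitted by the sending service (the requester in the demo).
    with tracer.start_as_current_span(
        "orders publish", kind=SpanKind.PRODUCER, attributes=MESSAGING_ATTRS
    ):
        pass  # hand the payload to the broker client here

def consume(payload: bytes):
    # Emitted by the receiving service (the recorder in the demo).
    with tracer.start_as_current_span(
        "orders process", kind=SpanKind.CONSUMER, attributes=MESSAGING_ATTRS
    ):
        pass  # archive the payload here
```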
D: So again, we're basing ourselves on these attributes being present on the span, and if they're present, then the processor will detect these extra components and the service graph will then also include them. Yeah, this is basically the view right now. So what you can see now is the different systems.
D: You have the requester doing requests to the server. The server is also doing requests to the database, in this case Postgres. So you can see the amount of requests and the amount of failed requests here, and the requester is also doing requests to the recorder.
D: For now there's no support in the Grafana front end to visualize these connections in a different way, but we're planning to add this. In the metrics we generate, we can include extra metadata that tells you this isn't a normal connection, it's a connection across a queue, and then maybe we can add an icon here which clarifies, you know, this is a RabbitMQ link here between these two, and this is the database.
D: So you could use a different icon or something like that, and then you should be able to very quickly see all the components involved in your system. You can see all the components using the same database and how many requests they're sending to each component. So yeah, this is just an enhancement of the processor which adds some extra data and makes sure you can see all the components present in the system.
A: I would say I'm pretty excited about that. That's a request we get a fair amount from various customers and external people. We don't use a lot of pre-built queuing in our products, and some of our teams use some relational databases, but it's just an area we don't use a ton of, because we kind of write these databases ourselves, and I think a lot of people externally, of course, use relational databases.
A: Queuing is super popular, and getting support for that is nice. I'm going to stare at Conor a little bit and imply that we'd love to see some better icons on there to show that there was a queue, and a database would be nice too, but I know that team's working hard on those improvements.
D: Yeah, I'll be sure to lay out all the things that we expose so they can use it in the front end. Yep, very cool. But yeah, it's really cool, because if you used a queue before, it would just be a gap in the service graph: you couldn't detect it and you didn't have a connection between these services, and now they show up. It's also backwards compatible, so it also shows up in an old version of Grafana; it will just not be special in any way. It will be like a regular connection, so you can't see whether it's using a queue or not.
E: Yeah, the PR is open, but it hasn't been merged yet. Okay.
A: I thought there was work on that. So anyone who uses the OpenTelemetry Collector can have some of these similar features with the newer versions of Grafana, even if you aren't using the Tempo backend or the metrics-generator. We do try to keep those metrics in line with the OpenTelemetry Collector metrics, so that it's usable by a lot of different people in the front end, and not just by Tempo users.
B: Thanks, Koenraad, super cool stuff we're working on there, so it's great to see how it's progressing. Yeah. So this is a screenshot of the APM table, but I'll give you a demo in a moment for sure. Essentially, what we're getting at with this table is to provide application performance management out of the box, so we don't want you to have to set up a whole lot of extra resources to get this data.
B: So next slide, please, if you don't mind, and we'll go through a couple of the points. So where does this data come from? Well, it's essentially coming from the metrics-generator; that's how the table is produced and that's where the data is coming from. And what do you see in the table? Well, it's for viewing our top five RED span metrics.
B: So, essentially, the requests and the amount of those requests that have errors, and also our P90 as well, so that we can see the requests that are taking a bit too long, which we might want to drill down into a bit further to figure out what's happening with them and, of course, fix them as quickly as possible.
B: The APM table will also allow you to jump straight from the table to Prometheus, in the case of the rates, the error rates, and the durations, to show you the query being executed itself and the results that we're getting back; it's quick access to get essentially from traces straight over to Prometheus.
B: We also provide a little link as well for going over to Tempo, which fills in the name to save you a little bit more time, and of course we have capabilities to filter the table and the service graph as well. So what I might do now is share my screen and give you a quick look at the APM table in action.
B: Similarly, if we want to check out the errors, for example: now, the top one doesn't actually have an error rate, which is great, but the others do, so let's click on one of those and have a look. It'll give us information on that span name in particular, and it enters the status code for us as well. So the idea of the APM table is basically to give us an overview of what's happening with relatively low effort involved, and to allow you to jump from the table.
B: It takes you from that kind of summary view into the places you want to be, to learn more about what the summary is actually showing you. And the coolest part about this, I think, is the P90 durations. So let me click on one of those. Of course it enters the query again for me, with exemplars as well, which saves me even more time, because now I can see what was happening in the trace, for example, when the P90 was fired.
B: Or perhaps which span was the culprit causing the long duration, and also jump to logs from there, and basically have access to the other pillars of observability: being able to jump around from my traces to my metrics to my logs, and back to my traces again if I need to, all depending on what information you're looking for. It all comes full circle really, which is lovely. Let me close that out there now.
B: I'll give you just a quick look at the little Tempo link at the end, which opens up the Tempo search view for you and enters the span name. So of course we're open to all kinds of suggestions to improve this workflow or any of the others, but the main idea here is just to make, I suppose, your life a bit easier when using this table and getting the information you're looking for. And of course you can filter the table as well.
B: So, for example, and again the table lives above the node graph, of course, if you want to filter by service and you select app and run the query, it updates the table with those tags applied, and it will also update the node graph as well. So you can filter on both at the same time if you like, just kind of helping you figure out what's going on in your system as quickly as possible.
B: Of course, the service graph then has its own links as well that you can access, to take you to more areas of the tool that can help you figure out what's going on with the system as quickly as possible. And you can also filter by span kind or name; for example, if you only wanted server spans, hit run and it will filter the table for you again.
B: So of course, I mean, this is early days. This is a version; it's not version one exactly. There's been a lot of great feedback from the Tempo team, but we're open to all suggestions. Anything that you believe would make this better, let us know; we'll be happy to hear it.
B: I suppose I'll stop sharing my screen, though, and that's about it for the demo. What do you think?
A: Cool. Did the logs link, the trace-to-logs link, change?
B: Yes, it did, because now it's not only logs that we support linking to; Conor has done some great work around linking to the other pillars as well. Yeah, yeah.
A: I noticed the small change there, yeah. Overall, this is really neat. Definitely looking forward to some more feedback on this. This is behind a feature flag, right?
B: Yeah, and we've actually had good feedback around that very recently. We're thinking about removing, I mean, it's still early days, but we could just as easily remove one or two of those flags to make it easier for people to get at the APM table and whatnot. But definitely, making it clear in the documentation is very important for us if we don't remove them. Yes, yeah.
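For anyone who wants to try the table, it is enabled with a Grafana feature toggle. A sketch of doing that in a Docker Compose setup; the toggle name tempoApmTable is an assumption to verify against the feature-toggle docs for the Grafana release you run:

```yaml
# docker-compose.yaml (sketch): enable the APM table feature toggle in Grafana.
services:
  grafana:
    image: grafana/grafana:9.1.0
    environment:
      # Assumed toggle name; confirm the exact spelling in the Grafana docs.
      - GF_FEATURE_TOGGLES_ENABLE=tempoApmTable
```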
A: I guess it kind of makes sense at this point to consolidate all of this around one feature flag; I would agree with that. But yeah, we run this in our Ops cluster, I'm sorry, yeah, we run this on all our clusters, these metrics, and we generate this. We've had a lot of good back and forth over the past couple of months, and this has really come around.
B: Oh, you're very welcome. Yeah, I can't believe you got it the first time, honest to God. I think I could count on one hand the people that got it right the first time.
B: 9.1.0. We started on this a little bit before GrafanaCONline, I think about two weeks beforehand, so we didn't have something ready to go in with the 9.0 release, which happened maybe five or six days after GrafanaCONline. So yeah, it'll be available in 9.1.0, yeah.
A: Yeah, I think that's roughly what we have on our side. Does anybody have any questions, or any other topics of conversation they'd like us to talk about with regard to Tempo or tracing or OpenTelemetry or whatever, or what our favorite pizza toppings are? Feel free to ask, and we can do our best to answer, unless we really hold our favorite pizza toppings close to our chest and refuse to share that kind of information. Marty's pretty dodgy about his pizza toppings, I think.
A: All right, I want to thank you all for your time. It's been a good call. Like I said, we had way more content than I thought. Thank you so much, Joey, for jumping in. The Parquet work recently going into Ops is awesome. I expect to see a 1.5 in the next month, and we'll be talking about that in our next community call and hopefully have even more Parquet metrics; hopefully we'll spend a little time doing some resource improvements. And the march to TraceQL is on as of this week; I really look forward to seeing what we do with that next. All right, thank you all for your time. I will see you when I see you. Take care.