From YouTube: Tempo Community Call 2021-05-13
A
So, Tempo community call of May. How is Tempo related to Loki? We have an agenda item on that, check that out. Tempo is a distributed tracing backend, whereas Loki is a logging backend, so in one sense they're not really related at all.
A
The Agent allows you to, you know, create logs and create traces in a manner that lets you link those through Grafana, traces to logs and logs to traces, that kind of jumping around. We currently use Loki internally as our kind of index into Tempo, because Tempo is a trace ID only store, but other backends...
A
Cool, state of exemplars, yeah, that's a good one to start with. Today, though, I really want to introduce Koenraad, our newest Grafana Tempo developer. Why don't you jump in here real quick and just give us a quick introduction?
C
Yeah, so hi, I'm Koenraad, I recently joined Grafana Labs. I'm based in Belgium, so I also speak Dutch, but you can just talk English to me, of course. And yeah, I'll be working on Tempo. I'm looking forward to getting to know the community, getting to know who's using Tempo, and, you know, contributing.
A
The party every day, that's right. Cool, so thanks, Conrad, er... sorry, I call you everything. I want to stick to something at least close to your name: Koenraad. I'll try it. Cool, we're glad to have you, super excited to have Koenraad on the team, and I think it's going to be a great addition.
A
Let's talk about 0.7.0. We released that between the previous meeting and this one, so let's go over some headlines here, some of the more important features that we added. Let's see, I have the giant list up here. There's a ton of bug fixes; I won't go into all the details, but you can review those. There's a number of performance improvements, one notably that I really like, that we've started to run internally.
A
I think we were finding our internal bottlenecks were more related to disk and I/O throughput than to the amount of CPU and memory we were spending in the ingesters, so we went with that trade-off. But for different people this might be different: if you had very fast SSDs directly connected to Tempo in some kind of bare metal environment, then it might actually be quite a bit better to stick with a non-compressed WAL. Paged index access sounds boring, but it's quite good.
A
I think that's a great improvement here. Previously we were pulling an entire index as one big blob, which really limited the size of the indexes we could keep, because once they got to a meg, two megs, three megs, it was a huge amount of data to pull for each block. But we've changed it: we've organized our indexes in terms of pages, and it'll pull 256KB at a time.
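As a rough sketch (not shown on the call), the WAL compression and paged-index behavior discussed here map to storage settings along these lines; the field names are assumptions based on the Tempo configuration of that era and the values are only illustrative, so check the docs for your version:

```yaml
storage:
  trace:
    wal:
      path: /var/tempo/wal
      encoding: snappy                # compressed WAL; "none" may be faster on fast local SSDs
    block:
      index_page_size_bytes: 256000   # read the index in ~256KB pages instead of one big blob
      encoding: zstd                  # backend block compression
```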
A
Those might be the highlights. Oh, I don't know if I mentioned it, but we did go to Go 1.16; that's probably worth mentioning. I don't know if there's anything else in here that is particularly interesting, but I think those are the ones I was most interested in. Is there anything else the team remembers that is worth discussing?
A
Oh yeah, that's a great point, good point. Did I even note that in there? Probably not, thanks. We dropped the Jaeger query requirement. So we had this thing called tempo-query, which is really based on Jaeger query.
A
And it was kind of an annoying thing, mainly because it was just an extra piece that you had to run, and it was very confusing to people why it existed. Excuse me. But Grafana 7.5.x and Tempo 0.7.0 now directly support each other and there's no need for this kind of intermediary. It makes our examples much simpler.
A
I think some of our examples still have it, but we're looking to remove those entirely, and Grafana can now query Tempo directly, which just makes everything easier. Frankly, I think we're still going to keep the tempo-query piece around. It is nice, I think, to have a different way to query Tempo than directly out of Grafana, if you don't want to use Grafana. But we're going to stop using it, kind of move it out of the examples, and probably push people to stick with Grafana directly to Tempo.
A
Oh yeah, that is... we used that to do a migration. We had to move one of our Tempo instances between Kubernetes clusters, and we had no way to tell an ingester to just completely flush all blocks to the backend. So we needed a way to tell an ingester to completely push all blocks to the backend before we shut it down, so we didn't lose any data, and that's why we added that piece.
A
Yeah. Zach, are you the guy running it in Fargate?
E
...without having to trigger that on shutdown of the containers. So I'm probably going to have to rebuild your images a little bit more than I already was, so that I can run that as a post... or sorry, a pre-shutdown step. Fargate, unfortunately, doesn't give you any control over that. It's just, they say...
A
So I don't think it would be too difficult to add a config option to the ingesters to just do that on shutdown, like have that be the default behavior on shutdown. We wouldn't run it that way, but for people who are in an environment where that made sense, it would be fine. It would make shutdown take a long time, but you would be welcome to run it that way.
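As a purely hypothetical sketch of the option being discussed (it did not exist at this point, and the setting name below is invented for illustration only):

```yaml
ingester:
  # Hypothetical option proposed on the call, not an actual Tempo setting:
  flush_all_on_shutdown: true   # flush every in-memory block to the backend before exiting
```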
E
Yeah, that'd be cool. I mean, well, I need the same thing on Loki, because I checked in with them and they said, like, oh yeah, we have the same thing: when we shut down we're not actually flushing all the logs. And it's like, oh, so I've probably been losing a little bit of data every now and again.
A
Right, cool, yeah. Let me, I'll add that as an issue, it's easy enough to file that as an issue, and then we can see if we can spend some time on it. I don't think it'd be too difficult.
D
There is one thing: the blocks that get flushed won't be immediately queryable. I mean, I think that's more of a cost-benefit thing. It's worth it to get stateless ingesters, right, without persistent volumes, but you can maybe have gaps in querying. That's, you know, something that we would hope to address if we change the way the blocks are polled and things like that, but, I mean...
D
It just takes time for the queriers to re-poll the blocks. Like, if they poll every five minutes, the ingester will flush the block and then go away. Normally the ingesters hang onto the blocks and respond to queries for that data, because they're the only pod that would know about it. So for a few minutes, until the queriers re-poll the blocklist, the traces that were just flushed wouldn't show up. Yeah, but I mean, I think that's probably worth it.
A
Cool. Oh, Marty's got some stuff in here. Marty, do you want to do these?
D
Sure, yeah. I think there were a couple of things, you know, in 0.7 to call out real quickly: there was a Kafka receiver and a couple of breaking changes. We don't need to go through them, but just be aware of them when going from 0.6 to 0.7, yeah.
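Tempo's distributor receivers come from the upstream OpenTelemetry Collector, so enabling the new Kafka receiver presumably looks roughly like the sketch below; the broker address and topic are placeholders, and the exact options depend on the collector version vendored into your Tempo release:

```yaml
distributor:
  receivers:
    kafka:
      brokers:
        - kafka:9092          # placeholder broker address
      topic: otlp_spans       # topic the spans are read from
      protocol_version: 2.0.0
```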
A
Okay, so we've always supported ingestion of Zipkin, but we don't use Zipkin ourselves, so it was a bit of a surprise, I suppose, when we actually did have a customer who started using it. Zipkin purposefully duplicates span IDs between the client and the server. So when the client reaches out it creates a span and it passes that span ID along, and in Jaeger and OpenTelemetry a new span is created on the other side with a new span ID, but in Zipkin it duplicates that purposefully: it uses the same span ID on both sides.
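For illustration only (not an example from the call), two Zipkin-style spans in one trace can legitimately share the same id, differing only in kind:

```yaml
# Illustrative sketch of the shared-span-ID shape described above.
- traceId: 5af7183fb1d4cf5f
  id: 86154a4ba6e91385        # span created by the client
  kind: CLIENT
  name: get /api/orders
- traceId: 5af7183fb1d4cf5f
  id: 86154a4ba6e91385        # same span ID, reused on the server side
  kind: SERVER
  name: get /api/orders
```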
A
The negative of that is, we wrote a bunch of code that assumed a span ID would be unique within a trace, basically, because it's an ID, but that turns out not to be correct in Zipkin. So we had two small changes. It actually wasn't as bad...
A
...as maybe I thought when we first heard about this, or when the customer first started having this issue. One was in our combine-trace logic, which had this assumption of unique span IDs, and that's been fixed. The second is on query: we dedupe, so we search the trace for spans that have the same ID, dedupe them, and create a new span on the other side, and this logic is straight out of Jaeger.
A
So if someone was using Zipkin, and I think it's got a little less popularity than maybe Jaeger and some of these other pieces, you may have been seeing this, and it should be fixed. It's in the tip of main, but it has not been cut yet, so the next release will have these fixes.
D
Sure, yeah. I think this is always kind of fun to go through as we work on making Tempo more performant and scalable. Also, I mean, we get, you know, discussions around things like how many different pods or different components should I run, and it's hard to say, because workloads vary so much across clusters. So this is just going through an internal cluster that we have, which is our monitoring cluster, yeah.
D
So I mean, I can just go through this and maybe talk about, like, you know, thoughts on some of these things. Currently we're ingesting 1 million spans per second, and that's around 150 megabytes per second to the distributors.
D
So we have plans to go higher. I mean, I guess our goal is probably three or five million spans per second and then higher; three or five million would be kind of the next step. We might even be able to get that with the current design, it's just maybe not worth us sampling that much just for our own internal testing. But yeah, so, just to get a sense of the number of pods.
D
They cut blocks at 700 megabytes, and that's Snappy, so that's after compression. I think before compression that was somewhere around three gigabytes. Just to get a sense of how often data is cut, I think each ingester is cutting a block every couple of minutes. They have 50-gigabyte persistent disks now. The reason for that is they don't actually need that much storage, but it's in order to get enough bandwidth, right: the bandwidth for the disk scales up with its size.
D
This is in Google Cloud, so I don't think we're running into any bandwidth limits right now, but I think we're close, right? So that's probably something else we'll look at. Compression for the WAL was really important there, because that cuts down on how much data is being written to disk at once. For queries, we have 25 queriers and two frontends, so the queriers are the pods that are doing the work and the frontends are serving the API.
D
This one I just got here: our query p90 for looking up traces is somewhere between two and five seconds. Most of the time it's two seconds, but we have occasional long tails over time that are around five seconds. p99 is worse, I think, maybe it's more like 10 seconds; I'm just going to put that in here.
D
So, maybe something like that. We had 36 compactors. I didn't get into any of the configuration options, and I'm not sure that we really need that many, but we did find that it's good to have that many. I think compaction is still something that we're working on tuning and determining the proper number of, yeah.
A
The compactors are relatively small, they're like a few gigs of memory each and one CPU, so they're not doing a lot of work. I think most of our pieces are smaller compared to, say, Cortex or Loki, but I don't really know. Also, I had no idea we had 36; I would have guessed, like, 10, less than that. Maybe I should have known that.
D
I think, you know, when we increased the sampling rate to 1 million spans per second, the blocklist was growing quite a lot. I think it was higher than this for a while, and so I would say compactors are still an in-progress thing being tuned, right? There was the compaction time window that we reduced, I think to 15 minutes, things like that. Yeah, the maximum block size we increased, and then we're running quite a lot of them, I would say, at any given moment.
D
Some of them are idle, though, yeah. So for totals, that's around 210 CPUs and, at any given moment, around 353 gigabytes of memory. And there's more than just this in that entire Kubernetes namespace; there's a couple of other things that we run, like sidecars and things, but that's the total.
A
Thanks. We're definitely at a million spans a second; we were actually at 1.5 million for a while. Sometimes it's hard for us to track what's going on in our different applications to know why we're growing or shrinking, so for a couple of days there we were sitting at 1.5 million, and it was fine. Also, I always cringe at the 210 CPUs, which just seems like so much. The good news, though, is we're going to reduce that quite a bit. There's a PR now in main, although it broke everything, that significantly reduces the CPU usage of the ingesters, about a 70% reduction, which is really exciting, because of the 200 CPUs I think like 150 of them are ingesters, it's almost all ingesters, or at least the majority. So this will be a huge reduction, and I wanted to get this in before we started pushing to that three to five million Marty was talking about.
D
Oh yeah, I forgot to cover this part here, just quickly. Okay, so with our retention that's about 56 terabytes of compressed data, 25 billion traces, yeah. Yes, you know, I was thinking one metric that I'm not sure Tempo has, but that would be interesting, is how many live traces are on each ingester at a given moment. I think that's another thing where comparing across workloads would be useful, yeah, just mentioning it.
A
I also added the link to the CPU reduction PR to the doc; there are some really juicy graphs in there. On our little test bed, the ingesters went from around one CPU down to almost 0.25, so just a huge reduction in CPU usage. Basically, what we did is not marshal proto: we marshal at the distributor level, and then we pass just bytes, like blobs of bytes, to the ingester, which stores them directly into the backend, and we only re-marshal if we have to, like...
A
...if you query it, basically. So we think that's going to be a huge improvement. Now, it basically broke a lot of stuff that we're working on right now, and I really wanted to have this in our operations cluster today for this meeting, but it didn't work out. So I think the team's deciding between one of two paths going forward, and we have PRs up for both of those, but we'll figure that out. Cool, Marty, you're up next. Can you tell us about exemplars?
D
Sure, yeah. I mean, as far as I understand, I think they work well in 7.5.5, and so that's the version I've worked with. I'm not aware of what the changes are in 8, though. Yeah, cool. Oh yeah, let's see, on a previous community call we walked through exemplars and kind of a demonstration of them, so I think we could look that up if anyone is interested. I guess I would say they're still kind of experimental, in progress.
D
So that is why, like, I'm not sure that this is something anybody would put into production today, but just to talk through the current state of exemplars.
D
So on the previous community call, the news was Prometheus 2.26, which had very basic, introductory, experimental support. And now, actually just recently, as of yesterday, Prometheus 2.27 has been released with extended support, more things for exemplars, some of which are going to be required for the other use cases that we want, things like Grafana Cloud, and if you were to run this in your own cluster.
D
So what is new in Prometheus 2.27 is that it will now store the... so it always would scrape the exemplars, but now they will also be written to the write-ahead log. Previously the exemplars were purely in memory, so if the pod was lost or restarted, the data would be gone, but with the write-ahead log it is able to recover some of the data. I'm not sure, I can't remember exactly how long those write-ahead logs are retained, so they're not permanent.
D
This is still not part of, like, the long-term sample storage, you know, but it's usually on the order of a couple of hours, so we'll be able to recover some exemplar data.
D
Additionally, it will now also send the exemplars to a remote-write target. There are other tools that can do this, but currently we're working on receiving that data in Cortex, meaning that we're working on the ingest and query path for exemplars in Cortex, so Cortex would be a downstream remote-write target of Prometheus.
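On the Prometheus side, the wiring for this is small; a minimal sketch, assuming Prometheus 2.27 started with the --enable-feature=exemplar-storage flag (the Cortex URL is a placeholder):

```yaml
# prometheus.yml (excerpt)
remote_write:
  - url: http://cortex:9009/api/v1/push   # placeholder remote-write endpoint
    send_exemplars: true                  # forward scraped exemplars along with samples
```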
D
This would be Prometheus running and scraping the exemplars in your cluster and then sending them to a Cortex backend, yeah. Let's see, so I was mentioning 7.5.5: for querying exemplars, if you have Prometheus, you can query exemplars from Grafana, and I would go with 7.5.5. There was support previously, I think 7.5.3 is where the stability was there, but I think in 7.5.5 there was a small UI fix, so I would go with that one.
D
And another thing, Mario, I think we were talking about this recently: another thing in progress is exemplar support in the Grafana Agent, right? The Grafana Agent makes use of Prometheus, and so now that this has been merged and released, we can bring that functionality into the Grafana Agent, and it will also be able to scrape and remote-write exemplars. I'm not sure if that's actually... yeah, do you want to talk about that for a second?
G
Right, I mean, I don't have much news. Yeah, it's not yet in the agent, but it's, like, the next item on the list. Yeah, so stay tuned to the agent PRs.
A
Cool, thanks, Marty. We're nearly there, right? It's almost done.
D
Yeah, I mean, for the query path there's a PR in progress, and then I think after that, yeah, I think it will be coming to Cortex soon. Nice.
A
Cool, thanks. So memberlist is next. We've had some discussion on memberlist. I was within seconds of ditching it; in fact, I told the team I was going to ditch it, but we stuck with it because the community really wanted it, they like it. And so we have since made some tweaks to our settings and found a lot of stability, so I'm glad people pushed me to keep it.
A
I was about to give up, but what you see there in the doc is our current settings, and since we've had these settings our issues have gone away. The common problem was unhealthy compactors: compactors just sitting in there forever, impossible to forget, and we couldn't figure out why. Since we've made these changes, we've not seen that at all.
A
I can't even remember the last time we had to sit there and smack "forget" on a compactor page, praying that it would go away, getting really frustrated, and, you know, going to extreme measures to get rid of it. So I would recommend these settings if you're using memberlist, I think we've found them very stable, and please give us feedback if you don't, so we can continue to work on them.
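The actual values live in the call doc rather than in this transcript; as a hedged illustration of the knobs involved, the memberlist block in the Tempo config looks roughly like this (key names from the memberlist KV config of that era, values purely illustrative and not the ones referenced above):

```yaml
memberlist:
  abort_if_cluster_join_fails: false
  join_members:
    - gossip-ring.tempo.svc.cluster.local:7946   # placeholder DNS entry
  bind_port: 7946
  gossip_interval: 200ms        # how often gossip messages go out
  gossip_nodes: 3               # peers gossiped to per interval
  retransmit_factor: 4          # retransmissions of each message
  stream_timeout: 10s
  left_ingesters_timeout: 5m    # how long LEFT members linger before being cleaned up
```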
G
Yeah, so, exciting news from the agent: we have a couple of new features that we wanted to talk about. The first one is automatic logging. So, running a distributed system, sorry, a system with distributed tracing, is very powerful but also has a lot of challenges. One of those is trace discovery.
G
Well, as you may know, Tempo doesn't offer a direct tool, a way to just find a trace based on different parameters, so you usually have to leverage other tools like logging, metrics, and the like. But maybe your instrumentation, your logging, or your metrics don't support exemplars, like we were talking about, or your logging doesn't support injecting trace IDs.
G
So we thought, yeah, maybe there is another way, another tool for trace discovery, and that is automatic logging, which is in the agent, which you can think of as just part of the tracing pipeline.
G
At the moment it just supports Loki. What it does is use a Loki instance within the agent, which you can think of as just a Promtail client, so it pushes logs to Loki, and then you can use Loki to find those traces.
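Roughly, the Agent config for this looks like the sketch below; the traces block was still called tempo in Agent configs around this time, and the exact key names are assumptions, so check the Agent docs for your version (the Loki URL, instance names, and receiver are placeholders):

```yaml
loki:
  configs:
    - name: default
      clients:
        - url: http://loki:3100/loki/api/v1/push   # placeholder Loki endpoint
tempo:
  configs:
    - name: default
      receivers:
        jaeger:
          protocols:
            thrift_http:
      automatic_logging:
        loki_name: default   # which Loki instance above to push the log lines to
        spans: true          # write a log line per span
        roots: true          # write a log line per root span
        processes: true      # write a log line per process
```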
G
I have a live demo prepared, but if you have any questions or comments beforehand, yeah, we can address them. If not, let's see.
G
Right, well, I have it here. It's been running for quite some time, so hopefully it hasn't crashed.
G
Yep, awesome, right. So I'm just running the Docker example from the agent as it is in main, and we have automatic logging enabled for all three kinds of things you can log, which are spans, root spans, and processes.
G
So yeah, these logs are logged with a label, which in this case is tempo. You can override that, and then you can select any of the three. For instance, let's see what spans are being processed in the agent, and there you have it. The logs are just key-value pairs which describe the properties of these spans, and then, what is cool is that...
G
Well, sorry if you can hear my keyboard, I'm one of these people with mechanical keyboards. You can do interesting searches for these traces that you couldn't do otherwise: I'm just parsing the log line, and I'm searching for traces which have a duration longer than one second, and then maybe I want to search for ones that also have...
G
...errors, nice. And then, yeah, we have all these traces, and to see them, the Grafana folks already built a nice integration here: you can just open the log line, click on Tempo, and watch the trace and see what's going wrong with your service.
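A hedged sketch of the kind of LogQL query being shown in the demo; the label name and the logfmt field names (duration, status) are assumptions about what the automatic-logging lines contain, not something confirmed on the call:

```logql
{tempo="default"}
  | logfmt
  | duration > 1s
  | status = "error"
```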
G
Yeah, this is the quick demo. Yeah, as I said before, at the moment this only supports using Loki; we're looking at probably also adding support to just log to standard out. And, well, just by using the tool, hopefully we'll come up with more improvements.
E
Question: am I correct in assuming that I can add Loki labels based on, like, the source that it came from? Okay, great.
G
Right, yeah, it has all the Loki capabilities, so you can use all the Promtail client configuration, and, like, constant labels, yeah. Well, I can think of more uses for it.
G
Sure, cool, yeah, so, a very powerful tool. All right, and the next feature is that we added remote write for span metrics. So, as we showed last week, we introduced metrics generated from the spans, again in the agent, in this tracing pipeline. But the way they were being exported was just via the Prometheus exporter from the OTel Collector, which basically just exposes a metrics handler that you would have to scrape. So another option that we have introduced is, instead of...
G
...exposing an exporter, maybe we can use the remote-write capabilities of the agent. So now you can use a Prometheus instance within the agent, it's the very same case as the Loki instance from the previous feature, and directly push to any Prometheus-like backend, like Prometheus, Cortex, or anything. Yeah. And then finally, well, we have just mentioned it...
G
...the next item on the list is adding support for exemplars in the agent, now that that pull request has been merged upstream into Prometheus, yeah, and hopefully that will open up a lot for that feature. And well, that's it for me.
A
Thanks, Mario, cool. Well, I think that's roughly it for the agenda we had. If there are any other questions or any other points of discussion that people would like to cover, feel free to ask, or put it in the document, or whatever works.
F
Can you hear me? Yeah? Yep. I saw Mario's demo, that was pretty cool, man.
A
I think I'm excited about the span logging. I want to see it in; I want to compare it to other metrics that we have, I want to see what it tells us about what we're ingesting. I think we'll learn all kinds of things with auto logging.
A
Cool. Well, if that's it, we can go ahead and call it. We're a little early, but no big deal. Everybody enjoy whatever's left of your week, and we'll see you in a month, I suppose. And yeah, feel free to jump in on the Slack channel or add issues to the repo; there's a community forum as well to ask questions. We do our best to answer all of these quickly and to engage with people through all of them, so feel free to connect with us.