From YouTube: Tempo Community Call 2020-12-10
B
I will do my best. Let's look at the doc for the meeting, which I can't... there it is. So the community call doc is up; it's attached to this meeting.
B
Anyone is welcome to add agenda items, so we have a couple things that we thought to talk about, but anybody is welcome to add anything they'd like and we will address them as we get to them. First, a couple of updates, a couple of changes, I guess, since the last community call. Both of these are from Daniel (thanks Daniel, apparently the only one doing work on Tempo right now). He added redis support, which is awesome. We don't use redis internally, or I think we do for some things.
B
We mainly use memcached, but redis is of course extremely popular and an awesome addition to Tempo; a lot of people like redis. And then k6, which is, I believe, where you work, is that right, Daniel?
C
B
Cool, very cool. So I guess they have this benchmarking tool, that's really neat, where you can do kind of like these benchmark tests in JavaScript, and so we've added those into our CI pipeline to get some kind of idea of changes and how those impact, you know, ingestion and stuff. We hope to expand on those kind of over time. Other than that...
B
We're gonna talk a little bit about where things are internally at Grafana, but I want Ananya, who's been working mainly on the query path, to walk through some of those changes and what's going on there, and then Marty's gonna talk about compaction, which he has spent a fair amount of time on, and then we can get back to making...
D
So yeah, thank you. So yeah, I'll talk a little bit about the work on the query path that's been happening. We started this because the query path has been fundamentally incomplete: there's not been a way to scale it. The way it works right now is that a query hits a single querier, and that one single querier needs to do all of the work of fetching the trace; it's going to be looking up the bloom filters on every block that's in the backend.
D
So at Grafana Labs we have around 400 blocks in our backend, or, I think that's gone up a bit in this last week, but yeah. So a single querier fetches the bloom filters for all of these blocks, filters them out, fetches indexes for the ones where it gets a hit, and then finally fetches the object by ID. And so this was a way in which we were not able to shard this work out or sort of split it up between multiple queriers.
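A minimal Go sketch of that unsharded flow. Every name here (Backend, BloomFilter, Index) is a hypothetical stand-in for illustration, not Tempo's actual API:

```go
package main

import (
	"context"
	"errors"
)

// Hypothetical types standing in for the real backend machinery.
type BlockMeta struct{ ID string }

type BloomFilter interface{ Test(traceID []byte) bool }

type Index interface{ Find(traceID []byte) (record []byte, ok bool) }

type Backend interface {
	ListBlocks(ctx context.Context) ([]BlockMeta, error)
	ReadBloom(ctx context.Context, b BlockMeta) (BloomFilter, error)
	ReadIndex(ctx context.Context, b BlockMeta) (Index, error)
	ReadObject(ctx context.Context, b BlockMeta, record []byte) ([]byte, error)
}

var ErrTraceNotFound = errors.New("trace not found")

// findTrace is the whole query path as described: one querier walks
// every block in the backend by itself (bloom filter first, index on
// a hit, then the object). Nothing here can be split across queriers.
func findTrace(ctx context.Context, be Backend, traceID []byte) ([]byte, error) {
	blocks, err := be.ListBlocks(ctx)
	if err != nil {
		return nil, err
	}
	for _, b := range blocks {
		bloom, err := be.ReadBloom(ctx, b)
		if err != nil {
			return nil, err
		}
		if !bloom.Test(traceID) {
			continue // definitely not in this block
		}
		idx, err := be.ReadIndex(ctx, b)
		if err != nil {
			return nil, err
		}
		rec, ok := idx.Find(traceID)
		if !ok {
			continue // bloom filter false positive
		}
		return be.ReadObject(ctx, b, rec)
	}
	return nil, ErrTraceNotFound
}
```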
D
So each querier had to do all of the work for a given query, and there was no way, if we see latency climbing, there's no way we can just up the number of queriers in our deployment and bring it down. That was bothering us, and so we decided to work on a query frontend, which is very similar in function to Loki's and Cortex's. It sort of shards up the block space and splits the work between the queriers.
D
So we can actually bring down latency by scaling up the deployment of queriers, and that's been the work that's been happening for the past two weeks. There are two parts to this. The first PR, which is already in, is about adding some extra parameters to the query so it can support the sharded work, and these parameters look like block boundaries. So now we're going to specify the block boundary for each query.
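Concretely, sharding the block space means carving the keyspace of block IDs into contiguous ranges and passing each range's boundaries along with the query. A hedged sketch of how such boundaries could be computed over 16-byte block IDs (illustrative, not necessarily how the PR does it):

```go
package main

import (
	"encoding/binary"
	"math"
)

// createBlockBoundaries splits the 16-byte block ID keyspace into
// `shards` evenly sized ranges. Shard i covers block IDs in
// [boundaries[i], boundaries[i+1]). Assumes shards >= 1.
func createBlockBoundaries(shards int) [][]byte {
	boundaries := make([][]byte, 0, shards+1)
	step := math.MaxUint64 / uint64(shards)
	for i := 0; i <= shards; i++ {
		b := make([]byte, 16)
		// Spread boundaries across the top 8 bytes of the ID space.
		binary.BigEndian.PutUint64(b[:8], step*uint64(i))
		boundaries = append(boundaries, b)
	}
	// Force the final boundary to all 0xff so the last shard reaches
	// the very end of the keyspace despite integer-division rounding.
	for j := range boundaries[shards] {
		boundaries[shards][j] = 0xff
	}
	return boundaries
}
```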
D
So every query now will first hit the query frontend, which will be responsible for splitting it up, adding all of these parameters, and then inserting it into a queue; the queriers will sort of feed off of this queue, work on their set of assigned shards, and then return the trace to the query frontend. So that appears like it's almost done; I mean, it's in the draft stage on GitHub and I'm going to update it soon.
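In pseudocode terms the frontend's job is just: split, tag with boundaries, queue. A rough Go sketch with made-up type names (not the actual Tempo types):

```go
package main

// shardedRequest is a made-up stand-in for one sub-request the
// frontend enqueues; blockStart/blockEnd bound the slice of the
// block ID space this sub-request is responsible for.
type shardedRequest struct {
	traceID    []byte
	blockStart []byte // inclusive
	blockEnd   []byte // exclusive
}

// enqueueShards splits one incoming trace lookup into per-shard
// sub-requests; queriers consume the queue and search only their
// assigned slice of the block space.
func enqueueShards(queue chan<- shardedRequest, traceID []byte, boundaries [][]byte) {
	for i := 0; i+1 < len(boundaries); i++ {
		queue <- shardedRequest{
			traceID:    traceID,
			blockStart: boundaries[i],
			blockEnd:   boundaries[i+1],
		}
	}
}
```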
D
It should also help do exhaustive search: now that we are able to split up the shard space, the block space, we'll have multiple queriers working on the same query, and they can all return results. And if a trace is split between blocks, we can combine the trace in the query frontend and then return it in a really short time compared to what we would have before. And yeah, that's been the work that's been happening.
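When a trace's spans are spread over several blocks, each shard may return a partial trace, and the frontend merges the partials before responding. A naive sketch of that merge (Trace and Span here are simplified placeholders; real combining works on the protobuf-generated types):

```go
package main

// Simplified placeholder types; the real ones are protobuf-generated.
type Span struct{ SpanID string }

type Trace struct{ Spans []Span }

// combineTraces merges partial traces returned by different shards,
// deduplicating spans by span ID so overlap between blocks (for
// example from replication) doesn't duplicate spans in the result.
func combineTraces(partials ...*Trace) *Trace {
	seen := make(map[string]struct{})
	out := &Trace{}
	for _, p := range partials {
		if p == nil {
			continue
		}
		for _, s := range p.Spans {
			if _, ok := seen[s.SpanID]; ok {
				continue
			}
			seen[s.SpanID] = struct{}{}
			out.Spans = append(out.Spans, s)
		}
	}
	return out
}
```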
B
So, to be clear, I don't know, I guess we've seen block lists up to a thousand or so, generally when things are broken. Right now we're in the 600s, we're at over a billion traces, and our p50 is at about a second. So I don't think that's terrible, but this would definitely give us some room to improve and actually devote resources to scaling.
B
Another thing I'd like to add for the query path: we intend to drop the requirement of the jaeger-query kind of plug-in thing. When we first made all this, Grafana did not have any support for traces, and so the only way to see Tempo traces was through jaeger-query and the gRPC plugin, and so we're going to drop the req...
B
I think we're going to leave it, we're going to have it as an option for people who want to use it that way, but we are going to just drop the requirement of it and have Tempo be able to directly return responses to Grafana. There's a ticket out there, there's an issue out there, about how Tempo struggles to return 100,000-span-plus traces, and we found that it was spending almost all of its time just marshalling and unmarshalling repeatedly, to protobuf and JSON and everything under the sun.
D
Yeah, we also merged that with the last PR, content negotiation, and we're marshalling into proto now instead of JSON. So.
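The shape of that change, roughly: the querier looks at the request's Accept header and marshals protobuf when asked, falling back to JSON. A hedged sketch, where the media type string and handler shape are assumptions rather than Tempo's exact code:

```go
package main

import (
	"encoding/json"
	"net/http"

	"github.com/golang/protobuf/proto"
)

// writeTrace answers with protobuf when the caller negotiates for it,
// and JSON otherwise. Marshalling proto once is far cheaper than the
// repeated JSON round-trips described above.
func writeTrace(w http.ResponseWriter, r *http.Request, trace proto.Message) {
	if r.Header.Get("Accept") == "application/protobuf" {
		b, err := proto.Marshal(trace)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "application/protobuf")
		w.Write(b)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(trace)
}
```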
B
That's a good thing to add in the updates: Ananya did add basic content negotiation on the query path. Previously it was always returning JSON, which was very expensive to marshal, and so we switched to protobuf between jaeger-query and the querier. Cool. Marty, you wanna talk about compaction a little bit? Sure, I'll take some.
E
Cool, yeah. So for compaction, the main thing that we've been kind of looking at is, you know, performance and observability. Compaction is important: it keeps the blocklist length down, it keeps, you know, search times manageable, and polling and things like that. So it's important; you know, small blocks that are coming out fast out of the ingesters have to be compacted. So, kind of, mainly, the biggest thing here is:
E
We made a change to where it can actually compact up to eight blocks at one time. So previously it was always taking, you know, only two blocks; it was limited. So, you know, rough calculation, that's 67% less shuffling of level-zero data, which is the most active data that comes out, so it can do that early compaction, where it's needed the most, a lot faster.
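One way to arrive at that figure: reducing eight level-zero blocks to one with two-way merges takes three passes (8 to 4 to 2 to 1), so each byte is rewritten three times, whereas an eight-way merge rewrites it once.

```latex
\[
  1 - \frac{\text{passes (8-way)}}{\text{passes (2-way)}}
  = 1 - \frac{1}{\log_2 8}
  = 1 - \frac{1}{3}
  \approx 67\%\ \text{less level-zero data shuffled}
\]
```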
Other metrics: so, you know, talking about observability, there are questions about compactor performance, like, how's it doing?
E
We've looked at some metrics that made sense to add here. So: bytes written, you know, how much data is being flushed to the backend; and objects written, you know, how many objects, or traces, are being flushed to the backend. Things that it makes sense to calculate rates on, you know, and to do other things with. These are also per tenant. Objects combined: so part of the job of the compactor is that when it's taking blocks together, it doesn't just append the data directly. It actually will...
E
You know, if it finds the same trace across multiple blocks, it will combine those together, and this is required because actually, if you have a replication factor of more than one, it's expected that the same data is actually distributed out into a couple of different blocks. And so the metric there is just, you know, keeping track of how often that's occurring, like when it's combining things, so it's related to your replication factor. And then just the number of blocks that are being compacted. There's been some other improvements.
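A sketch of what those compactor metrics might look like declared with the Prometheus Go client. The metric names here are guesses for illustration, not necessarily the names Tempo actually uses:

```go
package main

import "github.com/prometheus/client_golang/prometheus"

// Per-tenant counters for the compactor metrics discussed above.
// Counters suit these because rates (bytes/s, objects/s) fall out
// of rate() queries in Prometheus.
var (
	compactionBytesWritten = prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "tempodb_compaction_bytes_written_total", // assumed name
		Help: "Bytes flushed to the backend by compaction.",
	}, []string{"tenant"})

	compactionObjectsWritten = prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "tempodb_compaction_objects_written_total", // assumed name
		Help: "Objects (traces) flushed to the backend by compaction.",
	}, []string{"tenant"})

	compactionObjectsCombined = prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "tempodb_compaction_objects_combined_total", // assumed name
		Help: "Duplicate traces merged across input blocks; tracks how often replication-induced duplication is collapsed.",
	}, []string{"tenant"})
)

func init() {
	prometheus.MustRegister(
		compactionBytesWritten,
		compactionObjectsWritten,
		compactionObjectsCombined,
	)
}
```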
E
Just, you know, CPU, memory, error handling; just things that make sense. So some blocks, you know, if they're longer, towards the older end of the data, they can take up to an hour maybe to do compaction, and things like that. So those things can have a larger effect.
E
Maybe the goals: so I put this in here; we've kind of bandied it around as kind of an internal goal. What would it look like to scale to a million spans per second? So that's not too far away. I guess, Joe, you'll touch on that a little bit later, maybe, in the document. But we expect compaction, maybe, to be, you know, something that hits a ceiling, like scalability concerns, faster than others, or at least before that limit.
E
So, you know, what are the things that make sense? Is general observability more of what we need to look at? Maybe defining these questions and answering them: Are the compactors keeping up with the load? What does it mean for a compactor to even be keeping up, like, what sort of metrics make sense for us to add to look at things like that? You know, what is compaction's end goal, like, what is the actual block count, size, window size, things that make sense?
E
When is compaction ever done? Like, you know, at some point it doesn't make sense to compact this data any further, but you could keep compacting. So there has to be some sort of, you know, end goal there. And then, you know, once we can answer those questions, we can provide some better guidance: you know, for a given load, this number of spans per second needs this many compactors, and things like that.
B
Yeah, something else that happened recently was we were seeing some additional load and our blocklist was growing, and we have this setting called the time window, and so we found that reducing the time window actually helped us keep our blocklist shorter.
B
This is kind of on the spot, Marty, but something I was thinking about: at the time it was kind of a kludge to make it easier to find blocks that go together. So, basically, a time window is the set of blocks that it'll even consider compacting; it just made the algorithm much cleaner, because here's the ten that you'd even ever look at to compact. Is there an ability to remove that, and instead perhaps have a more organic method of identifying a set of blocks that can be compacted together?
E
Yeah, so the default is four hours, meaning that it will always consider blocks within the same four-hour time span as candidates to be compacted, right. And this is fixed, like a four-hour interval fixed off of the Unix epoch, right. So it's a fixed interval, so it gives you, you know, the ability for sharding: only one compactor will ever own a given interval. We reduced ours to one hour, which means that that hot data coming out of the ingesters, like, that one hour can be sharded.
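A sketch of the idea: block timestamps are bucketed into fixed, epoch-aligned windows, and each window is owned by exactly one compactor. The modulo ownership below is a simple stand-in for whatever sharding mechanism is actually in place, purely for illustration:

```go
package main

import "time"

// compactionWindow buckets a block's timestamp into a fixed interval
// aligned to the Unix epoch (default four hours; one hour in the
// deployment described above). Only blocks sharing a window are
// considered for compaction together.
func compactionWindow(ts time.Time, window time.Duration) int64 {
	return ts.Unix() / int64(window/time.Second)
}

// ownsWindow stands in for the real sharding mechanism: because
// windows are fixed, each window can be owned by exactly one
// compactor, so two compactors never compact the same block.
func ownsWindow(windowKey, compactorID, totalCompactors int64) bool {
	return windowKey%totalCompactors == compactorID
}
```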
E
Well, I have some ideas, but basically, yeah, like, you know, that window is kind of good and bad. As you reduce it, you increase the sharding of your compactors, like, the ability, you know, for them to split the workload of data that needs to be compacted better, because each owns a smaller portion of, like, the amount of work.
E
However, it means that the window of data that you can consider to compact also gets smaller, so, kind of, it's a balancing act. And so, yeah, I think what we need, you know, maybe we need to look at replacing that with something, yeah, different, that's more dynamic. But it still has to work, you know, with the sharding and things like that: you won't want, you know, right, the same block being compacted into two more blocks; like, it needs to be compacted once, yeah, right.
B
So we were on four hours, and we were seeing, roughly, when the compaction was complete, about four blocks in each section, each time window of four hours, because of our upper cap of six million traces per block. So what we found was, by reducing it to one hour, we're roughly matching what our max compaction size was anyway.
B
Compaction was able to keep up much better, and it was due to the right distribution across the compactors. So we should probably put that in some kind of operational guide somewhere; like, we have a real basic description of some of our settings, but, like, certainly a guide of some sort. Let me make a note of this. I think, Marty, in particular, at this point you would be the best for the compactor side: all the tunables, and, like, kind of why you would choose one value or another, so.
E
One day, perhaps, yeah. There's definitely a balancing act. Maybe we should consider changing the default to one hour. It's just a nice round number; maybe it makes sense for most use cases.
B
Yeah. All right, Ananya has a PR up. What is that, like troubleshooting? And it's a good...
C
A
B
Cool. So internally we're currently seeing 150,000 spans per second, which is disappointing, but for most of this week, and I should track down why this is, but for most of this week we were actually at 200,000 spans a second, which is part of what had us looking a little bit at the compactor, because we were seeing growth in the blocklist, and growth in the blocklist tends to reduce query latency, or, sorry, increase query latency, and so we were just kind of poking around and seeing what was going on there, which is where we found some of those settings that Marty was playing with. But so, internally, we've seen about 200,000 spans per second, I'd say sustained, maybe 210 or so, a little bit above that, and we're fine.
B
I think we're easily at the scale; I think double the scale would be fine. Once we start pushing to a million spans a second, I think, is when, like Marty said, the compactor is going to be an interesting thing, or it's going to be interesting to see how that grows at a million spans per second. Ananya's work on the query frontend makes me comfortable with the query path once that's in place. Well past a million spans a second, it's really going to be a matter of wrangling these extremely long blocklists.
B
That's going to be the thing that just grows with the number of traces kept, and your retention, and all these things. So we don't really have, I don't know, I'd say personally, it'd be nice to see a million per second and what Tempo looks like at that scale. I don't know what the goals of the community are.
B
I would be really interested to hear what kind of scale people would like to see, if anyone is willing to share that, or numbers they may have already been seeing internally. If anybody can share, perhaps, personal experience with tracing, or Tempo specifically, and the kind of load that you might want to see at maybe some kind of extreme scale, I'd love to kind of target that, or understand what people think about when they think of extreme tracing scale.
G
First time on the call, but yes, I'm one of the maintainers for Cortex, right, and I would normally join the Loki one too, but the Loki community call is at like three o'clock in the morning for me. So I appreciate having this one at a more reasonable time.
B
Are you based on the west coast? Yeah? Oh, okay, yeah! This is definitely more reasonable for you than whatever.
G
I've only just started tinkering around with Tempo myself. I've got it installed locally, but it doesn't support Azure blob storage, so, yeah.
B
E
G
I just, I just liked it. I actually started implementing a possible solution to this last night, so I'll reach out and see where he's at, but if not, I'll...
G
B
Sure, very cool. You don't happen to have any thoughts on extreme tracing? I do, but...
G
I don't think they're feasible for Tempo right now. What we've got at AKS is obviously probably on the very extreme end for distributed tracing, but a million, a million a second, is usually a good, a good first measure, right: like, get that working there.
G
Right, but yeah, a million, a million across like a bunch of tenants, I think, is a much easier problem than one tenant being that giant silo, and that's the problem I'm facing at Microsoft with some of our stuff. We have a very wide base of people internally that are using some of this tech, but we also have people who are extremely large, silo-like, single tenants that would challenge some of these systems.
B
Yeah, our compaction challenges are slightly different than Cortex's. I think we have more data but way less processing; I think they're more CPU-bound and we are more just, like, throughput-to-the-backend bound. In GCS we see about 40 meg a second off the backend, 40 to 50, from our compactors, and so I think that is kind of roughly our cap; like, we're going to find that to be ultimately the thing we cannot overcome when we're trying to compact some huge amounts of data.
G
Well, yeah, I'm looking forward to getting this, trying to... once I get the blob storage to work, I will look forward to trying this on AKS.
B
We want to simplify that interface to be dumber, to be more like a file-IO-style interface, where it just supports, like, read, or read-range, some basic things, instead of the full... I think right now you can, like, ask for individual object types, like read index or read block meta or some crap.
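Something along these lines: a generic named-object read/read-range contract instead of one method per object type. The shape below is speculative, a sketch of the direction being discussed rather than the interface Tempo actually shipped:

```go
package main

import (
	"context"

	"github.com/google/uuid"
)

// A "dumber", file-IO-style backend contract: the store knows nothing
// about indexes or bloom filters, only named objects within a block.
// Adding a backend (for example Azure blob storage) then means
// implementing a handful of generic calls instead of one method per
// object type (ReadIndex, ReadBlockMeta, ...).
type Reader interface {
	// Read returns the entire object with the given name in a block.
	Read(ctx context.Context, name string, blockID uuid.UUID, tenantID string) ([]byte, error)
	// ReadRange fills buffer with bytes starting at offset.
	ReadRange(ctx context.Context, name string, blockID uuid.UUID, tenantID string, offset uint64, buffer []byte) error
}

type Writer interface {
	// Write stores an object under a name in a block.
	Write(ctx context.Context, name string, blockID uuid.UUID, tenantID string, data []byte) error
}
```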
B
Okay, I should know, I wrote it. But right, Marty and I were just discussing that, because we wanted to do some changes internally, and I think making that dumber, which would make it easier to add Azure blob storage and other backends, is on the list to do.
A
Okay, very cool, cool. FYI, I'm already looking at moving the Loki call to a later time.
F
Sure, cool.
B
Does anybody have anything else you want to address or talk about? Excited for Daniel getting his job at k6, come January. I don't know what ability you'll still have to contribute, but certainly thank you for everything you've done, and any participation you can still do would be cool.
C
G
B
Me too. There were some questions, some questionable period over the summer where Tempo's future was up in the air a bit, but we're back, yep.
G
Cool. I've had a good experience with Loki, so hopefully gonna turn that into Tempo as well.