From YouTube: Tempo Community Call 2022-01-13
Description
Updates on the state of Tempo. Discussion of 1.3, backend search, metrics generator, and possible future backend formats.
Plus a double announcement of the two-year anniversary of Tempo!
A
Okay, it just turned on for me. Okay, so welcome to the January Tempo community call, and thanks for joining. This is fairly informal. We have some things we want to talk about; I'll post the agenda in the chat over here. This is fairly informal, so you're welcome to ask any questions you want. We'll talk about some of the things we have on the agenda, and feel free to add your own as well. If you just have a question, you're welcome to either put it in chat or just unmute and ask. This isn't some canned presentation where we won't want to be interrupted.

A
So yeah, make this a conversation if you're interested, and ask questions about where Tempo is and where we want to go. So, did Ananya join? No, he didn't. Okay. I wanted Ananya to do the first bullet point here, but it's real late for him; I think it's like 11 o'clock PM where he is, or something like that.
A
So I think it's in a couple of days. Oh, I think we have a presentation. Mario, did you put that together? Yeah? Why don't you bring that up, man.

A
So what is it? There should be a date. Oh, there you go: January 17th. So four days from now is the two-year anniversary of the first commit to Tempo. It's kind of been an on-and-off project; we dabbled on it for two or three months, me and Ananya, which is why I really wanted him here for this. It was just me and him at first, and then we had about three or four months off where we were working on other projects.
A
We kind of came back that fall, so maybe slightly less than two years of work, and two years since the first commit. It's been a fun time. For six months it was a heads-down project nobody knew about, and we had a lot of fun building it. Then we announced it, and we've had a fun community and a lot of people who are enjoying using it. It's just been a lot of fun, and now the team is huge.

A
Conrad, Marty, and Mario are here, and then we have some others who couldn't make it, I suppose. A lot of great devs are putting time into this project, and it's just fun to see something grow that much and get traction like that.
A
So yeah, and the commit message is silly, but the very first commit to Tempo was Loki with 90% of the code deleted. I started from basically that shell, which kind of defined the major pieces, and started putting code into that. So I carved up Loki; that is the first commit. So yeah, thank you all for being involved in the project. It's been fun.

A
Cool. What else we got today?
C
Right, yeah. We have the next release coming up, which is 1.3. I think it's been roughly two months, since early November, since we released 1.2, and then we did a patch release, 1.2.1. We just want to comment a bit on where the main changes are. I guess the highlight of the release is full backend search.

C
We will talk about this more in depth a bit later, but essentially we started with search in the past release, and we were searching recent traces but not the full space. We were able to search for traces in the ingesters, but once blocks were flushed to storage, search wouldn't find those. Now Tempo is able to.
C
This is great. And not only that: the space is obviously way bigger when you're comparing the full retention to only recent data, so to support full backend search we're also making it possible to use cloud functions, to be able to parallelize more and have more compute power to search across more blocks at the same time. So yeah, pretty exciting. Then we have other features and improvements, such as support for inline environments.

C
This is a deployment thing in Tanka, where you can define environments and have them evaluated at runtime, instead of defining them statically.
C
Yeah,
which,
depending
on
your
deployment
info
or
yeah,
depending
on
on
your
deployment,
can
be
very
useful
regarding
search,
we
dropped
the
tag
cash,
so
tax
are
extradited
on
demand.
This
has
the
huge
benefit
of
not
being
limited
by
a
the
size
of
a
cash,
so
we
can
return
thousands
of
values
every
time
we
also
were
hit
with
huge
traces
and
that
so
we
improved
the
memory
efficient
on
compaction
and
block
cutting
so
yeah
to
overcome
those
now
we're
also
exposing
metrics
as
sorry
limits
as
as
metrics
we
also
have.
C
I think this was a very common issue in the community, having unhealthy compactors; we fixed that. And, not to go on for 20 minutes listing every improvement, there are obviously many more. I don't know if anyone wants to comment on one that I missed and that should probably have been mentioned.

C
Yeah, that's exactly the next one. We have the release candidate; it's available on GitHub, and all the builds are also available in Docker. If everything goes well, in a week or so, approximately, around that time we will be cutting the proper release, 1.3.
A
I was looking for the tag on Docker Hub, so we could post it here. Here it is, right here.

A
Mm-hmm, Docker Hub is just being really slow for me, maybe, but I'm pretty sure that's 1.3.0-rc.0. So, like Mario said, maybe in a week or so we'll cut the real one, if we don't see any stability issues in our internal environment. Cool. And then the cadence has been, what, every two months or so? For me, I like that. I don't know if people have been okay with that kind of release schedule; I feel like that's about right.
A
We
get
a
about
a
good
amount
of
you
know,
features
and
bug,
fixes
and
improvements
in
about
that
time,
so
we're
just
kind
of
playing
it
by
ear
right
now
we
don't
have
anything,
you
know
formal
or
we
don't
have
anything.
You
know
required
like
we
release
once
a
month
or
anything
like
that,
and
I
feel,
like
things
are
working
out
pretty
well,
if
that's
not
working
out
for
you
feel
free
to
you
know,
let
us
know
either
through
this
beating
or
you
know,
slack
or,
however,
whatever's
easiest
for
you
cool.
E
Yeah, yep, I'll talk a bit about the metrics generator. This is a project we're working on right now, so this will be something for the next release; we're looking at a different direction now. I'll just share my screen for a second, because I'm not using the slides.

E
Okay, is this readable? Just, you know, let me know if this is too small or whatever.
E
Yeah
I'll
do
it
like
this,
so
yeah
the
matrix
generator.
This
is
a
new
thing.
We
want
to
add
to
tempo.
The
goal
of
the
metrics
generator
is
to
generate
metrics
from
the
trace
data
as
it
is
being
ingested.
So
this
is
a
completely
new
feature
in
tempo.
So
far
tempo
is,
you
know,
tracing
data
store.
You
can
store
traces,
you
can
fetch
traces
and
you
can
also
search
for
traces
now
with
the
metrics
generator.
We
also
want
to
generate
metrics
and
push
them
somewhere
into
a
matrix
data
store.
E
Since
this
is
you
know,
a
big
new
component,
there
are
a
lot
of
new
design
decisions,
we've
been
making
this
design
document
the
design
proposal,
and
we
also
wanted
to
make
it
public
so
that
you
know
people
in
the
community.
Basically,
everyone
can
see
what
we're
working
on
and
can
also
comment
and
suggest
stuff.
E
So
this
document
describes
how
we
are
planning
to
build
this
component,
how
we
will
add
it
to
temple.
You
can,
you
know,
read
a
document,
leave
suggestions,
leave
comments,
provide
feedback
whatever,
and
we
can
take
this
into
account
as
we
build
a
feature
and
just
you
know
make
it
better
for
everyone.
E
This PR will remain open for a couple of weeks, so you can check it out in your own time, and you can leave a comment in the thread or leave a pull-request review. Do whatever works for you; you can also leave a Slack message or DM me, whatever. I was just planning to quickly go through the document to give an overview of how it will work; this describes a bit what it's about.
E
So,
regarding
the
architecture,
we
were
thinking
about,
adding
a
new
component
to
tempo,
so
a
new
microservice,
which
is
you
know
specifically
responsible
for
generating
these
metrics
and
pushing
them
to
prometheus
or
previous
compatible
data
store.
So
that
means
that
the
in
the
ingest
path
will
change
a
little
bit
so
thus
far,
we
had
a
distributor
the
ingestion
on
the
back
end,
which
would
you
know,
process
the
ingress.
E
The reason we will be adding a new component is that we were considering different alternatives. One option, for instance, is to integrate this into the ingester, to just run it together with the ingester, but the concern there was that that would make the ingester very complicated. It would have to deal with traces, with the state of these traces, with flushing the traces, and also think about the metrics and pushing them.

E
We were worried it would be too complicated, and by adding a new component the ingester can hopefully remain simple. And if it blows up, only this part of Tempo will blow up, so it's kind of a limited blast radius.
E
There's a bit more detail here; it will use a gRPC protocol, and this is how the metrics generator will look inside. This is a component that also exposes a gRPC server, like all Tempo components. It will receive a batch of spans from the distributor and then pass this on to different metrics processors, which will process the spans and generate metrics.

E
That way, all the processors are isolated per tenant, and then we have a different process, the metrics collector (just the name), which will regularly collect metrics from the different processors and then push them out into Prometheus or a Prometheus-compatible backend, for instance Cortex. And Cortex is multi-tenant.
E
That way, you can have multiple Tempo tenants writing to multiple Cortex tenants. Again, there's a bit more detail here about the different components and some of the trade-offs. Initially we are targeting two processors; the first is the service graph processor.

E
This is a processor that already exists in the Grafana Agent, and it can generate metrics which can power a service graph in Grafana. I put the link to the docs here. This can make a service map of all your services based upon trace data, so we'll be able to generate these metrics using Tempo.
E
The second one we want to integrate is the span metrics processor. This processor ingests all the spans, and for every span it will keep track of request, error, and duration metrics and send them to Prometheus or whatever. This will allow you to see these RED metrics for all the spans in your data. Again, there's a bit more detail here, so feel free to take your time to review it.
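For a concrete picture, here is a minimal sketch of what wiring the metrics generator to a Prometheus-compatible backend could look like in Tempo's YAML config. The feature was still at the design-proposal stage at the time of this call, so every key name below (`metrics_generator`, its `processors` list, the `remote_write` block, the `X-Scope-OrgID` header) should be read as an illustrative assumption, not final configuration:

```yaml
# Hypothetical sketch: run the service-graph and span-metrics processors
# and remote-write the generated series to a Prometheus-compatible store.
metrics_generator:
  processors: [service-graphs, span-metrics]
  storage:
    path: /var/tempo/generator/wal        # local buffer before remote write
    remote_write:
      - url: http://cortex:9009/api/prom/push
        headers:
          X-Scope-OrgID: metrics-tenant   # map a Tempo tenant to a Cortex tenant
```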
E
I think that's about all I wanted to say about this. So yeah, we're targeting this for the next release, Tempo 1.4.

E
I guess that will be in two or three months or something like that, and this will be an initial implementation. There are a lot of concerns about how we deal with crashes, with data loss, with persistence like a WAL, stuff like that. Those things will probably be postponed a bit until we have an initial implementation that has the full pipeline, and at a later stage we can look at how we can reduce data loss and make it more reliable.

E
Cool, yeah.
A
I think the questions, for me, for the community, for people who want to participate in that, are things like: what kind of metrics are people interested in? If there are things that are not on that list that people might see value in, we'd love to hear about that. And then, of course, any kind of concerns or thoughts about the design, whether it would work or not work.

A
We're supporting the Prometheus remote write endpoint at first, because it's so widely used, we have a lot of experience with it, and there's a very easy-to-use Go client. But longer term we also expect to consider other formats, OTLP kind of being the obvious other one, to eventually support OTel metrics. But Prometheus remote write: I think every major vendor supports it, every metrics backend supports it, so I don't think you'll have any issues using it.
F
Awesome. So yeah, January 17th is the official two-year anniversary of Tempo, and we're counting days from the very first commit that you pushed to Tempo. It's been super exciting. I've been on the team from the very beginning and it's been super fun. It was closed-source development for a while; we announced it in October 2020. We've onboarded new team members and grown the community. It's been fun. Congrats, everyone, and congrats to you.
A
Also, I didn't even realize it was coming up. It was Ananya who remembered; he Slacked me on Monday of this week, I think, like "hey, did you remember that the anniversary is coming up?" So I appreciate that reminder; I would have let it go by and not remembered. Cool.

A
Cool, what's next? I think I'm next, actually: full backend search.
A
So I think I have one slide. I also kind of wanted to do some screen shares. Yeah, this slide's awful, excuse me.

A
We were all in this presentation like two minutes before this community call started, rapidly slamming bullet points in, so this is not a great slide, but I'll get to the content; it'll all be there, and I'll share some stuff on my screen as well. But anyways: we're building this backend search functionality, and we're kind of building it on top of a format that was really intended for trace-by-ID lookup.
A
So
what
we
did
initially
was
we
took
a
bunch
of
open
challenge
proto
and
we
batched
that
up
into
a
trace
and
we
pushed
it
to
s3.
So
our
back
end
is
organized
around
a
trace
like
blob.
You
know
a
trace
every
or
blocks
of
traces.
It
actually
makes
it
kind
of
hard
to
search.
It's
not
really
built
for
search
it's
built
for
trace
by
id.
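For context, each block in the backend is a small set of objects keyed by tenant and block ID, roughly like the sketch below; the exact file names are from memory and may differ by block version:

```
<bucket>/<tenant>/<block-uuid>/
  meta.json   # block metadata: time range, compaction level, ...
  data        # the compressed trace blobs themselves
  index       # trace ID -> offset lookup into data
  bloom-0     # bloom filter shard(s) consulted for trace-by-ID lookup
```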
A
So
our
first
attempts,
the
team
is
kind
of
split
into
two
factions.
At
the
moment.
I'm
currently
building
out
ways
to
just
parallelize
the
search
path
and
use
cloud
functions
and
use
some
of
these
features,
while
other
members
of
the
team
are
looking
at
well.
How
do
we
improve
this
backend
format
to
be
just
easier
to
search
so
we're
kind
of
doing
both
of
these
things?
A
At
the
same
time
and
right
now,
one
three
provides
the
ability
to
do
fullback
and
search
either
in
cloud
functions,
or
you
can
also
do
this
in
the
queries
themselves,
so
they
both
are
capable
right.
Now
we
are
consuming
about
180
mega
second,
and
for
our
scale
we
would
need
hundreds,
maybe
even
a
thousand
plus
queriers,
to
search
the
back
end
in
any
kind
of
reasonable
time
frame,
which
is
a
lot.
So
we
looked
at
cloud
functions.
It's
a
great
way
to
kind
of
have
burst.
A
You
know
compute
on
demand
and
pay
a
much
smaller
price
than
kind
of
always
having
allocated
space
in
a
kubernetes
cluster
or
in
some
kind
of
other
provisioned
environment.
So
I
think
for
different
people.
The
queriers
are
extremely
easy
to
set
up.
In
fact,
the
queries
require
no
setup.
If
you
just
use
the
queries,
they'll
work
and
so
people
who
are
maybe
receiving
a
low
amount
of
traffic
will
be
able
to
use
back
and
search
immediately,
just
by
scaling
their
queries
up.
A
But
if
you
do
have
a
considerable
amount
of
traffic
you're
going
to
need,
probably
some
help
for
that
and
that's
kind
of
what
the
cloud
functions
part
is
for
so
with
one
three.
There
will
be
a
blog
post
as
well
as
some
help
on
how
sharing
how
we
have
set
up
some
terraform,
some
other
details,
I'll
go
through
all
of
this
stuff.
So
if
you
want
to
attempt
this,
if
you
want
to
try
to
use
the
cloud
functions
for
backend
search,
you
will
be
able
to
make
use
of
it
as
well.
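As a rough idea of the shape of that setup, a minimal sketch: search sits behind a feature flag, and the querier is pointed at the deployed cloud-function endpoints so it can farm backend-search jobs out to them. The key names here are recollections of the 1.3-era options and may not match the released docs exactly, and the URL is a placeholder:

```yaml
# Hypothetical sketch: enable experimental search and offload backend
# search jobs to serverless functions; verify key names against the docs.
search_enabled: true
querier:
  search_external_endpoints:
    - https://us-central1-my-project.cloudfunctions.net/tempo-serverless
```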
A
Currently
we're
doing
about
10
gigs,
a
second
which
isn't
great,
isn't
terrible.
It
just
costs
so
much
to
pull
proto
unmarshall
it
like
go
through
every
span.
Looking
for
some
kind
of
condition,
you
know
return
true
or
false,
based
on
that,
the
proto
in
particular
on
marshalling
marshalling,
is
just
very
expensive
in
cpu
and
memory,
and
just
takes
so
long
and
that's
kind
of
what
the
back
end
team
is.
The
team.
That's
looking
at
the
backend
format
is
really
focused
on
is
like
how
do
we
get
rid
of
proto?
A
How
do
we
make
this
more
efficient
to
marshall?
How
to
make
this
more
efficient
to
search
is
really
kind
of
the
direction
they're
looking
and
so
10
gigs
a
second
isn't
great,
but
it's
not
awful
and
then
five
to
ten
cents
per
query.
So
we
can
kind
of
interestingly,
like
really
quantify
how
much
a
query
costs
because
we're
you
know
paying
for
these.
Google
cloud
function,
resources
where
you
can
see,
I'm
spinning
up
thousands
of
functions
for
you
know,
I'm
doing
a
query.
A
I'm
spending
up
thousands
of
functions
to
answer
that
query
and
I
I'm
estimating
it's
like
five
percent
ten
cents
per
query,
which
isn't
again
great,
isn't
terrible.
It's
it's
good
enough
for
now
and
we're
just
going
to
move
forward
and
continue
to
improve
this
feature.
You
know
in
the
future.
A
In
fact,
I
have
a
pr
that
won't
go
in
one
three,
because
it's
just
too
much
change,
but
we'll
do
a
pr
shortly
after
one
three
that
will,
I
think,
improve
this
quite
a
bit,
make
it
a
little
bit
easier
to
search
before
the
full
back
end
kind
of
shift,
and
let
me
do
a
little
screen
sharing.
F
A
F
A
A
It'll
return
in
a
couple
seconds,
if
you're,
if
you're
whatever,
if
you're
lucky.
A
Right, I apologize for showing you a Stackdriver dashboard, but I don't have all these metrics set up in Grafana and Prometheus just yet. You can kind of see some of the stats; I was doing some searches before this call started, to get some metrics in here.

A
You can see latency per function, and the number of instances: we're breaking 2,000 instances when we are doing some of these queries, and then you can see the requests per second are like 100 or so. I really want to push this even harder and get it to the point where I can exhaustively search an hour or so in seconds. I think we can.
A
The
kinds
of
things
we're
fighting
are
like
cold
start,
which
is
a
common
issue
with
functions
as
well
as
just
kind
of
like
the
fan
out
like
how
many
processes
are.
You
know
putting
pressure
against
cloud
functions
to
make.
It
start
scaling
up
hard
and
so
there's
a
lot
of
different
concerns,
and
there
it's
kind
of
a
fun
world
to
work
in
I've
not
worked
with
lambda
or
any
of
these
cloud
function.
A
So
this
is
the
queer.
This
is
the
search
interface
and
this
hasn't
changed.
I
want
to
kind
of
I
wanted
to
show
this
to
make
that
clear.
The
search
interface
is
the
same
as
it
was
before
previously,
though,
up
here
in
the
corner,
it
says
like
last
one
hour
right
previously
up
here.
This
was
not
respected.
It
just
searched
whatever
happened,
to
be
an
adjuster,
so
you
could
say
show
me
stuff
from
two
days
ago
and
it
would
show
you
what
was
ever
at
mid
jesters
so
now.
A
The
change
here
is
basically
tempo
is
using
the
range
provided
it
is
going
to
the
back
end
or
the
injectors,
making
a
choice
and
searching
for
the
choices
that
that
you
know
you
queried
based
on
these
conditions.
So
this
is
all
the
same.
A
We
just
kind
of
added
the
ability
to
to
respect
this
and
I'd
say
we're
hoping
to
get
this
in
graphonic
cloud
as
kind
of
still
as
a
beta,
but
in
graphonic
cloud
for
people
who
are
using
that,
maybe
in
the
next
month
or
so,
is
the
goal
to
roll
this
out
to
everybody
in
grafana
cloud
and
well,
you
know
grafana
cloud
volume
is
significantly
lower
than
ours.
So
I
think
we
have
a
lot
of
questions
up
in
the
air
there
in
terms
of
like.
A
Do
we
use
cloud
functions
there
or
do
we
continue
using
the
queries,
and
so
there's
just
some
operational
questions
about
how
to
do
that
correctly?
That
are
still
on
the
table,
but
I'd
say
in
about
a
month.
We
would
hope
for
this
to
be
available
in
cloud
and
then
finally,
here's
a
trace.
So
this
is
a
search
it
took
about
13
seconds
again
we're
searching
like
the
hundred
and
around
180
mega
seconds
is
our
is
our
ingest
and
at
our
scale
search
feels
a
little
like
a
batch
job?
A
It's
not
quite
as
responsive
as
if
you're
sitting
at
prometheus
or
you're
sitting
at
loki
and
you're
executing
these
queries
right,
and
this
is
kind
of
what
we
really
want
to
ratchet
down.
We
want
to
get
it
to
the
point
where
you
type
out
a
query
like
you
do
against
prometheus
within
under
a
second,
you
get
some
nice.
You
know
metric
back,
you
can
kind
of
repeat:
you
can
kind
of
get
in
a
good
feedback
loop.
You
know
filter
down
what
you're
looking
for
and
finally
get.
You
know
the
trace
that
you
want.
A
So
it's
not
quite
there
for
our
scale,
but
we
think
we
can
easily
hit
that
in
cloud
which
shows
you
know,
most
users
of
cloud
have
significantly
less
than
we
than
we
do
in
our
operations.
Cluster
I
mean
you
can
see
here.
I
did
expand
this
one
section.
So
this
query,
which
was
a
15
minute
query.
A
It
found
four
blocks
in
our
back
end
that
match
and
it
made
7
000
jobs
to
search
those
blocks.
So
this
is
kind
of
the
challenge.
As
we
have
all
this
data
it
made
7
000,
smaller
jobs.
Each
is
a
roughly
about
10
megabytes
and
that's
configurable,
so
each
of
these
7000
jobs
says
go
look
at
these
10
megabytes
and
then
all
of
the
cor,
the
then
all
the
serverless
functions
they'll
spin
up
they
get
these
jobs
they're
like
I'm,
going
to
search
this
10
megs
they
pile
through
it.
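The roughly-10-megabyte job size mentioned here is the configurable piece; a minimal sketch, where the key name (`target_bytes_per_job`) is a best recollection of the query-frontend search option rather than something quoted on the call:

```yaml
# Sketch: smaller target bytes per job means more, lighter search jobs
# (wider fan-out); larger means fewer, heavier jobs per function/querier.
query_frontend:
  search:
    target_bytes_per_job: 10485760   # ~10 MiB per job
```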
A
They return their answers, and we are collecting the results, hopefully in a reasonable amount of time. But yeah, search is on the way; it's in 1.3 and you can start experimenting with it now. There is a specific Grafana version that's needed to pass the right parameters, and I don't know that off the top of my head.

A
So this start and end parameter, maybe you can see that there: these are the new parameters passed by the latest versions of Grafana, and they tell Tempo to use a range query instead of just doing the ingester query, the recent-trace search we had been working on previously. Cool.
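Concretely, the search request ends up looking something like the line below; `start` and `end` are Unix-epoch seconds, and the tag filter is illustrative. When the range is present, Tempo fans the search out over ingesters and backend blocks; without it, only recent (ingester) data is searched:

```
GET /api/search?tags=service.name%3Dshop&start=1642075200&end=1642078800&limit=20
```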
A
I used to do operations work on AWS, so I have a lot of experience there, and at Grafana we use Google Cloud, where I don't have as much experience. So I feel like the dumb developer who is always knocking on the door of the SRE: what's going on with all this? How do I fix this? Where's the billing page? Where's everything?

A
Yeah, that's a good point. We use Google Cloud, so I kind of prioritized it. Actually, we have clusters on every major cloud, don't we? Or at least Azure, Google, and AWS. So that's kind of the next steps for me: just add support for those other providers.
A
This is something that I don't know how to do yet, but I really want to find a way to make some kind of cloud-agnostic code package that I can deliver to all three, and my initial searching was not promising in that world. So I'm not sure where that's going to go, but we'll figure it out. Cool. Sure, you're welcome to ask a question.
G
Yeah, so with the new backend search, will the ingester search be totally gone? So we...
A
It's actually both. With the latest version of Grafana, it'll pass that start and end parameter, and the query frontend will create those jobs. So we could do 7,000 jobs, and some of those jobs will be ingester search, if your range hits the ingesters. Basically, all the stuff that you previously used is still there: the recent-trace search works, it's just been extended to also look in the backend if it needs to.

A
All right, I think Marty's got some information; Marty's kind of heading up this team.
D
Yeah
cool
yeah,
so
we
kind
of
touched
on
this
before
a
little
bit
where
the
current
backup
and
block
format
for
tempo
is
compressed,
otlp,
protobuff,
bytes,
and
so
we
kind
of
are
feeling
like
that's
the
bottleneck
for
some
of
the
things
we
want
to
do,
search
being
one
of
them,
but
there's
a
lot
of
other
stuff
down
the
line
like
time,
series,
queries
and
metrics,
and
things
like
that,
so
we've
been
looking
at
what
can
we
do
in
there?
D
So
something
like
a
new
block
format
or
even
a
new
trace
format,
and
so
I'm
just
going
to
talk
about
kind
of
like
the
current
status
of
what
we're
doing
and
what
we
have
done,
and
we
have
some
code
numbers
and
stuff
cool
yeah.
So
we
probably
mentioned
this
on
previous
ones.
So,
but
one
thing
that
we
dug
kind
of
deep
into
was
the
current
search
and
the
injustice
uses
flat
buffers.
So
it's
and
we
had
a
good
experience
with
it.
D
It's
quick,
flexible,
like
it
worked
well,
so
we're
looking
at
it
to
do
as
the
entire
trace
in
flatbuffers.
So
what
we
did
with
this
approach
is
a
100
otlp
round
tripable
format.
So
it's
not
like
the
perfect.
You
know
format,
maybe
for
things
that
we
want
to
do,
but
it's
a
very
something.
That's
basic
and
easy
to
do
like
a
very
good
first
approach
and
it's
useful.
D
So
some
of
the
optimizations
and
flat
buffers
that
you
are
that
I
guess
you
have
to
do
that.
Maybe
like
it's
important
to
do
some
things
you
get,
are
string
and
object
de-duping.
So
in
flatbuffers,
oh,
I
guess
I
should
explain
what
flatbuffers
is.
Flatbuffers
is
just
it's
a
format
where
the
on
disk
and
the
memory
layout
are
the
same.
So
once
you
have
the
I
o
the
the
result
of
the
I
o,
you
can
just
load
it
straight
into
membrane.
D
There's no deserialization step; you have a pointer to a bunch of structs and you can just start iterating them. So one thing you can do is de-duping. In protobuf this doesn't really exist, because when you deserialize, every string will be its own copy; in compression the duplication goes away, so it doesn't really make a difference there, but in flatbuffers we actually can reuse those objects. And then there are other things, like reducing nesting and flattening.

D
If you're familiar with OTLP, the resource spans and resource, kind of the indirection of those two different structs, that's something we can optimize here. And then vtable trimming: flatbuffers has vtables, and if you have a field unset, it can actually trim that from the results. So if we can reorder some of the fields and put less frequently used ones at the end, they can be eliminated from the payload. Yeah, so going through some numbers.
D
What
we
did
is
we
took
one
of
our
blocks
out
of
our
internal
environment
and
it
had
154
000
traces
in
it
and
it's
compressed
with
lz
for
one
megabyte
dictionary
size,
so
the
so
all
things
being
equal.
It's
the
exact
same
block,
the
exact
same
data,
but
the
only
difference
is
the
the
trace
format,
and
so
I
can
just
kind
of
walk
through
some
of
these
numbers
here.
D
D
So
this
was
a
good
experiment,
but
it
didn't
necessarily
have
like
a
clear
thing
that
we
necessarily
necessarily
want
to
merge
or
go
down
so
but
we'll
kind
of
maybe
just
put
this
on
the
side
and
keep
considering
but
yeah,
so
search
latency,
just
searching
that
block
and
age,
exhaustive
search
of
every
spanning
tag,
21
seconds
for
protobuf
3.3
for
flat
buffer.
So
that's
about
a
6.6
x
increase
and
the
rest
of
the
numbers
down
below
are
kind
of
similar
traces
per
second
spans
attributes.
D
So
the
final
kind
of
like
speed
up
was
maybe
about
6
6.6
x
right.
The
actual
data
processing
speed
is
that
last
one
it's
actually
in
once
you
have
the
I
o.
It's
able
to
go
12
times.
You
know
almost
11.5
or
12
times
faster,
but
because
the
compressed
block
size
is
1.76
you're
losing
some
of
that
benefit,
so
yeah
and
then.
A
D
Yeah
there's
this
link
down
at
the
bottom,
so
if
you're
interested
this
is
on
a
branch
I
have
and
that
file
there
is
the
schema.
So
it's
a
flat
cover
schema
for
this
new
stuff,
so
that
would
probably
be
a
good
place
to
start
if
you're
interested
in
looking
at
it
so
yeah
that
was
kind
of
our
findings
from
that
want
to
go
to
the
next
slide.
D
So
what's
next
so
like
I
said
that
wasn't
really
conclusive,
so
we're
going
to
keep
working
on
this.
So
next
up,
we
want
to
look
at
something
very
different,
so
columnar
doing
the
whole
block
as
a
calendar
store,
seems
very
promising.
There's
a
lot
of
things
we
could
do
there
there's
other
formats,
but
parquet,
I
think,
is
a
good
contender
that
we'll
we'll
look
at
and
colander
makes
sense,
because
what,
if
you
could
extract
all
the
searchable
tags
into
their
own
columns?
D
That
would
be
really
great
for
search,
but
it
also
brings
a
bunch
of
other
work,
so
just
the
additional
complexity
of
recreating
the
trace
from
those
different
columns,
the
different
schema
that
could
be
per
trace
per
block
things
like
that,
but
yeah.
D
So
I
think
that's
a
really
promising
thing
that
we're
going
to
look
at
and
then
flat
buffers
the
main
concern
really
kind
of
or
the
main
drawback
was
the
block
size,
and
so
if
we
could
do
reorganize
the
page
to
be
to
be
as
like
an
entire
object
like
it
would
increase
the
amount
of
de-duping
that
we
could
do
a
lot
and
it
would
bring
that
block
size
down.
D
I
mean-
and
I
think
that
could
be
really
competitive
and
then,
if
we're
redoing
the
whole
page,
then
maybe
we
can
do
a
more
searchable
structure.
Whereas
right
now
it's
each
trace
is
an
individual
set
of
bytes.
So
doing
it
at
the
page
level
lets
you
kind
of
like
break
open.
That
structure.
F
Yeah, we had a company-internal hackathon, and I was just trying to see if the SDKs out there have everything that we need to work with columnar. They have good support for compression and page access, reading only the columns we need, which I think is obvious. But I think the results of that are inconclusive as well.
A
Cool. Something interesting might be a hybrid format. We swapped from zstd to LZ4 because of the speed of decompression, but we lost some space on that. So we could gain that space back: go back to zstd, keep the existing block format, and just write a columnar format to the side, maybe, and use that for search. I don't know, there are a lot of options here.
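The compression trade-off being described is a block-level setting; a minimal sketch using Tempo's documented block encodings (the choice shown is just the example from the discussion, not a recommendation):

```yaml
# Sketch: zstd buys back on-disk size at some decompression-speed cost;
# lz4 variants (e.g. lz4-1M) decompress faster but compress less.
storage:
  trace:
    block:
      encoding: zstd
```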
F
There's some movement, yeah. Maybe it's interesting to mention this, but there's some movement on this in OpenTelemetry as well. There's a PR or two, which I think we can link in the community doc, that's beginning to look at columnar formats for the trace data. So yeah, maybe we'll drop some links.

A
Yeah, we should have made Jersey come to talk about it. Yeah, guest speaker.
A
Does anybody have questions? I guess we've got about 13 minutes left in the official time here. I'll stay on a bit longer if people do have things they want to chat about; I'm fine with that. But if anybody has questions, you're welcome to unmute and ask, or put them in the chat, whatever you'd like.
B
I had a question, perhaps. I know Tanner had chatted with Marty regarding our compaction settings. Before the break, basically the holiday break, we dropped our static compaction window from five minutes to three minutes, and that gave us a boost in compaction, but only for a very short time. For maybe 24 hours it would do way more, and then it kind of went back to the same level as before.

B
So right now we're at three minutes, and we're having trouble keeping the blocklist under control; the blocklist is kind of growing, and we're not even throwing more volume at it yet, but that's kind of the next thing we want to do.
B
What's
the
other
things
we
could
try
to
help
with
compaction
because
compactions
we
actually
are
tracking
a
metric,
we're
running
like
36
compactors
and
we're
tracking
a
metric
where
the
number
of
compactors
that
have
like
cpu
usage,
less
than
half
a
core,
and
that
is
consistently
like
10
to
15.
Compactors
are
in
this
state
and
so
we're
trying
to
see
like
what
we
can
do
to
just
maybe
increase
the
usage
across
the
competitors
and
like
hope
that
it
will,
they
will
do
more
compaction
or.
D
Yeah,
so
you
said
that
boost
only
worked
for
about
24
hours.
I
guess
one
thing
in
mind:
is
I'm
wondering
if,
like
the
actual
amount
of
work,
that
was
needed
did
decrease
because
the
compactors
were
able
to
get
finish
a
compaction
or
get
through
some
compactions
that
were
backlogged,
whereas
before
maybe
they
weren't
able
to
keep
up
the
block
list
is
out
of
the
blacklist?
Length
is
kind
of
growing
like
do
you
have
a
rough
number
on
that.
B
Right
now
we're
our
block
list
on
the
busiest
cluster
hovers
like
I'm,
I'm
graphing
like
a
month.
Let
me
like
do
one
day:
it's
like
hovering
between
30
and
31k.
Right
now,.
B
Yeah,
it
kind
of
translates
to
longer
chord
times
like
our
koi
times,
seem
to
be
hovering
more
around
like
10
seconds
now
it
used
to
be
more
like
around
five
seconds
when
the
block
list
was
shorter,.
A
Yeah,
I
think
maybe
1819
is
about
as
big
as
we
saw
it
when
we
were
at
our
largest
cluster
sizes,
so
maybe
there's
some
stuff.
We
could
tweak
right.
Certainly
so
doing
your
window.
Size
down,
allows
more
compactors
to
participate
in
the
early
time
slots
which
will
help,
but
it
also
makes
for
more
blocks
in
the
long
run,
because
you
have
a
shorter
time
slice.
A
The
things
I
guess
I
would
look
at
are
like
how
many
blocks
per
time
window
exist.
Maybe
you
could
increase
so
there's
a
setting
that
is
the
maximum
block
size
it'll
make
as
well
as
like
the
maximum
objects
it'll
put
in
a
block
size.
If
you
bump
those
up.
Maybe
it's
refusing
to
do
some
compactions
that
could
attempt,
because
those
are
too
low.
I
don't
know
what
those
are
by
default
or
even
what
we
have
to
set
up.
But
I
know
those
are
some
limiters
on
what
the
compactors
will
do.
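A minimal sketch of the limiters being discussed, under the compactor's compaction block; the key names follow Tempo's documented compactor options, but the values are purely illustrative, not recommendations from the call:

```yaml
# Sketch: ceilings that can make a compactor skip an otherwise-possible
# compaction; raising them lets bigger blocks be produced.
compactor:
  compaction:
    compaction_window: 1h            # time slice each compaction cycle covers
    max_compaction_objects: 6000000  # max traces combined into one block
    max_block_bytes: 107374182400    # max compacted block size (~100 GiB)
```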
B
Oh
seven
days
or
eight
days,
oh
eight
days,
yeah,
and
it's
also
like
I'm
bringing
this
up
just
we're
we're
in
a
state
where
we
want
to
keep.
We
want
to
increase
like
the
the
volume
of
traffic
we're
sending
to
tempo
right
now,
we're
like
at
ten
percent
of
our
sample
volume,
which
is
like
a
number
that
doesn't
really
mean
something
out
of
like
in
this
context.
B
So
you
do
a
lot
of
of
ifs
and
what
that
number
means,
but
like
we're
looking
to
like
bring
it
up
to
like
20
and
30
percent,
like
our
goal
is
to
have
everything
unsampled,
basically
in
tempo,
and
it's
just
like
right
now.
We
want
to
make
sure
that
are
blocked.
This
is
under
control.
I
think
for
the
rest
of
the
settings
we're
running
close
to
the
defaults.
D
For eight days, if you do the math, the least number of blocks you could have is however many compaction windows fit into eight days, and that's only a couple of thousand. So that's off by an order of magnitude; something doesn't sound right. Are you still running your custom compactor? Yes?
B
So
yeah
we
didn't
have
time
to
update
to,
but
now
that
1.3
rc
is
out
like
my
goal,
is
to
update
like
our
test
environment,
to
that
and
if,
if
it's
good,
then
we,
I
think
that
yeah,
I'm
I'm
not
like
excluding
the
option
that,
like
something's
up
something's
different
there,
and
maybe
that
is
a
reason
but.
B
Somewhere, yeah. To be honest, it was hard to see exactly what was happening before, just because we had a lot of these huge traces, you know, five million spans and up. Now those are mostly gone, and we're getting a better picture of what happens with the cluster when stuff is more normal, or at least we don't get a really large number of these very large traces anymore.
B
Yeah, actually we've seen like 15 million spans too. It's not that they're short traces; those traces run for hours and hours and hours. We're moving our stuff to use span links instead, and that's fixed the problem; it's just that we haven't done it everywhere yet.
D
Yeah
now
there
is
a
change
in
1.3.
The
ingester
will
be
more
strict
about
allowing
rights
for
those
large
traces.
So
previously
what
as
soon
as
it
flushed
from
memory,
it
would
allow
more
rights.
So
it
kind
of
escaped
the
max
trace
size
limit.
So
we
did
tighten
that
up
a
little
bit,
so
hopefully
that
won't
actually
cause
any
ingest
issues
that
you
don't
want,
but
that
would
protect
against
some
of
these
larger
traces
once
they
hit
the
compactor.
They
just
wouldn't.
B
All
these
settings
are
we
we
disable
them
like
max
max
race
for
user
and
max
bytes
per
trace.
This
is
like
we
set
this
to
zero
because
because
we
don't
have
a
way
of
knowing
basically
at
this
moment,
but
I
don't
think
this
this
long
term.
I
think
the
goal
is
to
bring
this
back
under
control,
eventually
also
I'd
love.
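For reference, these limits live under Tempo's overrides block; a minimal sketch matching what's described above, where zero disables the limit:

```yaml
# Sketch: the two limits discussed, disabled (0) as on the call.
overrides:
  max_traces_per_user: 0   # live traces an ingester will accept per tenant
  max_bytes_per_trace: 0   # writes to a trace beyond this size are rejected
```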
B
Also, having those as metrics is awesome. But actually I have kind of a follow-up, which is: do you have recommended CPU and memory allocations for queriers? Now that we have way more people using it, we're seeing more querier OOMs, and we want to know what would be ideal, what you guys allocate right now. I think we're on like two CPU and four-gig limits per querier. And how many queriers?
A
We have nothing formal. Our queriers use about 100 millicores each, one hundred to two hundred. I think we have maybe two gigs allocated, and they sit around 1.5, just because Go holds on to memory, not because it needs it.
B
Yeah, metrics seem to indicate that they mostly don't use that much, but it's spiky. Sometimes, perhaps, it's our large traces, I'm not sure, but we have a lot of restarts, I mean like hundreds per querier, because of OOMs.
A
That's
really
weird,
the
only
thing
I
can
think
is
people
are
querying.
These
15
million
span
traces
because
at
a
reasonable
number
of
spans,
like
I'd,
say
our
queries
never
like
when
we
have
big
traces.
What
we
tend
to
see
is
the
compactor
zoom,
basically,
while
they
try
to
while
they
try
to
recombine,
which
I
know
you've
seen
as
well.
Yeah.
B
A
That's
that
is
a
surprising
thing
to
I
don't
know.
If
I
guess
I
would
try
to
track
it
down
to
you
know
what
is
being
queried
at
the
time
and
see.
Maybe
hopefully
you
can
track
it
down
to
which
thing
you
know
which
trace
has
been
queried,
but
how
many
queries
do
you
have
as
a
question?
It's.
B
Not
that
much,
I
have
a
query:
volume
is
kind
of
slow,
it's
kind
of
low
still
adoption
like
is
is
growing,
but
it's
I'm
not
saying
like.
We
probably
don't
have
multiple
quarries
per
minute.
Okay,.
A
Yeah,
it
probably
is
somebody
well,
my
guess
would
be
it's
somebody
hitting
a
big
query,
but
you
could
also
consider
increasing
your
the
shard
so
there's
a
configuration
option
to
say
how
many
shards
to
take
a
choice
by
id
search
into
if
you
make
that
larger
it'll
make
the
job
smaller,
because
each
one
will
be
smaller
and
then
you
could
skip
scale
out
your
queries.
I
think
we
have
25
small
queries.
Basically,
so
maybe
like
a
lot
of
small
queries
with
a
lot
of
jobs,
would
be
a
better
match.
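The shard setting being referred to is on the query frontend; a minimal sketch with an illustrative value:

```yaml
# Sketch: more shards = each trace-by-ID job covers a smaller slice,
# spreading the work across more, smaller queriers.
query_frontend:
  query_shards: 50
```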
A
Then
your
query
front
end
will
aim
once
all
of
the
jobs
hit
that
and
it
tries
to
recombine
a
5
million
span
trace,
something
we've
been
talking
about,
which
we
haven't
done,
but
it
sounds
like
it
might
help
you
as
well
is
start
having
some
of
these
limitations
on
the
query
path,
which
we
don't
know.
So,
if
the
on
the
query
path,
it
sees
it's
about
to
attempt
to
return
it.
You
know
500
megabyte,
trace
or
something
it
just
stops
and
returns
an
error
instead
of
booming.
B
Yeah,
I
think
definitely
the
one
thing
that
we
know
is
that,
like
people
sometimes
will
just
like
accidentally
try
to
quarry
a
large
straight
like
they
don't
know
that
it's
a
large
trace
and
then,
if
we
had
a
way
to
cleanly
returned
this
trace
is
too
large,
because
I
know
some
that's
what
some
vendors
do
also
and
then
we
could
say
well,
oh,
this
is
too
large,
like
maybe
don't
query
it
or
but
okay.
A
B
B
It's
like
oh
something's
happening
like
no,
no
queries
are
responding
anymore
and
then,
like
yeah,
somebody
did
a
big
query
or
queried
a
large
trace
and
yeah,
but
we're
kind
of
we
understand,
like
the
the
the
correct
ways
like
to
actually
limit
how
big
they
are
when
they
come
in,
instead
of
not
limiting
it
and
then
also
then
causing
problems
on
the
quarterback,
but
like
yeah,
I
think,
having
this
setting.
A
All right, thanks, Gabriel. Cool, anything else, team?
F
Yes, sorry for keeping this waiting until the very last minute, but I want to do the "oh, and one more thing." Tom Wilkie always does this, and I've always wanted to do it. Anders, who's on this call, wrote a blog post about Tempo and Linkerd, and I owe him a huge apology, because I worked with him on this and I was supposed to follow up, and it went up and I didn't know; he told me that it actually went up. So this is awesome, and sorry about that.

F
I mean, Anders was chatting on the Tempo channel about Linkerd and Tempo, and I messaged him and said, "hey, you should turn this into a blog post," and it happened.
G
The
the
the
tldr,
at
least
for
linkedin
is
that
they
only
support
the
b3
propagation
format,
so
that
kind
of
logs
into
what
you
can
do
and
yeah,
but
other
than
that
it
was
yeah
smooth
sailing,
yeah.
A
Yeah, that's so weird, because Istio only does Zipkin too.
G
What I've heard from Linkerd, I think they're waiting for the official spec to stabilize, the OTel, OpenTelemetry spec thing, but that might take a while. So yeah, cool. Thank you for accepting my blog post, good stuff.
A
Thank
you
thanks
anania,
for
putting
that
together,
cool
all
right.
That's
that
I
think
it
was
a
good
one
appreciate
everybody
for
showing
up
feel
free
to
jump
in
our
slack
and
ask
questions.
Of
course,
there's
the
github
repo,
a
million
ways
to
contact
us
and
gabriel,
like
we
were
talking.
If
you
want
to
follow
up
with
a
configure
something,
we
can
help
diagnose
that
issue
a
little
bit
and
maybe
get
your
block
list
down
from
a
crazy
30
thousand,
but.