From YouTube: Tempo Community Call 2022-06-09
Description
Tempo Community Call 2022-06-09
Discussion of new backend formats, TraceQL, metrics generator and usage stats.
A
Cool, hopefully that means the recording's started. All right, so this is the June Tempo community call for 2022. Thanks for coming. The agenda's over here; feel free to add anything you want. Also, I think towards the end we'll just have an AMA, Q&A kind of time period where we can chat about anything the community would like, but in the meantime we'll kind of go through these.
A
We'll go through these points. I'd say we don't have a huge agenda for today, but I think there's some cool stuff to talk about. And then, yeah, like I said, anyone's welcome to throw something in the chat, or unmute and talk, or just put something in the doc, whatever works best for you to get a question or talking point over to the team.
A
So, to start. Oh dang, I meant to do this before the call, but I clearly forgot; we'll just kind of go down this list here. Two TraceQL PRs are up. One was merged: Marty reviewed my design PR.
A
Yesterday there were some changes to it as I was writing the parser and running into some actual, real problems, you know, issues with implementation. While I was building it I made some small changes to make up for that. But for the most part the core concepts have not changed since we initially posted it; all the same ideas are there, just some very minor syntactic changes. I'm kind of looking for it, but I can't find it.
A
I want to post it in the doc. Man, it's been up for a while; I had to go three pages deep in the closed pull requests.
A
So this is the core concepts PR. Like I said, please review it if you want. And then secondly, a parser PR is up, and I think I have some small changes to make to it. Marty gave me some good feedback there, so I have some small changes to make before I merge it, but the parser's up, and hopefully we'll get that merged soon.
A
Now, to talk a little bit about where the team is and where we're headed next with TraceQL: we really wanted to get some momentum there. We've done a lot of work in terms of design, and I wanted to get some actual code on the ground, which is why we put this parser together. I think it's a good chance, again, for people who are watching the project and the community to keep an eye on our longer-term goals.
A
In particular, the backend is looking quite good. They can give you some details, but we have a functioning Parquet backend now and it's running in a development cluster.
A
But
in
order
for
trace
ql
to
be
implemented,
we
have
this
problem
where
we
have
to
be
able
to
run
those
queries
against
the
most
recent
traces
that
sit
in
the
adjuster
and
those
traces
still
are
in
the
older
formats.
So
I
think
the
next
thing
for
me
will
be
to
get
serious
about
that,
because
we
can't
move
forward
with
the
traceql
until
we
have
our
most
recent
traces
in
this
kind
of
same
column
or
format
or
something
similar
that
we
can
do.
A
So TraceQL is going to be shelved for a little bit, maybe a couple months, while I get into that code and start helping that team out. Annanay and Marty have been really carrying the load here; I'm going to get involved more heavily and start helping with that part of the code while they continue working on the backend, tuning it and getting it into some of our more production-level, larger clusters. But I'll let them give you more details there.
A
I'm also going to move this cloud run item below Parquet, because it is related to Parquet. So, TraceQL is in a good spot. Please take a look at that PR if you're interested in the language. I think the core concepts PR is a great start, and I think the parser PR is going to help a lot as well: you'll see test cases in there and all kinds of good information about what the language will actually look like as you're typing it to, you know, find traces.
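To give a flavor of what those test cases show: the proposal is built around filter expressions over spans, so a query along the lines of `{ .http.status_code >= 500 }` would select traces containing spans with a matching attribute. That snippet is illustrative only; the core concepts and parser PRs are the authoritative reference, and the exact syntax was still in flux at the time of this call.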
A
But yeah.
C
Sure, okay, yeah. Hi everyone. So yeah, we've been running this Parquet branch in a development cluster internally. It's doing about 10,000 spans a second, which is a pretty decent size, and we feel good about the performance: the memory usage, resources, speed, things like that. So it's looking good. What we're wrapping up here is what we would consider stable, but still experimental.
C
So the first link here in the document is the PR to actually merge this, and we're kind of putting the finishing touches on that right now. Once it's in, you could turn it on and play with it; it would be experimental at that point. And then the second link here in the document is what we would consider what we really need to make this production worthy.
C
So that's taking it to a much larger scale, plus some of these other things that are only really needed at that scale, like maybe some caching improvements. So that's the current state of it. What else could we talk about for that? It's really cool; we're really happy with how it's working so far.
A
Yeah, I'd say I've looked at that cluster some and done some of the queries across it, and it's definitely looking really good. I'm really looking forward to getting it into our ops cluster, which is 2 million spans a second; that's really where these ideas, I think, are going to get proven out. So the fact that it's stable and the resources are looking in line with what we were doing previously is amazing. So yeah: push it to ops, push it to prod.
A
Cool. Parquet in particular is going to allow us to search significantly faster, just because we're going to have to pull so much less data from the backend, and it parses faster. Why don't we put a link up to the repo we're using? I want to give credit to the Segment folks.
A
This thing is ridiculously good. When we first were looking at this whole problem of how to do a new backend, some Parquet libraries existed for Go, but, you know, Martin and I were playing with them and they were instantly non-starters; it took days to figure out that they were not worth pursuing. This is a library that actually gets the job done. It can be used at scale and in an actual production, large-scale application, so that's pretty exciting.
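To make that concrete (and to set up the Go 1.18 point that comes up next), here is a minimal sketch of the kind of API that made the Segment library attractive: its generic writer derives a Parquet schema from a Go struct. The Span struct and file here are hypothetical toys, far simpler than Tempo's actual block schema; only the segmentio/parquet-go calls are from the real library.

```go
package main

import (
	"os"

	"github.com/segmentio/parquet-go"
)

// Span is a toy row type; parquet-go derives the column schema
// from the struct fields and their tags.
type Span struct {
	TraceID      string `parquet:"trace_id"`
	Name         string `parquet:"name"`
	DurationNano int64  `parquet:"duration_nanos"`
}

func main() {
	f, err := os.Create("spans.parquet")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// NewGenericWriter relies on Go 1.18 type parameters, which is
	// exactly the language feature discussed below for Cloud Functions.
	w := parquet.NewGenericWriter[Span](f)
	if _, err := w.Write([]Span{
		{TraceID: "2f6a3b", Name: "GET /api/traces", DurationNano: 1_500_000},
	}); err != nil {
		panic(err)
	}
	if err := w.Close(); err != nil {
		panic(err)
	}
}
```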
A
Cloud Run: gonna have to pivot a little bit here. So, cloud functions, Google Cloud Functions: we've talked about this a lot before, but we use serverless to do our search, to do, you know, our searching in massive parallel. But Cloud Functions only supports Go 1.16, and the Parquet library, this particular library right here, is using Go 1.18 features, and they have support for 1.17.
A
But 1.16 is just missing some of the things they're using. We were trying to look at how to make this library 1.16-compatible, and it's possible, but we felt like we were digging ourselves into a hole. We didn't really want to put ourselves in a position where, with Google Cloud Functions, we were always having to wait on them to upgrade in order to use newer Go features and newer Go versions, because it would just be a constant fight.
A
So we've moved to Google Cloud Run internally in our ops cluster as of yesterday, and I should submit a PR, I'd say hopefully this week but maybe early next week, that changes the documentation over. Currently there's documentation about using Google Cloud Functions with Tempo; we're going to use Cloud Run instead, and the PR updates the build process and all that. So we're moving to Cloud Run, and we think we can get basically the same speeds. It was a little bit different kind of experience.
A
Another really cool thing is that Cloud Run is built on top of Knative, which I'm probably pronouncing horribly wrong, but Knative is a serverless platform on top of Kubernetes, so I think it opens up options in the future in terms of, you know, how we can run serverless with Tempo. If we integrate with Cloud Run, we're technically integrating with Knative, and in the future maybe we run Knative internally.
A
Maybe people who have clusters in places that don't have serverless technologies available will prefer something like Knative to AWS Lambda or Google Cloud Run. So I think it's overall a good switch. It was just kind of a curveball at the last second, and hopefully in the next couple of days we'll get up a PR that makes some of those changes.
B
It's interesting that the clouds have two different ways of running serverless and we're seeing similar speeds. It's also good in a way, because we're finally moving off of writing a zip file to GCS and running a function from that. So...
A
It'll be pushing a Docker container, a different kind of artifact, but still a very similar process, and then running that in Knative, or whatever the 17 ways to run containers in AWS are. Yeah.
A
I have no idea how many there are in Google Cloud. But are we going to move to 1.18 in Tempo soon? We're on 1.17 still; is there a reason for that?
A
No, off the top of my head I can't think of a place in Tempo where generics would make a ton of sense. Maybe there is, I'm not sure. I don't know if 1.18 came with any performance benefits; I feel like every Go version tries to give you a couple percent back here and there. So I think we should move forward regardless. Cool: metrics generator. Koenraad, Mario, you two needed to talk at the exact same time now.
D
Yeah, some metrics generator updates. So, to kind of summarize: we have a couple of really cool Grafana improvements in the pipeline right now, which I think will be released with the next version of Grafana (I'm not sure if the exact version is known). Colin was working on integrating the metrics from the metrics generator into the trace view, and Joey was also working on expanding the service graph page to integrate span metrics into a table. So those will be...
D
You know, I think those PRs are merged or almost merged, and will be getting into Grafana soon. The next thing we want to focus on with the metrics generator is including queues and databases in service graphs. So say you have a system which, you know, has two services that are decoupled using Kafka or RabbitMQ.
D
Whatever it is, these connections currently don't show up in service graphs, because the span kind is slightly different. But we also want to include them, so you can see on the service graph, like, you know: hey, you're using this queue, it's processing that many requests per second, and the average latency is this. So, just provide more context, and the same for databases. We're currently in the design phase there, figuring out, you know: how can we capture this information?
D
How can we efficiently store it and expose it to Grafana, and then also visualize it? And then the last point about the metrics generator: for the past six to eight months, Mario and I have been focused on the metrics generator, building this whole new component, designing it and, you know, running it in production now. So we're now approaching the end of this project, kind of. We'll be wrapping up with the queues and databases, also making sure that the last bits are in place, like dashboards, alerts, stuff like that, and from then on we won't be completely locked onto this project.
D
So the way it works is: usually the queue itself doesn't emit spans, but the process which pushes data onto the queue, or that consumes data from the queue, will create a span. This is also described in the OTel semantic conventions. They describe that if you're using a queuing system and you're pushing data onto a queue, you should create a span with the operation name, and the span kind should be producer; and the same for the consumer side.
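As a rough sketch of what that convention looks like from an instrumented producer, using the OpenTelemetry Go SDK (the span name and messaging attributes here are illustrative, not prescribed by Tempo):

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/trace"
)

// publish wraps a queue write in a span of kind producer; the matching
// consumer would use trace.SpanKindConsumer. Span kind is what the
// planned service-graph processing would key on to detect a queue hop.
func publish(ctx context.Context, payload []byte) {
	tracer := otel.Tracer("example/producer")
	_, span := tracer.Start(ctx, "orders publish",
		trace.WithSpanKind(trace.SpanKindProducer),
		trace.WithAttributes(
			attribute.String("messaging.system", "kafka"),       // illustrative
			attribute.String("messaging.destination", "orders"), // illustrative
		),
	)
	defer span.End()
	_ = payload // the actual queue write would happen here
}

func main() {
	publish(context.Background(), []byte("order #42"))
}
```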
D
So
there's
some
kind
of
conventions
there
and
we
plan
to
use
those
to
detect
when
you're
pushing
data
on
a
queue.
So
what
would
happen
is
service
a
uses,
the
queue
to
push
data
on
it
service
b,
reads
from
it,
so
the
span
from
service
a
and
the
spam
from
service
b
together,
we
will
be
able
to
you,
know,
see
the
queue
in
between
those
two,
even
if
the
queue
itself
doesn't
emit
any
tracing
and
similarly
for
databases.
C
I mean, correct me if I'm wrong, but I think it still requires the application to pass the trace context through the queue itself, so that the destination on the other side can resume that same trace. So either it's part of the message, like a trace ID in your message body, or maybe there's metadata on the outside of the message payload but, you know, on the message itself, right? So yeah.
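A minimal sketch of that manual plumbing with the OTel Go SDK: the producer injects the active trace context into string headers carried on the message, and the consumer extracts it so its span joins the same trace. The header map here stands in for whatever metadata your queue actually supports.

```go
package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/propagation"
)

func main() {
	// Install the W3C Trace Context propagator (it is not set by default).
	otel.SetTextMapPropagator(propagation.TraceContext{})

	// Producer side: serialize the current span context into headers.
	// With a real active span this writes a "traceparent" header; with
	// the bare background context used here it writes nothing.
	headers := propagation.MapCarrier{}
	otel.GetTextMapPropagator().Inject(context.Background(), headers)
	fmt.Println("message headers:", headers)

	// Consumer side: rebuild a context from the received headers, then
	// start consumer spans from it so they join the producer's trace.
	ctx := otel.GetTextMapPropagator().Extract(context.Background(), headers)
	_ = ctx
}
```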
A
So I actually asked this question forever ago of the OTel auto-instrumentation Java folks. The OTel auto-instrumentation Java client will pass context through Kafka automatically, but there was, like... I'm trying to find the question now. There's a specific way you have to do it; if you do it the wrong way, it won't work. If I can find it, I'll link it in the chat. But I guess what I was getting to is:
A
I think some clients do support automatic trace propagation, but I do think Marty's also right: in general, kind of check your documentation. You might have to, like, cobble something together in your producer and consumer libraries, but, you know, you might get lucky, basically. All right, I'm going to find this and I'll post the link in the doc.
D
But if there's anything you can make a bit more relaxed, you can also explore that, of course. And there's a second question, from Lucas, about what the cardinality of the metrics looks like. That really depends; it's very difficult to give a general rule because, as we've seen, it's different for every user. It depends on your architecture, the way you've instrumented, your volume; it's not a fixed formula.
D
So what can happen, for instance, is: if you have a fairly simple architecture with, like, three services, but they're processing a lot of requests, that does not necessarily mean you have high cardinality, because your services are simple and you might not have a lot of different span names. While on the other hand, a very complicated architecture that sees a very low number of requests could still generate a lot of active series, because you have a lot of unique services, span names and operations. So yeah.
D
Yeah, we haven't figured out a good rule yet. What we usually recommend, if you want to try out metrics with the metrics generator, is that you can enable it with collection disabled. What it will do then is ingest data and generate active series, but not collect samples and remote-write them yet; we only keep track of the number of active series. That way you can just run it for a couple of hours, or a couple of days, and you can see:
D
oh, it's, like, almost 10,000 active series. And if that's okay, you can enable collection, which will capture samples and remote-write them to your time series database. We do this because it's kind of unpredictable, yeah.
F
Yeah, just to quickly expand on that: there is a small section on cardinality in the service graph docs, which goes into what Koenraad was mentioning, hops between different services and how that will impact cardinality.
F
So it gives you a very rough formula to calculate the metrics cardinality. But yeah, the biggest problem is that you generally don't know your traces' cardinality, so the recommendation would be: just do a dry run with the metrics generator and get a feel for the number of active series. Still, if you want to deep-dive on the topic, that's a starting point.
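To give a feel for the shape of that kind of estimate (all numbers invented for illustration, and the per-series overhead depends on your histogram bucket configuration): suppose 10 services, 20 span names per service, and 3 status codes, with a 14-bucket latency histogram. Span metrics alone then come out to roughly

$$
10 \times 20 \times 3 \times (14 + 3) \approx 10{,}000 \ \text{active series},
$$

where the last factor stands in for the histogram buckets plus their sum, count, and a calls counter; service graph metrics then add further series for every observed client-to-server edge.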
F
Yeah, you can configure the maximum number of active series that you have per tenant, and once that threshold is reached, new active series will be dropped. It's also important to mention that we periodically prune series that are no longer active.
F
So if you hit that threshold, it doesn't mean that you won't ever get a new active series again: as soon as an active series is no longer updated, it will be taken out of the registry and there will be space for new ones.
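A small Go sketch of that registry behavior as just described (illustrative only, not Tempo's actual code): new series are dropped once a per-tenant cap is hit, and stale series are pruned so room frees up again.

```go
package main

import (
	"sync"
	"time"
)

// registry tracks when each active series was last updated.
type registry struct {
	mu        sync.Mutex
	lastSeen  map[string]time.Time
	maxSeries int           // per-tenant cap on active series
	staleAge  time.Duration // prune series idle longer than this
}

// observe records an update; series that would exceed the cap are dropped.
func (r *registry) observe(seriesKey string) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	if _, known := r.lastSeen[seriesKey]; !known && len(r.lastSeen) >= r.maxSeries {
		return false // cap reached: only brand-new series are refused
	}
	r.lastSeen[seriesKey] = time.Now()
	return true
}

// prune drops series that are no longer updated, making room for new ones.
func (r *registry) prune() {
	r.mu.Lock()
	defer r.mu.Unlock()
	for key, seen := range r.lastSeen {
		if time.Since(seen) > r.staleAge {
			delete(r.lastSeen, key)
		}
	}
}

func main() {
	r := &registry{lastSeen: map[string]time.Time{}, maxSeries: 10000, staleAge: 15 * time.Minute}
	r.observe(`calls_total{service="frontend",span_name="GET /"}`)
	r.prune()
}
```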
F
There's another question: can you filter by attributes, like ignore service names that don't match something? What you can do is define the dimensions that you want to appear on the metrics; these are attributes that exist on the traces and that will appear as labels on the metrics.
F
It's an allow list. By default there are only a few attributes that will get injected as labels into the metrics, and you can add any other attribute that exists in your traces. But other than that, there is no sampling, or any sampling-related technique, that can happen in the metrics generator; that should happen beforehand.
D
So, if I'm understanding the use case, would that be, like: you have some service A which creates a lot of traces and you don't want metrics for it, so you want to exclude it? That would definitely be interesting; we can write a feature request for that.
A
Cool. So yeah, the metrics generator: a lot of excitement in the community, which is cool. A lot of people have been interested in it, both in Grafana Cloud and from our open source users, which is awesome. And like we said, we're kind of wrapping it up a bit, but Lucas, and anyone else, if you have, you know, feature requests, just file issues on the GitHub repo and we'll kind of try to prioritize those in the coming...
A
whatever: months, quarters. I think Mario was going to tease the upcoming GrafanaCONline session. He had some awesome thing he was going to show us; he's not allowed to anymore, but he's going to make you want it and go to the session, I think.
F
Right, yeah. So GrafanaCONline 2022 is coming up next week. It lasts the entire week, from June 13th to June 17th. There is a very extensive agenda; you can check it out, register for free for the entire event, and watch as many or as few talks as you want. But obviously we want to highlight the Tempo talk that we will be giving at GrafanaCONline 2022.
F
So one year ago we announced Grafana Tempo 1.0 as this high-volume, low-cost tracing store, but back then it was limited to search by ID only, retrieving traces by the trace ID. Since then it has evolved so much: today we have search,
F
we're able to derive metrics from the ingested traces, and there are a lot of upcoming improvements and features. I think we've touched on all of them: TraceQL, Parquet, the new columnar format. So yeah, there is a lot of stuff that we will be showing, even for the metrics generator that we're wrapping up. I think Koenraad already mentioned there are still improvements coming up in Grafana, so we're trying to see how we can get more value out of the metrics that we are generating.
F
So we have span metrics and service graphs, and there is a lot we can still get out of them, in terms of jumping from one telemetry type to the other and just showing the information in a way that's more valuable to the user. So yeah, in the realm of APM there are going to be some previews that we will show in Grafana. So this will be...
F
This talk will happen on June 15th, so next Wednesday, at 5:30... sorry, 17:30 UTC; I don't know how to say that time. Yes, five-thirty in the afternoon. And even if you cannot attend, the video will be available on demand, I believe; I hope I'm not saying that wrong, but you can probably find that out on the GrafanaCONline page. And whether there will be an AMA at the end with any of the speakers? Yeah, there will.
F
Cool, nice. And yeah, I'm not sure if I'm missing anything.
A
I think you covered it all and more. Cool. So if you're interested in Tempo, or in watching someone run Doom on Grafana (either of those things, they're equally exciting), go check out the GrafanaCONline sessions next week. And yeah, we were really hoping to show off a new UI feature today, but we got the thumbs down on that, so we will be showing it off next week at the session instead. Hopefully you can attend and check that out. Cool.
A
Final piece of information: Zach is going to talk about a new Tempo feature for us, called usage stats. Go ahead.
E
Yeah. So this has been a conversation within the company for maybe a couple months now: different teams are working on trying to figure out what the community looks like, who's using the products, and what the shape of that usage is. So we're going to do the same thing for Tempo. We're going to add a feature that collects some anonymous metrics from your pods or from your instances and reports those to us, so that we can answer certain questions. For example, if a configuration is, you know, heavily used, we might want to invest more in...
E
you know, that particular feature; and, conversely, if a feature is not used in the community, maybe that's not something that's important for us to work on. So it should help us drive some decisions and kind of get an idea of what the shape of the community is and, you know, what the shape of the usage of Tempo is. Like I said, it'll be anonymous, and it'll be opt-out: there will be an option for you to disable it.
E
If, you know, sending anonymous metrics to us makes you a little bit nervous, you can turn it off, but it would help us, like I say, answer some questions. So if you don't have strong feelings, please consider leaving it on so we can get that information. Yeah, it should have very little impact on performance. I think right now the PR says it'll be reported every four hours, so this is infrequent.
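For the mechanics being described, here is an illustrative sketch (the payload fields, wiring, and logging are assumptions made for the example, not the actual PR):

```go
package main

import (
	"log"
	"time"
)

// report is a hypothetical anonymous usage payload.
type report struct {
	ClusterSeed string // random anonymous ID, persisted once in the backend
	Backend     string // e.g. "gcs" or "s3"
	Encoding    string // block encoding in use
}

// startReporter periodically ships a report unless the user opted out.
func startReporter(optOut bool, every time.Duration, collect func() report) {
	if optOut {
		return // the documented opt-out: nothing is ever sent
	}
	go func() {
		for range time.Tick(every) {
			log.Printf("would send usage report: %+v", collect())
		}
	}()
}

func main() {
	startReporter(false, 4*time.Hour, func() report {
		return report{ClusterSeed: "anon-uuid", Backend: "gcs", Encoding: "zstd"}
	})
	select {} // keep the sketch's process alive
}
```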
E
You know, it should have no impact on any of the components. In terms of storage, there'll be one additional file in your backend that you probably won't even notice. And yeah, I think that's pretty much it.
A
Give us some examples: what are some of the things that we're planning on sending to the mothership?
E
Yeah, so: some of the configuration items, specifically around, you know, maybe the encodings that are in use, the compression that's in use, what backends. We should be able to identify, even based on the IP that you're reporting from, which cloud you're on, and I expect to be able to see things like: are you running in Kubernetes or on bare metal?
E
Those kinds of things I think are really interesting. And where in the world you are: you know, we're a global company, we're international, and I think it's kind of interesting to see. Geo maps are always fun; they get everybody excited.
E
So
that's
one
thing
that
that
I'm
in
particular,
looking
forward
to
cool
all.
A
E
I
was
just
going
to
mention
also
potentially
scale.
You
know.
If
there's
you
know
different
sizes,
you
know
like,
what's
the
average
size
of
a
tempo
cluster,
I
think
that's
an
interesting
question
asked.
A
We really are going to get it production ready, get it into 1.5, start whispering internally about a 2.0 (but not say it so loud that it jinxes us), and that's going to be the next three to six months of Tempo's life. And then, once that is all in place and ready to go, TraceQL is going to kick back into gear. We have the parser ready, and it's going to be figuring out how to, you know, take these structures and then execute them against the Parquet blocks.
A
The metrics generator is kind of wrapping up a bit, but there are awesome features there. I'd like to thank Koenraad and Mario for that; I think they've done an amazing job pushing that project forward, pitching it to the team and convincing the team it was the right choice, and it absolutely has been. It's a great new feature for Tempo. And then usage stats and GrafanaCONline. Good community call! Open mic, karaoke... yeah. All right, Annanay.
A
What do you got? Can you sing "Stayin' Alive"? Saturday Night Fever... go.
A
You're probably too young to know that song. All right, I guess I've dated myself. Cool, everyone have an awesome week. Hopefully we'll see you at the GrafanaCONline session next week, but if not, at the community call in a month. Take care, and I'll see you when I see you.