From YouTube: Tempo Community Call 2021-12-09
A: This is the best I have. Cool, so we have a small agenda as always, and we've left it blank again on purpose. We always do this, the idea being that if you have ideas, we want you to put them in the doc. I'll go ahead and link the doc in chat. You may have seen it on the Slack channel, but I'll put it in the chat here as well.
A: Yeah, feel free to add anything to the agenda, and we'll just go down the list and address as much as possible in the time we have. You're also welcome to just unmute and ask questions. This group is small enough that it can be a casual chat. This is not necessarily a rehearsed presentation; it's just an opportunity to talk about Tempo with people who are using it, who have questions about it, and so on.
A: We'll start with this: we have a new maintainer who's not on the call. What happened to Zach, is he off today? Did he have a day off? Do we know? Yes.
A: I think it's Zach. All right, that's cool. Zach Leslie is our latest maintainer. We just put up a vote yesterday and merged it today. He's done a lot of work, particularly in our automated testing areas, as well as some work on Tempo itself, helping us set up clusters and doing a lot of these things. So Zach is our newest maintainer.
A: Cool. Let's talk a little about 1.3.0 and 1.2.1. Since our last call, I think we've cut 1.2.1. We had a bug where compactors were failing to move through empty objects, which we haven't totally gotten to the bottom of. It's not too concerning: these objects, although they exist, don't really hurt anything. So we pushed a PR that basically fixes the compactor's handling of them; that went into 1.2.1, and the fix will just keep compactors from failing constantly. Marty put that together, I think. All right, is the changelog done for this? I think we realized this morning... the changelog, yes.
A: Oh, it's not merged. Okay, we'll merge this after the call, and then I'll link the changelog here; it's just one line for the bug fix, basically. Also, Tempo 1.3 is coming up, so let me talk about that a little bit. We really wanted to do a somewhat shorter cycle, but with the holidays it's just hard to get everybody on the same page, so Tempo 1.3 will be coming in January, after the holiday period. The big thing there will be full backend search merged: we will have the ability for the queriers to do this work, as well as a way to build a serverless architecture.
A: We operate Tempo in all three of the major cloud providers, so we'll need to get them all eventually, but I think at the release of 1.3 it will be available for Google Cloud only, or at least it will be obvious how to do it in Google Cloud only. You could probably wrangle it into the other ones if you had the time, but we'll document it all eventually. We're not really sure what the costs will be at our scale.
A: We ingest, I think, six to seven hundred megabytes a second right now, and there are still some questions in the air about what it costs to just pile through all this data and brute-force it. We will not be dropping the experimental flag, due to some of this. The idea is that we're going to build this massively parallel architecture to scan traces and find traces matching certain criteria, and then start piling on improvements. Once we feel it's reached a fair break point in terms of cost to query, then we'll drop the experimental label and make it a supported part of Tempo proper. So it's kind of up in the air for us. We're going to be releasing this on Grafana Cloud no matter what it costs.
A: Basically, unless it's way more than I expect. But it might be prohibitive for some other people with their open-source Tempo installs. So we'll say more in the next community meeting, when we'll have more data, and we'll give more details with the 1.3 release. We'll have a lot of details about the costs of this at different ingestion break points, and we'll have a lot more data then. Right now we don't have good data, but we kind of know the path we want to take. So 1.3 will have that, as well as some other bug fixes and improvements.
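[Editor's note: to give a feel for the "massively parallel, brute-force" search model described above, here is a toy sketch: shard the backend blocks across workers and scan each in parallel for traces matching a predicate. Everything in it (types, names, the in-process goroutine fan-out) is a hypothetical illustration, not Tempo's actual query path, which would fan out to queriers or serverless functions.]

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

type trace struct{ ID, ServiceName string }

// scan fans out one worker per backend block and collects the traces
// that satisfy the match predicate.
func scan(blocks [][]trace, match func(trace) bool) []trace {
	var (
		mu   sync.Mutex
		hits []trace
		wg   sync.WaitGroup
	)
	for _, b := range blocks {
		wg.Add(1)
		go func(b []trace) { // one worker per block; go really wide
			defer wg.Done()
			for _, t := range b {
				if match(t) {
					mu.Lock()
					hits = append(hits, t)
					mu.Unlock()
				}
			}
		}(b)
	}
	wg.Wait()
	return hits
}

func main() {
	blocks := [][]trace{
		{{ID: "1", ServiceName: "frontend"}, {ID: "2", ServiceName: "cart"}},
		{{ID: "3", ServiceName: "frontend-proxy"}},
	}
	hits := scan(blocks, func(t trace) bool { return strings.HasPrefix(t.ServiceName, "frontend") })
	fmt.Println(len(hits)) // 2
}
```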
A: I did want to also highlight a recent change if you're pulling from main right now: we do have a breaking change. We re-vendored OTel to get up to date, and they changed their default port, so we went ahead and changed our default port along with it. Their old default port was in a reserved range: there are official port ranges, and one of them says "don't ever make this a real port, because the kernel is going to use it for outgoing requests." They chose one in that range for unknown reasons, 55680, and then they moved it to 4317. So 4317 is the current default OTLP/gRPC port, and we have moved to it as well. Just a heads up if you pull main or when we cut 1.3.
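[Editor's note: for illustration, a client pushing spans over OTLP/gRPC would now target port 4317. A minimal sketch using the OpenTelemetry Go exporter follows; the hostname tempo:4317 is a placeholder, and the exporter would normally be wired into an SDK TracerProvider.]

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
)

func main() {
	ctx := context.Background()
	// 4317 is the current default OTLP/gRPC port; older setups used 55680.
	exp, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("tempo:4317"), // hypothetical hostname
		otlptracegrpc.WithInsecure(),             // plaintext, for the sketch only
	)
	if err != nil {
		log.Fatal(err)
	}
	defer exp.Shutdown(ctx)
	// In a real setup this exporter is passed to an SDK TracerProvider.
}
```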
A: Cool. And then, besides the compaction fix we mentioned, we also did some memory improvements. That has helped some, but it's not totally fixed on our side, and I know others are having issues with compaction memory for extremely large traces as well, so we'll continue to look at this. Marty's also going to talk about some options for improving, or maybe not improving but changing, our backend trace format and how that might impact compactor memory. He'll talk about that in a second. That's kind of all I've got for 1.3, but yeah, it should be coming around January, if there are any questions about that. We do have a milestone set up with the things we want in 1.3.0; I should probably link that. It might be a little tight, and we might push some stuff to 1.4. Again, the holidays just make three months disappear, basically, way faster than normal.
A: For 1.3, for me, the big one, out of the six or seven things on here, is the issue titled "unhealthy compactors do not leave the ring after rollout." I know some people are having issues with this. We have only seen it once, and I don't know what the difference in environment is that's causing it. We just don't understand.
A: So what we'd like to do is move to a new implementation of the ring that lets us establish a kind of timeout period on the compactor, so you can configure something like: if you don't see a compactor for five minutes, just forget it, automatically. We're still not sure why this is happening, and 1.3 almost certainly won't fix the root cause, but we can at least provide options to configure an automatic forget timeout period, so you don't have to do this manually or go about it in some other manner.
A: If we were seeing this internally all the time, we'd have more cases where we could reproduce it and hopefully find it ourselves, but for some reason we're really struggling to reproduce it. Other than that, I'm not sure if there's anything anybody really wants; feel free to talk about it here, or ping us in Slack or whatever. If you want us to prioritize something, we can try to fit it into 1.3.
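[Editor's note: for a feel of what the automatic forget could look like, here is a toy sweep over ring members; all types and method names are hypothetical, not Tempo's actual ring code.]

```go
package main

import (
	"fmt"
	"time"
)

type member struct {
	ID            string
	LastHeartbeat time.Time
}

type ring struct{ members map[string]member }

// forgetUnhealthy drops members whose heartbeat is older than forgetAfter,
// which keeps a crashed compactor from lingering in the ring forever.
func (r *ring) forgetUnhealthy(forgetAfter time.Duration, now time.Time) {
	for id, m := range r.members {
		if now.Sub(m.LastHeartbeat) > forgetAfter {
			delete(r.members, id)
			fmt.Printf("forgetting compactor %s after %s of silence\n", id, forgetAfter)
		}
	}
}

func main() {
	now := time.Now()
	r := &ring{members: map[string]member{
		"compactor-0": {ID: "compactor-0", LastHeartbeat: now},
		"compactor-1": {ID: "compactor-1", LastHeartbeat: now.Add(-10 * time.Minute)},
	}}
	r.forgetUnhealthy(5*time.Minute, now) // compactor-1 is forgotten
	fmt.Println(len(r.members))           // 1
}
```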
C: Yeah, sure. So within the Tempo squad we were thinking about opening up the design process a bit. We create a lot of design docs internally right now as we develop new features, but the downside of this is that these design docs are internal to Grafana Labs, and they don't really have to be internal if they're only about Tempo. So we're looking for a process, a way to just make them public in the first place.
C: So right now we're thinking about the process, so expect to see something posted soon, maybe a PR to set up a process, and we'll probably post something in Slack as well. But yeah, the goal would be to be able to share these design docs in public before we start implementing them. Not sure if there's anything else.
A: Yeah. Mario's going to talk a little bit about our next feature. I think this is one of the first ones we want to provide a public design for, for people to comment on, just so others who are using Tempo can give input: people who might have an interest in the feature and want to have some say in how it goes. Mario is now queuing up what we're talking about, so go ahead.
F: Yeah, we prepared a small presentation; it will be short. Can you see it all right? Yeah? So we want to present something that we're working on, which is server-side metrics.
F: At least that's the internal name so far. What we want to do is base this on existing features of the agent; our intention is to move some of these features from the Grafana Agent to Tempo.
F: This is the situation right now: we have different processors in the agent, in our tracing pipeline, with which we are able to process traces and extract data from the tracing metadata, and we can build different things from it, like RED metrics and service graphs, which I will mention shortly. And the future is: why don't we move these features to Tempo, so we can generate them server-side?
F: So why are we doing this? Well, for one, not everyone is using the Grafana Agent, so this is very powerful for supporting non-agent deployments. There are a thousand ways of ingesting traces, and you don't even have to run a tracing pipeline; it's not a mandatory component. So yeah, moving these features to Tempo would allow us to support non-agent deployments.
F: Also, most of these features rely on load balancing across different agent instances, which is not a trivial task and adds significant overhead. That is something that could be more easily managed if we do it in Tempo. And finally, we hope this will be easier to operate, since you then only need one component to have both things, Tempo proper plus these extra features, instead of two, Tempo and the Grafana Agent. So what are we targeting? As I mentioned earlier, we're targeting two of the features that we currently have in the agent: service graphs and span metrics. The first one, on the left, is service graphs, which are a visual representation of the relationships between various services.
F: It works by inspecting spans and looking at the span.kind attribute, so it can match client and server spans and, with that, generate metrics that represent edges between nodes in a graph. The other one is span metrics, which basically generates metrics from your tracing data automatically: essentially request, error, and duration metrics, so RED metrics, out of your spans. You get auto-instrumentation out of your trace data.
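[Editor's note: to make the client/server matching concrete, here is a minimal sketch of how service-graph edges could be counted. All names are hypothetical rather than the agent's or Tempo's actual code; span metrics would similarly bump per-span request and latency counters.]

```go
package main

import "fmt"

type Span struct {
	TraceID      string
	SpanID       string
	ParentSpanID string
	Kind         string // "client" or "server"
	Service      string
}

// EdgeKey identifies one client->server edge in the service graph.
type EdgeKey struct{ Client, Server string }

func main() {
	spans := []Span{
		{TraceID: "t1", SpanID: "a", Kind: "client", Service: "frontend"},
		{TraceID: "t1", SpanID: "b", ParentSpanID: "a", Kind: "server", Service: "checkout"},
	}

	// Index client spans by (trace ID, span ID) so a server span can find
	// the client span it is the child of.
	clients := map[string]Span{}
	for _, s := range spans {
		if s.Kind == "client" {
			clients[s.TraceID+"/"+s.SpanID] = s
		}
	}

	// Each matched pair increments a counter for that edge; in the real
	// feature these counters become Prometheus metrics.
	edges := map[EdgeKey]int{}
	for _, s := range spans {
		if s.Kind != "server" {
			continue
		}
		if c, ok := clients[s.TraceID+"/"+s.ParentSpanID]; ok {
			edges[EdgeKey{Client: c.Service, Server: s.Service}]++
		}
	}
	fmt.Println(edges) // map[{frontend checkout}:1]
}
```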
F: Both features are generated in the ingestion path, so the Grafana Agent, or Tempo in the future, will process traces and generate this data in the form of Prometheus metrics. We already have some early thoughts. Koenraad, do you want to go over them?
C: Yeah, sure, thanks, Mario. We were already thinking about an architecture in Tempo to support this, since this is a whole new piece of functionality we're adding to Tempo. So far, Tempo ingests traces and then lets you query and search them, but this would be a feature in which we ingest traces and then extract metrics to push them somewhere else. So we were considering a couple of options, including integrating this into the distributor or maybe the ingester, but what we feel most comfortable with right now is adding a separate component dedicated to generating these metrics. This would be an optional component that you run if you want these metrics.
C: What would happen is that the distributor would receive all the traces as normal and write to two destinations: it would write to the ingester, to store traces in the backend, and it would also write to the metrics-generator component (we still have to figure out a good name for that), so that this component can generate the metrics. This has a couple of advantages. A separate component means it can be optional: you can just leave it out if you don't want it.
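[Editor's note: a rough sketch of that fan-out, with hypothetical types rather than Tempo's actual distributor code. The ingester write stays on the critical path, while the optional metrics-generator write is best-effort, so a generator failure does not fail trace ingestion.]

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

type TraceBatch struct{ Spans int }

type pusher interface {
	Push(*TraceBatch) error
}

type distributor struct {
	ingester  pusher
	generator pusher // nil when the optional component is not deployed
}

func (d *distributor) push(b *TraceBatch) error {
	if err := d.ingester.Push(b); err != nil {
		return err // losing the trace write fails the request
	}
	if d.generator != nil {
		if err := d.generator.Push(b); err != nil {
			log.Printf("metrics-generator push failed: %v", err) // logged, not fatal
		}
	}
	return nil
}

type fakePusher struct{ fail bool }

func (f fakePusher) Push(*TraceBatch) error {
	if f.fail {
		return errors.New("overloaded")
	}
	return nil
}

func main() {
	d := &distributor{ingester: fakePusher{}, generator: fakePusher{fail: true}}
	fmt.Println(d.push(&TraceBatch{Spans: 10})) // <nil>: generator failure doesn't fail ingest
}
```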
C: If this component explodes for some reason, say there's a spike in CPU or memory usage, it doesn't impact the rest of the ingest path. So that's kind of why we're leaning towards this architecture. And basically we're planning to write a more detailed design document and publish it on GitHub as well, so people can see how we would set this up. There are a lot of details to work out, for instance around running with multiple replicas: can we run multiple instances of this component, and how will we shard requests between them? There are just a lot of details there; I mean, it's the most important part of the architecture.
C: And okay, I think we have one final slide: the timing. We will be working on this starting now, so expect to see a lot of improvement there, a lot of new things, in the coming months. It's probably not going to be included in 1.3, but maybe the next release; we'll see. And we also want to run this in Grafana Cloud as well, but it's kind of early days right now, so we can't really give a timeline.
A: Right. I think for us, providing this on the backend is really powerful. We really want to offer these features natively in Grafana Cloud; that was one of the big drivers. And then, like they said, the efficiency is far better. For span metrics there's not a huge difference, because you're just generating metrics per span, which is pretty straightforward, but the difference for service graphs is pretty amazing, because you have to load-balance service graphs and you have to aggregate traces in one spot.
A: It's kind of like tail-based sampling in that way, so doing that on our side will just open it up for a lot of people, and then also for other people who are running Tempo OSS.
A: The ability to generate these metrics server-side, if you want them, would I think be more attractive than generating them in the agents. So we're pretty excited about those features, and we also see the opportunity to expand on this: any other kind of analytics or automated metrics we could generate from spans or trace data could go in this same place. Cool.
F: Yeah, one other thing, I'm not sure if we mentioned it: an extra thing that we're looking forward to working on is cardinality, which is not in the best place in the agent right now. It can be too high if the cardinality of the traces themselves is high, and we've had experience with cardinality problems internally, especially with span metrics. So that is something we will be working on and looking to improve.
A: Cool. So I really doubt that would be in 1.3; I don't really see any chance of it being in OSS 1.3. But maybe in the coming releases, 1.4 or 1.5, and we'll have, of course, more information in the community calls and changelogs and blog posts and documentation and everything.
A: So you should continue to hear about that, and we'll keep you all up to date so you can use those features. Like we said, this will be in a proposal, a PR to the repo, and we'll highlight it in Slack and other places. If you are using Tempo and you have an interest in this, please jump in there and comment on how you feel about the design.
A: And how you feel about the feature set. This is the opportunity for us, as a community, to agree on what we want this to do and what we can all use. So please feel free to comment, give us feedback, and help us make something we can all use. Cool, okay. Next on the list: I really wanted Marty to chat about some of the work he's been doing lately. He's been looking at...
E: Yeah, so our current block format is really, I guess, compressed pages of OpenTelemetry protobuf, and that becomes a cost driver for unpacking and deserializing it, because in order to inspect much of it, the general libraries have to unmarshal the entire trace. So we've already started tinkering with, or exploring, the use of FlatBuffers, which is a format that requires no unmarshalling.
E: You can just load the bytes into memory and they can be read in place. We're currently using that in our ingester search, so what we're exploring is using it in the block format on the backend as well. That's also not something we'd have in 1.3; it would come later. And it has pros and cons, so it's not a clear win.
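[Editor's note: the zero-copy idea can be illustrated with a toy fixed-layout record, where fields are read in place from the byte slice instead of decoding the whole record up front. This shows only the principle, not the actual FlatBuffers API or Tempo's schema.]

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encode writes a record as [len(name)][name][duration], a stand-in for a
// schema with known offsets.
func encode(name string, durationNanos uint64) []byte {
	buf := make([]byte, 4+len(name)+8)
	binary.LittleEndian.PutUint32(buf, uint32(len(name)))
	copy(buf[4:], name)
	binary.LittleEndian.PutUint64(buf[4+len(name):], durationNanos)
	return buf
}

// durationOf reads one field directly from the buffer. Nothing else is
// decoded or allocated, which is what makes scan-heavy search cheap.
func durationOf(buf []byte) uint64 {
	n := binary.LittleEndian.Uint32(buf)
	return binary.LittleEndian.Uint64(buf[4+n:])
}

func main() {
	buf := encode("GET /api/traces", 1_500_000)
	fmt.Println(durationOf(buf)) // 1500000
}
```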
E: The approach we would take is to make this configurable, because different workloads or use cases might benefit from it differently: you're trading write-time performance for read-time performance. FlatBuffers are more expensive to create than protocol buffers, so you'd weigh it; if you're doing heavy search or reads, it might be better.
E: Search is a big part of this, especially for the serverless approach we're doing, which is large-scale parallelization: a brute-force, very simple backend block format, but then going really wide with the search. Unmarshalling is a big part of that cost, so we think we could bring it down, and that would be really great. It would also have some benefits in other areas, like the compactor compacting these blocks together: there's a lot less work that needs to be done in order to bring different trace segments together. So I think that's something we'll keep exploring, and I think there are definitely some good things there. The timing on that would be later.
E: Yeah, I don't really have any numbers; just a general rule of thumb might be that it's something like six times faster to read, but also one and a half to two times larger in size. So it increases storage costs, but it's a lot faster to process.
A: Yeah, the way I see it going forward, and we don't have huge detail here, is that this FlatBuffers work is an iteration on improving the backend: we're not really changing anything about the basic way we write traces to the backend, we're improving the method by which we can parse and search them, and so improving efficiency there. After that, I think we're going to regroup as a team and look at a v2 storage format.
A: Columnar comes up over and over again. We even have an engineer, Annanay, who's not on this call, but he's been having some fun this week (we have a hackathon week) with some columnar storage formats, maybe some kind of dynamic index. We just don't know yet.
A: We're probably going to do this FlatBuffers iteration for improved search performance, and then regroup as a team and talk about what the next step would look like for better searching and better metrics generation as well.
A: So I see this as an iterative step, taking the exact same write and read path and just changing the format; the next steps we expect to be more extreme, and we'll be talking about those in the coming months after we lock down FlatBuffers. Cool, I think that's kind of what we have, and that's about half of the allotted meeting time. If anybody has any questions or comments or whatever, you're welcome to... oh, I think there was actually one in chat, right?
F: Yeah, so there is already an early version of the UI; you can check how to enable it, it's essentially a feature toggle in Grafana. But as we're officially rolling it out to Grafana Cloud, we were looking to fit in some improvements, since we think the UI is still a bit rough. So I think in the coming months or so we're looking to make the official rollout in hosted Grafana, in Grafana Cloud.
A: And right, you'll just write to a Cortex endpoint or the cloud Prometheus endpoint. You don't even really have to do that: in your hosted Grafana you can use any Prometheus data source. It could be the hosted one, it could be one you run locally, it's up to you, really. And then you can write the agent metrics, the service graph metrics, there and use it. But yeah, it's a little bit of a science experiment to set up.
A: So yeah, it should be ready shortly. I mean, I don't even think we can turn this on yet in the version of Grafana there; I think we just need a little more iteration on our side. But we would be glad to enable it early, once we feel a little more confident about the UI, if you're interested in that feature.
A: Perfect. Okay, any other thoughts, questions, holiday wishes, silly hands?
A: Cool, thanks. Like I said, this will be up, so please jump in there. This is an opportunity to give us feedback on the kinds of things you want to generate, and if we can do more, of course we will; we expect there to be a lot of value here.
A: Long term, we do have this idea of a query language, and you're going to be able to generate metrics and analytics out of that query language, but even then, the cost of that versus pre-generating metrics is enormous. So we will probably always do some kind of pre-generated, canned metrics where it makes sense. Thanks.
A: Cool. Well, it was a good community call. If there's nothing else, I appreciate everyone showing up, and thank you for your time.
A: Thanks, yeah, appreciate that. Have a great break, and we will see you all sometime next year.