From YouTube: Grafana Loki Community call [2023-03-02]
Description
A few highlights from the call:
1. New patch releases v2.7.3 and v2.7.4
2. LID (Loki Improvement Document) discussions
3. Multi-day query splitting in Grafana (demo)
Meeting notes are available in Google Docs: https://docs.google.com/document/d/1MNjiHQxwFukm2J4NJRWyRgRIiK7VpokYyATzJ5ce-O8/edit#heading=h.xma6rt7n0xbi
A: Hello everyone, welcome to our March Loki Community call. I am Kaviraj, and I work as a software engineer at Grafana Labs. As usual, I've shared the meeting notes in the chat for people who are joining for the first time. You can use this link to see the agenda, and if you have any questions or topics to discuss, feel free to add them there. We'll try our best to go through everything, and at the end of the session we can also have an open mic.
A: All right. In terms of new releases, we don't have much. We have just two tiny patch versions since the last Loki Community call, so we'll quickly go through those.
A: So yeah, I just put some highlights here, but I also linked the exact changelog, where you can see the exact changes that went into the releases. Mostly it's about fixes.
A: 2.7.3 has a fix related to delete requests: when you submitted a delete request with the start and end times exactly the same, it would panic, and we have a fix for that. We also have some panic fixes in 2.7.4, but also some other things, like this one. I think this is the one I did, for the CRI tags. This is one thing that was reported from the community; it's related to Promtail.

A: So what happened there is: we added this support for CRI tags, just a bit of context there. Because of the upper cap on the number of characters in a line, the runtime splits long lines into partial lines; before this, Loki didn't have any support for that and just treated everything as a single line. So we added this support last year, but it didn't care about which stream a log line was coming from, so there was a chance that log lines from different streams could end up joined into some other stream during ingestion. This release should fix that. There is also a fix for the Windows event log, where it was being scraped incorrectly. That one is also from the community, and now it should be scraped fine.
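To illustrate the CRI partial-line issue described above, here is a minimal hypothetical sketch (not Promtail's actual code) of why partial lines have to be buffered per stream rather than globally:

```go
// Sketch of the CRI partial-line bug and fix: partial lines ("P") must be
// buffered per stream, not globally, or pieces from different streams get
// concatenated. Hypothetical illustration only.
package main

import (
	"fmt"
	"strings"
)

// criEntry is a parsed CRI log line: "<ts> <stream> <P|F> <content>".
type criEntry struct {
	stream string // e.g. "stdout" or "stderr"
	tag    string // "P" = partial, "F" = full (final) piece
	text   string
}

// joinPartials buffers partial pieces per stream and emits a complete line
// when the final "F" piece for that stream arrives.
func joinPartials(entries []criEntry) []string {
	partial := map[string][]string{} // keyed by stream: this is the fix
	var out []string
	for _, e := range entries {
		partial[e.stream] = append(partial[e.stream], e.text)
		if e.tag == "F" {
			out = append(out, strings.Join(partial[e.stream], ""))
			delete(partial, e.stream)
		}
	}
	return out
}

func main() {
	// Interleaved partial lines from two streams stay separate.
	entries := []criEntry{
		{"stdout", "P", "hello "},
		{"stderr", "P", "oops "},
		{"stdout", "F", "world"},
		{"stderr", "F", "again"},
	}
	fmt.Println(joinPartials(entries)) // [hello world oops again]
}
```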
A: If anyone worked on this feature, or if you have any more questions on these fixes, feel free. I just pasted some of the highlights, if you want to go through them, but today I think we can spend most of our time discussing the LIDs, and we also have something from Ivana, from the Grafana side. So before moving on to the LIDs, I just want to ask if anyone has any questions on the patch releases.
A: LIDs are for changes to different components that need some consensus with the contributors or the community maintainers. So what you do is you first go and create this LID, and then, before you start working on the feature, you get the consensus. A LID can be approved or rejected for different reasons, and we try to keep the LIDs merged, even if rejected, for historical reasons.
A: Feel free. If not, we can go ahead. Do you want to discuss the scheduler LID?
D: No worries, yeah, I can start with the first one, which is an improvement document for improving how the scheduler treats the quality of service across multiple users within a tenant. Kaviraj, you may want to open the documentation link.
D: Query fairness between tenants is already established in the scheduler today, but if you have a noisy user within a single tenant, that single user could theoretically block the other users' queries from the same tenant. The idea there is to make the scheduler aware of a concept of something like a user (it doesn't need to be a user; it can be any other differentiation within the tenant), so that, for example, the users within a tenant are all treated equally and all get the same quality of service within the tenant.
D: The first proposal there was to just add a second layer that introduces this concept of users, or whatever, which we don't really want to do. The other proposal is to make a fully hierarchical scheduler, which can build a recursive tree of queues, where, with a certain API, instead of only specifying the tenant (which is mandatory anyway), you can also specify in which bucket within the tenant a certain query should be scheduled. And since this is hierarchical, you could divide a tenant into users, and a user into multiple other sub-applications or dashboards or whatever. Each of these queues then basically does a round-robin pull from one of its sub-queues, and so you can guarantee fairness across multiple actors within the tenant.
D: It was accepted, going with the second proposal, the fully hierarchical queue, mainly because from the implementation point of view it's not much different from just adding another layer, but it opens up many more possibilities, since it's much more flexible in how you can do quality-of-service control within this tree of queues.
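For anyone who wants to picture the accepted design: below is a minimal sketch of a recursive tree of queues with round-robin dequeuing. This is a hypothetical illustration, not the actual Loki scheduler code; all names are made up.

```go
// Minimal sketch of a recursive tree of queues with round-robin dequeuing.
package main

import "fmt"

// node is either a leaf holding queued requests or an inner node holding
// sub-queues (e.g. root -> tenant -> user -> dashboard).
type node struct {
	name     string
	requests []string
	children []*node
	next     int // round-robin cursor over children
}

// enqueue walks (and lazily creates) the path of sub-queues, then appends
// the request to the leaf.
func (n *node) enqueue(path []string, req string) {
	if len(path) == 0 {
		n.requests = append(n.requests, req)
		return
	}
	for _, c := range n.children {
		if c.name == path[0] {
			c.enqueue(path[1:], req)
			return
		}
	}
	c := &node{name: path[0]}
	n.children = append(n.children, c)
	c.enqueue(path[1:], req)
}

// dequeue pulls one request, visiting children round-robin so that every
// sibling gets an equal share regardless of how long its own queue is.
func (n *node) dequeue() (string, bool) {
	if len(n.children) == 0 {
		if len(n.requests) == 0 {
			return "", false
		}
		req := n.requests[0]
		n.requests = n.requests[1:]
		return req, true
	}
	for i := 0; i < len(n.children); i++ {
		c := n.children[(n.next+i)%len(n.children)]
		if req, ok := c.dequeue(); ok {
			n.next = (n.next + i + 1) % len(n.children)
			return req, true
		}
	}
	return "", false
}

func main() {
	root := &node{name: "root"}
	root.enqueue([]string{"tenant-a", "user-1"}, "q1")
	root.enqueue([]string{"tenant-a", "user-1"}, "q2")
	root.enqueue([]string{"tenant-a", "user-2"}, "q3")
	for req, ok := root.dequeue(); ok; req, ok = root.dequeue() {
		fmt.Println(req) // q1, q3, q2: user-2 is not starved by user-1
	}
}
```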
D: Okay, that was the first thing. The second proposal, or the second document, was about changing the sharding within the index gateway. This one is not on the website yet because it hasn't been merged.
D: So the problem we are facing with the index gateways right now is that we shard index requests to the gateway per tenant, which means on the client side we hash the tenant ID and then execute the request against one of the assigned index gateway server instances. That means every tenant always hits the same index gateways. Additionally, the sharding across index gateways has replication, with a replication factor, so a request doesn't always go to a single server instance but to multiple, usually three by default.
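For context, this is roughly what per-tenant sharding with a replication factor looks like; a minimal hypothetical sketch, not Loki's actual hash ring implementation:

```go
// Sketch of per-tenant sharding: hash the tenant ID and pick "replication
// factor" many consecutive instances. The effect described in the call: a
// tenant always lands on the same fixed-size set of index gateways.
package main

import (
	"fmt"
	"hash/fnv"
)

func instancesFor(key string, instances []string, replicationFactor int) []string {
	h := fnv.New32a()
	h.Write([]byte(key))
	start := int(h.Sum32() % uint32(len(instances)))
	out := make([]string, 0, replicationFactor)
	for i := 0; i < replicationFactor; i++ {
		out = append(out, instances[(start+i)%len(instances)])
	}
	return out
}

func main() {
	gateways := []string{"gw-0", "gw-1", "gw-2", "gw-3", "gw-4"}
	// No matter how big or small the tenant is, its requests always go to
	// the same three instances.
	fmt.Println(instancesFor("tenant-a", gateways, 3))
}
```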
D: The problem with that is that in multi-tenant systems you have small tenants and large tenants, and sharding just per tenant ID means every tenant gets access to the same amount of resources: it's just the replication factor that defines the number of instances.
D: But in the ideal case, you want that the bigger the tenant and the more data it has, the more resources it is given, and that it is sharded properly so you can scale horizontally more easily. What we have to do right now to scale the index gateways is add more index gateway instances and increase the replication factor, but that then applies globally, to every single tenant, which doesn't make a lot of sense.
D: So there are three proposals. The first one was to define a dynamic replication factor for each tenant.
D: Basically, instead of having a fixed number of replicas for each tenant, you set it dynamically based on the number of available instances.
D: Say every tenant has access to thirty percent of the instances, and the replication factor is calculated accordingly. This has the same problem, that each tenant is treated equally. The other proposal would be to allow a replication factor per tenant, and then you could shard accordingly.
D: You just add a shard ID to the tenant ID when creating the hash, so a bigger tenant could have multiple shards, and therefore different hashes and a different number of assigned index gateway instances. The third proposal is different from the other two: it removes the concept of tenant sharding and instead shards based on the index files that a query will access. The index file name is hashed, and the files are distributed across the index gateways.
D: This is a more general approach, which also gives us the opportunity that if a query touches not only one day but multiple days, it would touch multiple index files for those days, and they would also be sharded across multiple instances. All right, those are the three proposals.
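To make the difference concrete, here is a tiny hypothetical sketch of the options: in each case, only the key that is fed into the hash ring changes, while the instance lookup (as in the previous sketch) stays the same.

```go
// Hypothetical key construction for the three proposals. Only the key fed
// into the hash changes; the ring lookup stays the same.
package main

import "fmt"

// Proposals 1 and 2: still per tenant, but a big tenant can be spread over
// several shards, each hashing to a different set of instances.
func tenantShardKey(tenantID string, shard int) string {
	return fmt.Sprintf("%s/shard-%d", tenantID, shard)
}

// Proposal 3: shard by index file. A multi-day query touches one index file
// per day, so its work naturally spreads across instances.
func indexFileKey(filename string) string {
	return filename
}

func main() {
	fmt.Println(tenantShardKey("tenant-a", 2))
	fmt.Println(indexFileKey("index_19418/compacted.tsdb")) // made-up file name
}
```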
D: Yeah, that was probably way too deep into the details. You can read the PR as well.
A: Has this one already been approved?
D: No, this one is still in an open state. I think we have a tendency towards the third proposal, which is more generic and also gives more options to tweak.
A: Cool, makes sense. I have one quick question about the other proposal, the scheduling one. Do we have any benchmark or comparison of how it's going to impact multi-tenant scheduling? I'm just curious; I didn't find it in the doc.
D: Since it's not implemented yet, I can't say how it would influence that. But what we will definitely see is this: with the idea that you have multiple sub-queues for a single tenant and treat all sub-queues equally, a single user (that's what we would pass into the scheduler as the second level in Grafana Cloud, the regular Grafana user) won't be able to influence the other users within the tenant as much. I mean, there will probably still be some influence, because if you have an extremely large query with a lot of sub-queries, a single user will have a bigger queue than the other users. But since we're doing round-robin and not weighting the queries, everyone should get their equal share within the tenant.
D: It's the same concept, exactly: the shuffle sharding concept still applies. That is kind of the first level. The first level is slightly different from the other levels, because the first level needs to take into account that a single tenant can have multiple queries to execute.
A: Okay, cool, good to know. I'll stop after this, just one more question on this one: are we planning to ship this this quarter, or...?
D: Yeah, we've started with the implementation, and this is going to be shipped this quarter.
A: Yeah, if you don't have any questions, then I think we have two other LIDs. I don't think we have the contributor on the call, but maybe, if he's here, can you please shout out? I'm not sure.
A: Yeah, okay. I think Christian has some review going on with this LID, so if Christian is here he can also give some ideas on this. But yeah, if...
F: Thanks a lot. So this one concerns the ruler component; just a quick recap first.
A: You were explaining the ruler component itself; after that, we didn't hear you, okay?
F: All right, yeah. So the ruler itself behaves like a querier, in the sense that it evaluates the queries that get applied to it, and it knows how to interact with the store and how to query the data that's in memory in the ingesters. As such, it also doesn't have any way of parallelizing queries.
F: So what Mimir did recently, and what we're planning to do as well, is to implement remote rule evaluation. What we'll be able to do is have the ruler just act as a client, effectively like when you're using Grafana or LogCLI to query Loki: we'll just have the ruler send the query and farm out its execution to the full read path, so that would include the query frontend.
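Conceptually, the change swaps the ruler's embedded query engine for a client that calls the query frontend over HTTP, so rule queries get the same splitting, sharding, and queueing as any other query. Here is a minimal hypothetical sketch of that idea; the interface and names are made up, and only /loki/api/v1/query is Loki's real instant-query endpoint:

```go
// Hypothetical sketch of remote rule evaluation: the ruler delegates query
// execution to the query frontend instead of evaluating locally.
package ruler

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// Evaluator abstracts how the ruler evaluates a rule expression.
type Evaluator interface {
	Eval(ctx context.Context, query, ts string) (string, error)
}

// remoteEvaluator sends the query through the query frontend, so rule
// queries benefit from splitting, sharding and queueing like any client.
type remoteEvaluator struct {
	frontendURL string // e.g. "http://query-frontend:3100" (made up)
}

func (r *remoteEvaluator) Eval(ctx context.Context, query, ts string) (string, error) {
	u := fmt.Sprintf("%s/loki/api/v1/query?query=%s&time=%s",
		r.frontendURL, url.QueryEscape(query), url.QueryEscape(ts))
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, u, nil)
	if err != nil {
		return "", err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}
```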
F: So by moving that query execution to the full read path, we get all of those benefits, and that results in fewer rules missing their evaluations. That's pretty much what happens if your rules run too slowly: they back up, and then they cause missed evaluations, which are really not good. It means you can miss alerts, or you can have gaps in your recording rule metrics.
F: That's really what this is aiming to achieve, and it's something we hope to implement within this quarter, which ends, I believe, at the end of April. So we should have this soon. I also want to reiterate about these LIDs that they are really designed for community collaboration. So even if a LID has been approved and merged, or if there's one that's still outstanding, we really want the community to engage and to ask questions, and we really want to offer you all an opportunity to share your voice and your use cases around these features. So if you have a specific use case for this one, or for one of the other LIDs, absolutely share it. We really want as much feedback as possible, and please do read through the first LID over there, the introductory one, LID-0001; it explains what we're trying to do with this whole process and what the procedure looks like.
A: Cool, thanks Danny. If anyone has any questions, like Danny said, feel free. If you want to go read the docs first, feel free, and then you can also engage in the issue itself; feel free to comment on the PR. We're happy to hear from you. All right.
G: Okay, so hey everyone, my name is Ivana and I am part of the team that works on the Loki data source and logging in Grafana. Usually our team shares on these Loki community calls either features or improvements that were recently released or are going to be released very soon, but today we have decided to do something different: we would like to share with you a feature, or improvement, that is currently in very early stages of development, and we are looking for early feedback on it.
G: So let me switch to Grafana. The improvement I would like to talk about is query splitting, multi-day query splitting, and basically this improvement is aimed at users who run queries that span multiple days. So let's say our query is over the last seven days.
G: Let me show you how it works. I'm simulating a slightly slower network here, and when I run the query, as you can see, each day is filling in one by one, and, as I said, at any point I can cancel the query, and that way I will only see the data for the already executed sub-queries.
G: This works for metric queries and for log queries, but, as I said, it's currently in the development stage, so there might be some bugs, there might be some experiences that you notice that are not the best for you, or something that you actually really like.
G: So if you find anything, I have linked here the Grafana GitHub issues URL, where you can create an issue or share your feedback, and I have also linked the Loki community Slack channel where, again, if you have any feedback, if this is solving your problem, if you would like to see some changes, or if you just want to share what you think, we would love feedback from everyone who tries it out.
H: Hi Ivana, thank you for sharing this. This is really huge; this will definitely improve the way people are querying Loki, and we certainly have people querying multi-day things and getting gigabytes of data back. One question about the implementation: is this shooting multiple queries? Is this somehow piggybacking on something new in Loki that I haven't foreseen, that is part of the API?
G: By the way, Sven, if I say something that is incorrect, please jump in, because I personally wasn't working on this feature, but the people who are working on it couldn't be here. So basically, what we do if you run a multi-day query, let's say, as we have here, over the last seven days, is we split it, currently day by day, so we run seven one-day queries. And this is currently the only splitting we do, which is by day. Did I answer your question, Arsen?
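A rough sketch of the splitting described here: cut the requested time range into one-day sub-ranges and run each as its own query. This is a hypothetical helper for illustration; the real logic lives in Grafana's Loki data source.

```go
// Hypothetical sketch of day-by-day query splitting: cut [start, end) into
// sub-ranges of at most 24 hours and run each as its own query.
package main

import (
	"fmt"
	"time"
)

func splitByDay(start, end time.Time) [][2]time.Time {
	var ranges [][2]time.Time
	for cur := start; cur.Before(end); {
		next := cur.Add(24 * time.Hour)
		if next.After(end) {
			next = end
		}
		ranges = append(ranges, [2]time.Time{cur, next})
		cur = next
	}
	return ranges
}

func main() {
	end := time.Now()
	start := end.Add(-7 * 24 * time.Hour) // "last seven days"
	for _, r := range splitByDay(start, end) {
		// Each sub-range becomes its own query; results are merged as they
		// arrive, and cancelling keeps the sub-results already fetched.
		fmt.Println(r[0].Format(time.RFC3339), "to", r[1].Format(time.RFC3339))
	}
}
```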
H: Yeah, this answers my question. So this means, if this is enabled by default, we will have basically way more queries against our Loki instances, smaller queries. So my question on top of that is: have you thought about leveraging the stats API, or something similar from Loki? You may want to split seven days into seven one-day queries, but it might be just two megabytes of data.

So why shoot multiple queries against two megabytes of data? Maybe make it more data-driven in the future, to say: okay, if I'm accessing gigabytes of data, then yes, split it. Loki is basically doing the same thing behind the scenes, but have a little bit of heuristics here before shooting too many queries for a very small amount of data. The overhead of having seven queries, or X queries, over the wire might just not be worth it, I'm pretty sure.

When TSDB lands, we will have way more features at our disposal to answer query sizes from the index rather than from the chunks. So it might be a thing to cooperate on here, making this more robust with a couple of heuristics, or at least having a look at this over the next iterations.
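A sketch of the heuristic Arsen suggests: ask Loki's index stats endpoint (GET /loki/api/v1/index/stats) how much data the query would touch, and only split when it is above some threshold. The endpoint is real; the wiring and the threshold below are hypothetical.

```go
// Hypothetical heuristic: consult Loki's index stats endpoint and only
// split a query when the estimated data volume exceeds a threshold.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strconv"
	"time"
)

// indexStats mirrors the stats endpoint's response body.
type indexStats struct {
	Streams uint64 `json:"streams"`
	Chunks  uint64 `json:"chunks"`
	Entries uint64 `json:"entries"`
	Bytes   uint64 `json:"bytes"`
}

// shouldSplit returns true when the estimated volume is above threshold.
func shouldSplit(base, query string, start, end time.Time, threshold uint64) (bool, error) {
	u := fmt.Sprintf("%s/loki/api/v1/index/stats?query=%s&start=%s&end=%s",
		base,
		url.QueryEscape(query),
		strconv.FormatInt(start.UnixNano(), 10),
		strconv.FormatInt(end.UnixNano(), 10))
	resp, err := http.Get(u)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var s indexStats
	if err := json.NewDecoder(resp.Body).Decode(&s); err != nil {
		return false, err
	}
	return s.Bytes > threshold, nil
}

func main() {
	split, err := shouldSplit("http://localhost:3100", `{job="app"}`,
		time.Now().Add(-7*24*time.Hour), time.Now(), 1<<30) // ~1 GiB, made up
	fmt.Println(split, err)
}
```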
G: By the way, this is a great question, and as you can see here, we already, in a sense, use the stats endpoint to show you how much data will be processed. This is definitely something we talked about because, on the other side, you can have a query that is, let's say, over the last hour but touches a lot of data, and we won't split that. So there can be, you know, the other side of this; it was definitely discussed.
G: It's definitely something we keep in mind and would like to do, or consider doing, in the future, but for this first iteration we decided to basically test the visualization, to make sure everything is working correctly and that the user experience is good. We decided to go with this simple approach, but that's definitely something we will consider improving in the future.
G: I think it's only on main; I think it was added after the branch was cut, so you have to be on main and you have to enable the feature toggle.

H: Cool, thanks.
D: Yeah, my question would be: since the sub-queries are done in a sequential way, what if one of these sub-query requests fails, for whatever reason? Are the rest of the queries cancelled as well, or are the rest of the queries executed, so that just one part of the visualization is missing?
H: At least something visual, like: hey, this is a partial result, something happened. A banner or something like that. Usually people are using the logs and making something out of them, even if they are partial, because they get at least something, yeah.
A: To follow up on Christian's question: is this the same, I mean, is this only for metric queries, or for both log and metric queries?
C: Basically, and this also might be good to mention because we are looking for early feedback here, there's a little hack: if you want to compare a split (chunked) result and a non-split result, you can just add "do-not-chunk" to the query's ref ID (the letter A here), and then we don't chunk, that is, don't split, that query.

This is super good for testing, if you want to see results back in a dashboard or something, or if you want to compare the two things.
H: Have you thought about making it independent from multi-day, making it, let's say, multi-something, for example splitting within an hour, or within a day? Because if yes, there might be a coordination improvement. For example, if you send a query to Loki, it gets split into half-hour chunks by default, if I remember the setting correctly. This means, if Grafana is already getting the small sub-query results and aggregating them, we might not need to do that job in Loki anymore, or at all; I mean, you know, the query frontend is doing exactly this piece, splitting and aggregating, so there might be no need to do the job twice, for example.
H: This is just an idea, food for thought, let's say. It's not really something huge, but if how you split the queries becomes arbitrary and configurable, it might be a nice thing to coordinate it with the sub-query splitting in Loki. I'm not sure if Mimir does this; I'm pretty sure they have...
C: Yeah, so at least from the front-end side, from the Grafana side, there's another little hack where you can, at least for now, to try it out, define the time range on which we split. I currently don't remember how it was set up; I can share it in the doc afterwards. So basically, you can configure the days, or the time range, on which we split, and I guess we will also, at one point, add it to the Loki data source configuration itself.
H: ...for normal users, so yeah.
A: Right, does anyone else have any questions? Not just about Grafana; anything else we discussed so far. Feel free.