From YouTube: Loki Community Meeting 2021-04-01
A: Welcome — this is the April 1st edition of the Grafana community... sorry, Loki community call, rather. Kind of a short agenda today, a couple of things to chat on. The Loki squad has been pretty heads-down cleaning up some of our internal work, and some of our team members are on loan to other teams.
A: Raise your hand, Owen. So the news this month is a little light, but there's still some interesting stuff to talk about. I think I'll start with the 2.2 release, because I'm pretty sure I talked about that — at least let me share my agenda here.
A: Well, I don't even want to go back further than that. So anyway, 2.2 was released, and we'll do 2.2.1 probably today or tomorrow, mainly for these two bugs right here: 3550 and 3502.
A: I would warn everyone in the strongest terms if you're using the pack and unpack features. I don't suspect many people are, because we didn't really advertise this much. We added it to support some use cases that we were trying to better handle — I think I talked about it last time. The idea with pack and unpack is that you can take some very high-cardinality source of logs.
A: In our example, we're working with a CI system that basically just generates pods as part of the build — thousands and thousands and thousands of them. Typically our Kubernetes scrape configs would have pod name as a label, but in this case that ends up with so much cardinality that it can become a little bit heavy for the index in Loki. Normal pod churn usually isn't a problem, but this specific use case was basically doing its best to generate pods as fast as possible. So what pack does in Promtail is let you take labels — pod, for example — and insert them into the log line. We take the normal log line, wrap it in a JSON object, and then let you select labels; instead of putting them in the index, they get shipped with the log line.
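A rough sketch of what that could look like in a Promtail config (the job name and label choice here are hypothetical, not from the call):

```yaml
scrape_configs:
  - job_name: ci-pods
    kubernetes_sd_configs:
      - role: pod
    pipeline_stages:
      # Embed the high-cardinality "pod" label in the log line itself,
      # as {"_entry": "<original line>", "pod": "<value>"}, instead of
      # shipping it to Loki as an indexed label.
      - pack:
          labels:
            - pod
```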
A: So this is a little bit of a stopgap until some better things come along in Loki — specifically, some overhaul to the storage to give us more ability to handle higher cardinality. But in the meantime, it helps if you find yourself in a use case where you're churning through a huge number of labels for something you don't want to remove — that pod label was still valuable for querying here.
A: So that's what pack and unpack were for. The unpack command sits in Loki and unwraps that — it's essentially syntactic sugar for a JSON parser followed by a line_format that replaces the line. However, it had a pretty nasty bug where, when it did that, it actually rewrote the log line — permanently. Unfortunate; we were trying to optimize for performance and pushed the bar a little bit too far there.
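On the query side, a minimal example of that unpack step (the selector and label value are made up for illustration):

```logql
# Restores the original line from the "_entry" key and turns the packed
# "pod" key back into a queryable label — roughly equivalent to
# writing: | json | line_format "{{._entry}}" by hand.
{job="ci-pods"} | unpack | pod="runner-7f9c"
```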
A: So if you were, or are, interested in pack and unpack, then you're going to want 2.2.1. Same for the split shard creation bug: if anybody's upgraded and finds they're getting errors that say "end time must be after start time," that's related to this — it usually hits on longer query durations — but 2.2.1 should fix that. And I think I'm also going to cherry-pick...
A: ...this labelallow stage, because it's a nicer way to get control over what labels are sent out of Promtail than having to do it with relabel configs, and it actually gives you some more control, because relabel configs happen before the pipeline stages. So that should be an easy one to add in there too. — Can you talk about what labelallow does? — Yeah.
A: So we have a labeldrop stage now, where you can essentially list any number of labels that you don't want to send to Loki. But that's cumbersome — it doesn't do regex matching or anything — so it's cumbersome if you don't know the set of labels. labelallow is the other approach, an allow-list approach, where you can say: I only want to send namespace, cluster, and pod (just examples) — those three labels are allowed to be sent, and everything else is dropped.
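A minimal sketch of the two stages side by side (the label names are just examples, as above):

```yaml
pipeline_stages:
  # Deny-list: drop only the labels named here; everything else passes.
  - labeldrop:
      - filename
  # Allow-list: keep only these labels; everything else is dropped.
  - labelallow:
      - namespace
      - cluster
      - pod
```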
A
That's
kind
of
a
lot
more
attainable
approach,
for
you
know
dynamic
environments
where
the
you
know,
labels
that
might
be
out
of
control.
This
could
be
useful
too,
so
you
know
the
the
default
scrape
config
that
we've
shipped
from
day,
one
in
the
helm,
chart
and
jsonnet
automatically
includes
all
pod
labels
as
indexed
labels
in
loki,
and
in
hindsight
I
feel
that
this
is
a
mistake.
A
I
don't
think
it's
changed.
He
changed
it
in
the
promtel.sh
file.
I
think
that's
still,
okay,
which
is
most
people,
aren't
using
unless
they're
doing
the
grafana
cloud
install
path,
but
at
this,
as
far
as
I
know,
I'm
pretty
sure
this
is
still
the
default
because
I
think
to
change
it.
It
would
need
to
be
communicated
well
because
it's
gonna
cause
labels
like.
So.
A
If
we
remove
that,
then
it's
a
it's
a
pretty
noticeably
changed,
behavior
that
people
might
be
relying
on,
but
the
problem
is-
and
we
see
it
more
and
more
now
is
like
with
istio,
because
istio
wants
to
put
like
sever,
700
or
800
labels
on
a
pod
and
well
or
you
know,
20..
You
know
still
way
more
than
what
we
really
want
for
loki
and
they
don't
really
add
any
useful
contacts
in
the
in
the
format
of
how
you
would
query
for
logs
for
an
app.
But
it
is
the
case.
A: And Istio is not the only one — there are a number of places where pod labels are added. We do it internally for different reasons, and then those became indexed labels, but they're almost never something you would query by — or maybe shouldn't be — and that ends up with you either hitting the label limit (the default of 15) or causing other trouble; it just unnecessarily grows the index.
B: I think this is a really interesting problem that we have, because it's illustrative of knowing there's a better way to do something, but feeling like it's really hard for us to make what we'd consider a backwards-compatible change, right? I think there have been a couple of examples historically where this has kind of been the case. And, like — how do we choose which option to take, right?
B: Do we make the choice that makes things better in the long run — that moves us towards a more ideal scenario, but that causes some migration pain? For instance, if we did this and changed it in the Helm charts, suddenly people might run into more problems with, say, out-of-order errors — that could be.
A: Historically I've been of the opinion that it's better to make small breaking changes for the long-term sanity of the project and its maintainers, and as a result I always really recommend everyone look at the upgrade guide. We really do our best to over-communicate that, as well as try to have breaking changes be immediately obvious. This one is harder, though, because it's not going to fail to start or anything like that.
A
It's
just
a
behavior
change
in
what
you
know
what
you
were
seeing
versus
what
you'd
start
seeing
it's.
Also,
technically,
you
know
the
helm.
Charts
are
a
separate
repo.
The
json
is
like
they're
sort
of
versioned
a
little
bit
differently
than
the
project,
so
it's
we
could
do
it
as
part
of
a
release
of
the
project,
but
it
doesn't
really
make
sense
to
tie
those
together.
B: I think there are two categories of errors here. The first one is: "Oh, I used to use this pod label to query, and now it's not there" — which kind of sucks, but is okay. The other one is: "I'm not ingesting logs anymore" — and that's pretty bad. That one really hurts, because you have to go figure it out, and the tooling for figuring out exactly how to do that is not super easy to discover if you're relatively new to Loki.
A: Yeah, yeah — I don't know, it's one of those things that needs to be done, and I'm not really looking forward to it. It's not ideal to have to make changes like that, but in the long run it will benefit everyone, in my opinion. All right, I saw your note, Diana. I think Diana wants to demo the new — so, Grafana 7.5 was released, and with that came a new label browser in Loki.
D: Oh, probably you should demo — I was just offering mine because it definitely only has my computer's stuff on it. But I would like to demo the site search at some point, because it's kind of cool.
A: Okay, I did that. I'm assuming you're looking at an Explore screen on my screen.
A: Yeah — so historically, when you were over here, next to... I don't know how well my pointer is visible, but it's bigger, just in case. This "Log labels" control would be a dropdown that would just list labels. In 7.5 you now have a label browser — or log browser; really it's a label browser, but I don't criticize anyone's choice of words, because I'm terrible at everything I've ever named, which is probably going to be evident in these labels.
A: This is a Loki instance that I run at home, and the way this works is you basically now have the option to select a label. Job labels are pretty consistently applied in most Prometheus and Loki scrape configs, and if you click one, it's going to show you, contextually, what other labels you would find on logs that were shipped with a job label. So let me do — yeah.
A: Sorry — those are the values that are available, and when I select a value, you can see here that the other labels that are accessible for it are highlighted too. If I start over with just job — there's no, you know, within any of these particular labels... like, I only know that "time fidget" has the type label, but not the version label.
A: So honestly, this is really nice, and there's the search field too, which is really handy if you want to search for values. My setup here has very few labels and very few values, so it's very easy to click through, but that's often not the case on bigger installations. So this is, like I said, new, so feedback is definitely encouraged if people have use cases where they'd like to see this...
A: ...improved, or new features — those requests should be made in the Grafana repo on github.com, because this falls onto the front-end side of things. But yeah, by all means — everyone enjoy.
D: There was a lot of UX work done on that, by the way — which is actually a great segue, because the Grafana YouTube channel, by the way, exists, and we've been posting community calls like these and our UX feedback sessions there. There was a UX feedback session yesterday where we talked about Loki pagination, so that should be posted publicly in a few days. If you have feedback or commentary when that's posted, take a look and tell us your thoughts, because that will help affect what the end result is.
D: So here is a link to our Grafana YouTube — I highly recommend checking some of those things out. There's a lot of great content.
A: Definitely. All right — so, Diana, you mentioned you had something you wanted to demo. I'm happy to — yes.
D: All right. So just today or yesterday they added this teeny tiny little icon, so you can do a site-wide Grafana search. If, for instance, you don't really know for sure what you want, you can type into that search and it will search everything, or you can limit it by content type there. This will search both our documentation and webinar descriptions; it'll search blog posts, all sorts of stuff.
D: So let's say I just want Loki blog posts, because I saw a Loki blog post a while ago and I want to find it — or say I want the Loki product page; you can get some of those. So anyway, check it out, and maybe you'll find something new, like this cool webinar on observability with Loki 2.0.
B: That exposes you like that, publicly? This is getting weird. Anyways — I think now we have some process to automatically publish webinars and things like that that we run, which I think is time-gated by a week or two of requiring emails for sign-ups. But the idea is that we run these things — especially from the engineering team, we really care about presenting useful demos — and we want to be able to use those to give people examples or feedback.
A: Nice. I had — it's not merged yet, but it is under review — so Aditya, one of the Loki maintainers, who has a real knack for adding really useful features (labelallow, as an example), has a PR open for GeoIP support. I can't remember the name of the company everybody uses, but the idea is you can use the sort of standard databases — there's a range of them; basically, there's a free version.
A: I think the part we have left to figure out is how to do the bundling of that — whether we're going to do any bundling, or leave it as an exercise for the user to download the database, in which case Promtail would need a path to it. But these GeoIP databases would be able to take an IP address and return geolocation information around it that you can use in a pipeline stage to enhance your log. There's a word I'm looking for there: enrichment.
A: And then, yeah, we did mention pagination — or pagination, take your pick — which should be coming in some form soon. My guess is probably the next release of Grafana, although maybe in beta at that point, but still.
A: I would say I've done pretty well — I think we've all done pretty well — without it, but it is coming; better late than never. That's it. That's the most words I can use to fill time about the fewest topics.
B: I think there's a greater trend here: we've noticed that some people use Loki in kind of different ways than we do internally, right? Another example would be tailing — the tail endpoint. Unless I'm debugging tailing itself, I don't think I've ever really used that.
B: Yeah, I maybe should not have said "we don't do"... oh.
A: So this reminded me of another thing that I want to talk about, because there was an issue opened this morning on community.grafana.com — the Discourse; yeah, Discourse, not Discord. A common feedback we get — extremely common, and usually in the form of this issue, people being mad at us or disappointed with us — is that the behavior they expect is different from what happens when they search for logs. And I have no judgment to pass on what people expect, or what's right or wrong here.
A: You will get back at most a line limit — by default, that's 1,000 log lines.
A: We actually did that for a long time, and then it ended up looking like you didn't have any logs until the very end, and then all of a sudden you'd have logs — because that's where the last 1,000 were. That was since changed to just show the histogram for the time period where we do have logs, because that's less confusing, but it also hasn't removed this confusion — or, I would say, unhappiness.
A
It's
a
nice
way
that
I
feel
like
some
of
the
comments
that
I
read
around
this,
where
you
know
everyone's
expectation
is
that
that
histogram
will
show
you
the
full
query
length
and
the
volume
of
logs
you
would
see
the
reason
it
doesn't
do
that
is
pretty
intrinsic
to
how
loki
works.
So
this
is
the
big
difference
between
loki
and,
let's
say,
elasticsearch.
A: ...for what this should look like, and I think that's why most people are unhappy when they have this experience. For Loki to tell you the rate of log lines for a time window, it has to go count all of them — every single one of them — and by default we chose not to do that, because a normal query can return a thousand log lines in less than a second.
A: In most cases, right — we can stop querying as soon as we get those thousand log lines. If you want to know what the histogram looks like, you can write a rate query: just take your same query, put rate() around it, and probably put sum() around that, and you will get back that histogram.
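Concretely, that transformation looks something like this (the selector and filter are hypothetical examples, not from the call):

```logql
# The log query you already run:
{namespace="prod"} |= "error"

# Wrapping it in rate() and sum() yields the volume-over-time series
# that the full histogram would show:
sum(rate({namespace="prod"} |= "error" [1m]))
```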
A: So it exists now. We've talked a lot about: should we just automatically do that? Should every query submit two queries — the log query and the rate query? Maybe this is the part where I have to be careful, because I don't want to have to eat my words, but I don't personally find any value in that histogram.
A: I've never used it in my life to answer a question. So I'll back up from that: I'm very likely just missing the use cases where it's fairly helpful, and maybe it's just really easy for me to write a rate query because I helped build Loki. So I will absolutely admit that my understanding and my use here are heavily biased by my experiences and what I know about the tool.
A: So, in conclusion — I'll just stop talking here — we're not quite sure what the best solution to this is. I think an approximation could maybe help here; we've talked about having some special case around how we do things to give back an approximation of the number of log lines — to give a meaningful histogram over the time range that's close enough. I would absolutely love to hear the use cases, because it's just very likely...
A: ...maybe the case that the logs I look at don't vary enough in volume for that histogram to have value, and everyone else could have a different use case. I don't mean to sound critical of anyone. This one's interesting to me because of how aggressively people tell us we're wrong about it, or how bad it is that we don't do it — so that is sort of the basis for my question: I would love to know why we're so wrong about it.
A: I think we're wrong about it here. I—
Yeah
and
it's
I
don't
know
it's
a
it's
a
hard
problem
for
loki
to
solve,
and
my
general
question
here
is
this:
the
problem
that
the
loki
community
wants
us
to
solve.
You
know:
there's
lots
of
features
right.
There's
lots
of
things
that
people
would
like
to
see
in
loki
is
this.
One
really
important
you
know
is
this:
the
is
this
a
real
deal
breaker
for
people
because
it
absolutely
sounds
like
it
is
like.
A: I don't just believe people when they say that they can't use Loki because of this — I mean, maybe they're being a little melodramatic — but I still think it's significant enough that they take the time to give that feedback. So I guess that's the hard question to answer. We always sort of look at — I guess we do have an issue for it... oh no, that's something else. I've got to look to see what issue we have for it.
B: ...about how we can actually implement this cheaply, but I don't really want to go down that road here.
A: You are now returning a much smaller subset of data than what the index knows about, so that histogram won't be accurately reflected as soon as you add a filter expression. That's where we just don't know, because we don't index this — and I think that's the difference here: different solutions will have an index that's more helpful than ours.
A: So this is a definite case where Loki's design makes this a hard problem for us, where for other systems it's not. I think getting feedback — having a clear issue here in GitHub that we can use to solicit feedback, and maybe posting that in a few places — just to gather use cases and really ask this question: is this more important than removing the ordering constraint from Loki?
A: Is this more important than a streaming query API, or custom retention, or deletes? You know — giving it some context. The other thing we can do is help provide a little bit better roadmap of the things that we think are towards the top. This is an interesting and sort of difficult problem: getting feedback from a community that we have sort of limited avenues of contact with.
C: You know, the time picker — to see, like, well, did this happen earlier? Because we're frequently diving into logs where we know a thing happened, but the user of our system just gave us something vague, like "oh, it happened around 10 o'clock" — gotcha, yeah — so we're like, well, that could mean anything. So, narrowing in the window.
C
Okay,
they
gave
me,
you
know
they're
doing
this
type
of
action.
You
know.
When
did
these
happen?
It's
a
lot
to
look
through
and
being
able
to,
so
they
sometimes
will
look
at
the
histogram
of
like
oh
yeah.
There
is
kind
of
a
tight
cluster
right
here.
Maybe
that's
when
they
met.
C: By design — it's not broken — but yeah, when you hit the limit real fast, because something's churning out logs at a high rate, I've often wished, like, oh, I wish there was just an arrow on the histogram so I could scroll to the next 10-minute chunk or something, you know.

A: Oh, that's a good idea.
C: That would probably get us there. But yeah — because otherwise you're just constantly screwing with, like, "well, okay, let me go change the time window."
A: What you suggested there — so, I want to start by thanking — you know, Zach, Chance, both of you — because I think I just did a really poor job of setting the stage for a good discussion there. In an attempt to maybe be a little humorous, I feel like I wasn't very open about having a discussion, so I really appreciate both of you...
A: ...stepping up to tell us about your experiences here. Because out of that — what you just suggested is a really interesting idea: what about an on-demand button for filling out the histogram? My reluctance to have it submit that query with every request is that that's wasted compute if people don't need the histogram, but an easy way to populate the histogram might be a really nice compromise here. Or — yeah, pagination would certainly help too. I'm thinking about that.
A: So — but yeah, and Chance, to your point about a sort of improved time picker: I guess I'm lying if I say I've literally never done this — I've never used that. I think I've found it, and maybe I find it's not useful because it's clearly not useful the way it works today. A little bit of bias on my part, for sure.
C: I have applications where, even if they're like "oh yep, we've structured the logs for our application," they've got four different middlewares in their Go app, and they log in different ways, and nobody knows how to control it. So yeah — the more mixed your log sources are, the worse it gets.
A: Awesome. I appreciate that discussion — actually really enjoyed it.
D: Yeah, this is why we have community calls — so that we can hear from the community. So, anything else on this subject, or really any others? And by the way — Zach, I posted it to you in the chat, but this goes for everybody — if you have problems that Grafana is not helping you with, post in the grafana-fails channel on the public Slack, because the UX team monitors that, and there is a reasonable chance that they will reach out to you for follow-up and for user research, because we do have, you know...
F: ...of it, and making it an opt-in instead of an opt-out, I guess, for the performance.
F: Yeah, usually it's quite a short interval — and that's just because, you know, with log lines I don't want them to have to look at a big buffer. Anyway, usually I can go as low resolution as I want with Loki, as opposed to metrics.
B: Yeah — I guess, for a little bit of context, unlike the irate stuff — it's generally helpful when you have things that change very quickly. Am I getting that right?
C: Brian — Brian has said that for visualizations irate is better, because you see the spikes at a smaller interval, and that rate is better for alarms. So that's where I use them differently.
A: So we are going to iterate over the docs soon enough. The big areas that I think we missed are good getting-started and how-to content. And — I don't know, what did I find today? If you go to the clients section, there's a section at the bottom (which doesn't actually format correctly, either) that talks about third-party clients, and there's this note there saying that the push API is not stable yet, so clients may change.
B: By the way, I had no idea that we had an "edit this page" link — it'll automatically go to GitHub for you and do that. That's so great.
D: Yeah, it's been normal — it's been in the Grafana OSS docs since forever, but we recently applied it, exactly like that, to other OSS documentation that we have, like Loki and Tempo, and we added an email feedback link in our non-open-source products, so that people can give us feedback on all of our documentation pages.
F: But the context is basically: you can do this in Promtail, so that's cool, but sometimes I want to adjust my histogram buckets, and it'd be cool to be able to do this at runtime with a query. I kind of just expect it to work like any metric query, where it basically counts up the values that are less than a particular value across the entire time range.
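For context, this is the Promtail side being referred to — a metrics pipeline stage whose histogram buckets are fixed at config time (the metric name, source field, and bucket bounds here are made up for illustration):

```yaml
pipeline_stages:
  - json:
      expressions:
        duration: duration
  - metrics:
      rpc_duration_seconds:
        type: Histogram
        description: Duration of RPCs observed in the logs
        source: duration
        config:
          # Bucket bounds are baked in here at scrape time; being able
          # to change them at query time is the ask above.
          buckets: [0.05, 0.1, 0.5, 1, 5]
```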
F: I think the biggest part would be: how do you do that against a label, instead of the value? Usually it's on the value of a metric or something, so that can be where it's challenging, I think.
F: And that would also be nice, but specifically I'm thinking — most of our use cases are these RPC jobs; they're incredibly frequent, and it would be great to just be able to categorize them by their latencies or durations. Right now most of our graphs use averages, or something like totals over time, basically, and we just kind of eyeball it to see if there are any weird abnormalities in that. But it's much harder to diagnose.
F: A bar-chart-style histogram, where it's just cumulative values — where each bar is, like: this is from zero to one, and this is from one to five. Or, I guess, it would be "less than."
F
Yeah,
I
think
we
used
the
regular
quantity
instagram.
So
for
that
at
one
point-
and
we
have
some
of
that
on-
like
our
metrics,
of
course-
that
we
have.
F
Is
it's
a
high
cardinality
stuff
for
like
per
tenant,
rpcs
and
so
doing
that
query
time
would
make
the
most
sense
for
this.
A: Because this would be one area — there's no way to generate this from Loki right now, and if you had a way to actually return buckets with counts in them, then you probably could. So that would be kind of interesting.
F: But it's incredibly, incredibly painful. I remember talking to Owen, and it's like — you can do lots and lots of comparisons and produce labels from that, but you're going to be basically doing the less-than-or-equal-to bounds for each bucket by hand, and I remember the exercise was like, wow, this is going to be a page long to get five buckets.
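The by-hand version being described looks roughly like this — one query per bucket bound (the selector, parser, and field name are hypothetical):

```logql
# One query per bucket bound, e.g. RPCs faster than 100ms:
sum(count_over_time({job="rpc"} | logfmt | duration <= 100ms [5m]))

# ...and again for every other bound (<= 250ms, <= 500ms, <= 1s, ...),
# which is why five buckets end up taking a page.
sum(count_over_time({job="rpc"} | logfmt | duration <= 250ms [5m]))
```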
D: One thing you guys might think about: there's currently a bar chart alpha panel, and the beta of that will be released in Grafana 8. Since we can now treat Loki logs like metrics in Grafana, there might be some more possibilities there for doing what you're talking about a little bit less painfully, when Grafana 8 rolls out in a couple of months.
A: There are a few gaps that we haven't closed — like, you can't use $__range in the Loki data source; you can't force the step in Explore — there are a couple of things here that we need to button up to get that experience where it should be. But you're right. And — if anybody's watching this, any future people out there — Loki presents a Prometheus-compatible API, so you can add Loki as a Prometheus data source.
A: You just have to put /loki after the URL when you specify the URL to your Loki server. So, as you add the Prometheus data source, you just put /loki in there.
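In other words, assuming a Loki server on its default HTTP port of 3100, the URL entered for the Prometheus data source would look like:

```
http://localhost:3100/loki
```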
B: For what it's worth, this used to be the de facto way to run Loki — to get metrics out of Loki — before Grafana had better native Loki support. You could select Loki as a data source and do metric queries and stuff like that; this is how you would get metrics out of Loki back in the day.
A: Those are the ones that I've come — oh, like, basically, the $__range variable doesn't expand; that's the one I most often run up against. And I think, in Explore, sometimes I want to force the step, and you can't do that in Explore with Loki.
B
I
think
there's
also
been
some
kind
of
unrelated
there's
been
some
talk
about
mixing
loki
and
like
different
data
source
types
in
explorer
together,
because,
like
one
of
the
things
that
I
want
right
now
is
like
I'll
I'll
look
at
loki,
and
I
like
generate
metrics
with
some
query.
And
then
I
want
to
overlay
those
against
like
prometheus
right,
especially.
A: We're kind of out of time, because we screwed around for a long time before we started this, but I think I have a few more minutes if anyone has anything else we want to chat about. That's all I had. Thanks, Chance; thanks, Zach — appreciate you all joining and participating; it makes it more fun for us. Thanks, Diana and Owen, and thanks, Ed — you did a great job. You did a great job, Ed. Thanks, everyone; thanks, everybody.