From YouTube: Loki Community Call 2020-11-05
A: I put a few things on the agenda. I didn't have a ton that I was going to walk through and talk about, so if anyone here wants to add anything, feel free; there's a bullet at the bottom.

A: All right, just to start with an update on the 2.0 release. Generally things have been pretty smooth, but there are a handful of things that we've noticed; I put them down there under the 2.0.1 bug fixes. The goal is to have the 2.0.1 release out this week, hopefully today, but maybe tomorrow.

A: Most of these fixes have merged. I'll talk about one of them and see if anybody else has any opinions, but probably the important one there is that panic, which I believe only applies if you're upgrading to boltdb-shipper: there's a case where a querier looking for a method on the ingester tries to query it, that method isn't there, and it causes a panic.
A: ...whether the query was a metric query or a log query. In Grafana's case, Grafana will add limit= to every request, because it doesn't know whether it's a metric query or not, and for metric queries limit doesn't really apply. Limit says something like "only return a thousand lines, or two thousand lines", and that doesn't apply to metric queries. So now, after that change, Loki will honor that.
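(For readers following along, a rough illustration of the two request shapes involved. The endpoint and the limit parameter are Loki's standard query_range API; the selectors and values are made up, and the exact post-2.0.1 behavior is only as described above, so treat this as a sketch.)

```
# log query: limit caps the number of returned log lines
GET /loki/api/v1/query_range?query={job="nginx"}&limit=1000

# metric query: limit has no meaning here; Grafana still sends it,
# and per the discussion Loki now tolerates it on metric queries
GET /loki/api/v1/query_range?query=rate({job="nginx"}[5m])&limit=1000
```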
A: And then this compactor config one is tough. So the boltdb-shipper index type, which we made the default in 2.0 for the Docker image and the single-binary example config file, as well as for Helm and ksonnet, now requires a compactor, and we've seen a couple of cases where there are sort of two problems. One is if you start Loki without a compactor config in single-binary mode; that's mainly where this shows up, because we try to start the compactor in single-binary mode.

A: There should really only be one of them running at a time, and right now, when you run multiple single binaries, the table manager will run with each of them. That's generally not going to be a problem, but the compactor, the way it's configured today, is not going to run: it basically looks at the type of ring being used. If it's an in-memory ring in a single binary, it starts.

A: If it's not an in-memory ring in a single binary, it doesn't. So we're trying to figure out the easiest way to keep this easy for people getting started, right, like run the single binary and not have to know whether or not you should run the compactor or the table manager, but also, you know, not run multiple of them if you decide to run a few instances. There's probably a long-term solution here using the ring, or gossip, to communicate this information.
A: But in the short term it's a question of what's a sane default here: you know, run this stuff for everyone unless they scale horizontally, in which case they would have to make a change, probably something like that. That's sort of what I've been waiting on: trying to come up with an idea that I think is the best quick fix for this, so that people can still change the config, but it's not hidden.

A: That's one of the troubles now. The reason we want to make this required is that there are a lot of people upgrading an existing config file which doesn't have a compactor config, and we need to make this a little more obvious for them, and also it's important to run the compactor with boltdb-shipper; it's going to improve your performance a lot. So apologies for the rambling, but that's the piece we're obviously still not sure what to do with. Anybody?
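(As a reference for anyone hitting this while upgrading, a minimal sketch of what enabling the compactor looked like around the 2.0 release. The keys reflect the boltdb-shipper docs of that era, but the paths are illustrative and the exact schema may differ in your version, so check the documentation.)

```yaml
# illustrative single-binary snippet: run the compactor against the same
# shared store that boltdb-shipper writes its index files to
compactor:
  working_directory: /loki/compactor
  shared_store: filesystem
```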
C: I got a review from Sandy today on my performance improvement, so I think we might be able to get that into 2.0.1.

A: Well, we've got to get it merged, but we might include that too, because that would be a nice one for folks.
A: Following 2.0.1 will be 2.1.0, and the timeline for that is still a little up in the air, but we wanted it to be basically not more than a month after the 2.0 release, so that would put it probably two weeks out, maybe three. This release is targeted around some things that we did for 2.0 that were basically compromises to meet the release schedule deadline, versus solutions that would have taken a little more time but are better. So there are going to be some improvements there; we did upgrade the Go version, and we're starting to use milestones in GitHub to track this stuff.

A: So if you take a look, you can see the kind of stuff that I think we have tagged for 2.1. And then I wanted to mention that Grafana just started our financial Q4, which is what we do our planning around; our financial year runs from November 1st to January 31st. So internally, those are the three things that we're mostly focusing on.

A: Along the same line, I'd ask for folks to be a little more patient around issues and PR reviews and Slack response times, because we're all trying to recover a little from the work that we just did and enjoy some holidays and end-of-year stuff, hopefully as much as we can.
A: That'll be a big improvement. If possible we'll get that into 2.1, but if not we'll get it into a release shortly thereafter. And the other big feature, which is commonly requested and something we're going to start looking at, is being able to remove the out-of-order constraint that causes rejection of logs.

A: There are a couple of options here. There's the sort of easy option of just using more memory and doing in-memory sorting, and then there are more complicated options. The way Loki processes incoming data, we take a certain amount of it, compress it, and keep it in memory just to keep the memory footprint down, but that makes it a lot harder to go back and reinsert entries.

A: There's this trade-off between memory consumption and sort of ease of use. The chunk format itself already accommodates out-of-order data pretty well; it's just within the chunk that we have to sort this out, and maybe on the read path we also need to change the assumption that everything is expected to be in order. There are a couple of ways to approach this. I don't know that the solution will be in place by the end of January, but it should be shortly thereafter.
A: 15 minutes in, and that's all I had to cover. So, does anybody else have anything they want to chat about, any questions?
C: Yeah, I think it's a bit unrelated, but since we have Bartek and Kemal in the meeting, I wanted to tell you: I've seen that you guys have an issue for building a series API cache on the query frontend. I just wanted to let you know that this is something we've been looking at in Loki, and so far we haven't done it, because it's not as trivial as the other caches. The reason is that the series API returns you the list of series that match a given time range, but in the response itself it doesn't tell you at what time each series appears and disappears. So when you cache that, the only way to reuse the cache is if the original request overlaps entirely with what you have in the cache, compared to, you know, a query_range API where you receive a timestamp for each of the data points.
C: You don't receive a timestamp back on the series API, right? So I'm just letting you know about this, because we wasted a couple of attempts in Loki trying to do that. So if you guys are working on this, just make sure that you're aware of it. I'm not saying that it's not possible to do the cache; I think it is possible. I just don't think you can reuse what Cortex is doing.
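(For context, this is the shape of a series API response: just label sets, with no per-series time information, which is why a cached entry can only be reused when the new request is fully covered by the cached range. The label values here are made up.)

```json
{
  "status": "success",
  "data": [
    { "job": "nginx", "env": "prod" },
    { "job": "loki/querier", "env": "dev" }
  ]
}
```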
D: Yeah, thanks for letting us know. I think we thought about this, and there are a couple of arguments that still make it possible to cache those. One is that it doesn't have to be very accurate: even now, for a long time Grafana was not even using time ranges in the series call, so you were actually rendering all the potential series you ever had, unlimited.

D: You know, for your variables in Grafana, and people were fine with that. So if we would return maybe more than is accurate within some time range, I think this is still kind of okay for some setups. And another point is that for really large requests this is still useful to cache, because it will overlap fully within, for example, one day of cache. So yeah, this is definitely something that is limiting, and maybe we cannot reuse the same code or something, but yeah, we are looking into it.

C: Then something more related to Loki. I think we also have Ivana and David here today, so maybe we want to talk about the issue of extracted labels in Grafana, yeah.
A: ...and it returns the labels; in the API they're presented exactly the same as labels that were in the index. Let me show you what that creates for us.

A: I'll use the query-frontend logs, because those are kind of the most familiar to me. If I just take any arbitrary set of logs, I can see the labels, and if I want I can limit this, say, by hitting this plus sign, and I get only level=debug; it adds it to my query. However, of course, this is logfmt output, and I now get a whole bunch of other fields as well.
A: Yep, so "caller" is an extracted label, and if I click this filter, it adds it to the list, because the API has no way to know right now that it's not technically an indexed label, and then the query stops working. So, unfortunately, like we talked about a little, there's really no way for Grafana to know right now, because as far as the API response is concerned, it's exactly the same as a label that was indexed.
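(To make the failure mode concrete: with logfmt output, a field such as caller only exists after a parser stage, so the two queries below behave very differently. The selectors are invented for illustration.)

```logql
# works: caller is filtered after the logfmt parser extracts it
{job="query-frontend"} | logfmt | caller="retry.go:73"

# returns nothing: caller is not in the index, so using it as a
# stream selector matches no streams
{job="query-frontend", caller="retry.go:73"}
```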
E: And we actually talked with David about this as well, in regard to another issue which is much easier to solve, and that is the name of "parsed fields". Currently Grafana parses the log message, and we call these "parsed fields", which might be confusing, because now with Loki 2.0 you are able to parse fields directly in Loki. So we have decided to rename this.

E: We were thinking about "detected fields", and we were talking about exactly the same thing: if Loki could provide us information about which labels are real labels and which labels are basically parsed fields, we could, even in the UI, make it more clear which are which, and it would also solve this issue, because when you are filtering on such a label we could add it to the query in the correct format. So yeah.
A: This is what Ivana is talking about with parsed fields. In some ways this has been a source of confusion for users already; we see this a lot, where they assume Loki is responsible for generating this information here, which it's not, Grafana is doing it, and they want to be able to do aggregations and write that into queries. Even before Loki 2.0, people wanted to take this, because they could see that it was extracted, and use it to modify their query.

A: So I think what you're suggesting is that this could be renamed to something like "detected fields", to make it maybe more obvious that it's not parsed by Loki. But that's still tricky, right: I think the tricky part is knowing what's being done client side versus server side, because that generally affects how you can manipulate the query line itself. Loki can only really work around, or make changes to, things that were managed server side, whereas these fields here are going to be client side.

E: So another thing that maybe we could do is add a tooltip or something, just explaining that those are parsed on the Grafana side. That's a good point, to make it super clear for everyone that it's just what we were able to get on the front end.

A: I don't know; it might be arguably breaking if we released, say, 2.1 where the behavior of this changed, although I would in some ways say that it's perhaps not too late to make that kind of change. The problem would be that if you're using an older version of Grafana, you would lose any visibility into the extracted fields.
C: I just had an idea: maybe we could provide the list of keys that are original, from the label index, and the list of keys that are not original and are extracted, right? The keys themselves as strings, basically two arrays of strings that tell you which labels are from the index and which labels are the extracted ones.

C: The problem I have, and I think we talked about this, is that if we duplicate the extracted ones and then, you know, have another field with both of them, then the size of the response will double, because the data is basically the log line itself, extracted as key-value pairs. If we add another array of the extracted ones, then it's definitely going to double the size of...

A: ...the JSON document, yeah, exactly. And in the default case, where you pull everything out, you might pull a hundred fields out, or perhaps more, and then that would be doubled, because this behavior will stay the same: it will include both. We just need to decide what we would send back to differentiate the two. So from the UI side, if I want to think about what might...
B: Although this sounds like a server-side optimization here, if we do send the extracted part as a string, I would say it depends on how fast the client side can do this work. I'm not a Grafana front-end source code expert, but basically the trade-off is parsing a big JSON in JavaScript versus doing all the matching work, like extracting all the keys from one single string and then doing the matching yourself in the UI.

B: So I would say this cries out for a small POC from the front-end side to see what is faster, what works here. I mean, parsing big JSONs: Chrome, Firefox, all these browsers can do this fast today. But if they need to do all this matching themselves, depending on how big the logs are, it might be tricky to do on the front-end side. But again, I'm not the expert.

F: So is what you're suggesting to kind of figure out in the front end, okay, we're dealing with a logfmt expansion here, and to then look at the message itself and say, okay, these fields were all detected in the logfmt and hence they are sort of the parsed labels?

B: Exactly. Either you get this, let's say, as a normal key-value pair, and so you can pick them out yourself by reading the key-value pairs, or you can do the parsing yourself. Again, doing the parsing yourself in the front end, as far as I understand, will keep the size of the response messages as small as it is today; on the other side, it just moves the problem to the client side. So it depends on performance.

B: I'm not suggesting duplicating this; I'm only pointing out that if we avoid duplicating the streams and the response size, which I understand is a server-side optimization in the end, because we don't want to send double the size of the response, then this optimization pushes the problem to the client side, which needs to do the parsing itself.
C: Yeah, but this is going to be a list of keys per stream, so it's going to be very small. And I guess we can also see with David whether you'd prefer to have a map of objects instead of strings, so that, you know, once you parse the element, you can already quickly access the key and figure out whether this key is extracted or original. So yeah, I think... I mean, I don't...

A: I think this is maybe to your point, Perry, the usability from the client side. Especially if I'm making a really simple client and I basically get the stream, which is what it was, right, and then I get indexed labels and I get extracted labels as two separate lists, it's very, very easy to consume that as a client. It's truly easy, right? I don't need to do any matching or diffing or anything, I just get all of that, but it's extremely verbose as well. So I think those...

C: ...are two options. It could also be a map, right; it's even faster for the JavaScript to access a map object than an array, because every time they want to check a given key they just need to access the element, and the value could be either true or false, which tells you whether it's indexed or not.
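(A rough sketch of the two response shapes being debated; the field names indexedKeys, extractedKeys, and keyIsIndexed are invented here and do not exist in the Loki API.)

```
Option 1, two arrays of key names alongside the stream:
  { "stream": { "job": "nginx", "caller": "retry.go:73" },
    "indexedKeys": ["job"], "extractedKeys": ["caller"] }

Option 2, a map from key name to whether it is indexed:
  { "stream": { "job": "nginx", "caller": "retry.go:73" },
    "keyIsIndexed": { "job": true, "caller": false } }
```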
F: We would have, like, a key field that would hold the value of the original key, and then some other field like "parsed: true" or something, right? So it's going to be a lot bigger, sort of like an object, but at least it won't have the values, because this is only the metadata for the label keys.

C: A map, really, with the key being the key of the label, because I'm pretty sure you're going to have to build that map anyway at some point if we don't provide it like this, right? There's parsing that you're going to have to do if we don't do it on our side, and I think it's valuable for you to be able to access it directly: when you go through a line, you can directly access each label very quickly.

A: But this is definitely going to require some cooperation, I think, between our two teams here, to see what makes the most sense. As it is, though, I don't think there's any real practical way for the UI to tell them apart.

B: Yes, if you have a Loki data source and you get the indexed labels, which are currently called "log labels", and then you get the log fields there as a separate thing, how much sense does "parsed fields" make?
B: ...sense too, because I think if someone who uses logfmt, who reaches for the logfmt operator and is extracting this, then sees the three groups, they will ask themselves, even if we rename the parsed fields to something like "extracted" or "client-side extracted fields", or whatever you feel is more comfortable and better for the user experience, they will still ask: oh, these are three sources, how do I handle them? In general, the parsed fields and the logfmt fields will be basically the same thing.

C: I think we should find... I mean, I don't want to influence this too much, but I think having both the parsed fields and those new labels is maybe too much for the user, or redundant. So it's probably specific to when you do that; like, maybe change the UI so that it doesn't show the parsed fields in this specific case.

F: I mean, it could be as simple as just not showing the parsed fields that are already covered by the other fields, yeah.

C: Well, I like Ivana's point that maybe some users are not super knowledgeable about the language, and maybe they don't know about those new parsers that they can use, so I think it's still a good idea to have that, right? I think it was part of the experience before.

A: I think we want consistency in regard to the UX as much as we can, too, right? Even if you're switching between log data sources, I think it would probably be good if, between Elastic and Loki, the look and feel is similar. It's only going to help people once they, you know, finally realize that they should switch to Loki.

F: Perfect segue. Speaking of which, what are we going to do about the histogram at the top? Oh boy. Oh my god.
A: I mean, the long and short of it is that this is just a hard thing for Loki to do that is apparently much easier for everyone else to do, short of running a metric query on the request to specifically populate the histogram with every request, which is slow.

A: I think what we have now is a reasonable fix, but I still get a lot of feedback from people who don't understand why it works the way that it does.

A: I would almost honestly, though, and of course this creates a different problem, say we could get rid of it, and then, if people want to see a histogram of their logs, they write a histogram query, right? In this case they would write, you know, a rate query or a count_over_time query; they explicitly do that to graph the result.
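(For anyone unfamiliar, the kind of query being referred to: a LogQL range aggregation that graphs line counts or rates over time instead of relying on the automatic histogram. The selectors are made up.)

```logql
# lines per level, counted over 1-minute windows
sum by (level) (count_over_time({job="nginx"} | logfmt [1m]))

# or a per-second rate of matching lines
rate({job="nginx"} |= "error" [5m])
```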
A: I had another option; what I wanted to do at some point is have the Loki ruler be able to evaluate this histogram, so that it would be done in an efficient way, because the problem currently is the efficiency. For most people this is going to work; for someone who is sending...

A: Yeah, and right now you're generating it from the log content, because we don't return the information, right? So you're basically counting the lines and totally faking it, yeah. I don't know. I mean, who else would support me in saying Loki just doesn't do this, and that we teach people to write metric queries to view that information? So, who's with me?

F: So my problem is, every time this comes up and someone complains, like, "oh, I don't get a graph, I only see the histogram for five minutes", the next thing they say is, "and this is annoying because I wanted to use the histogram to zoom in to before 15 minutes", right?

G: ...alleviate that perception, I think, so...

F: Yeah, there is a Q4 project around logs navigation, to kind of, when you land at the bottom, you can, like, see...
G: Yeah, we're working on Grafana stories next; that's actually a big Q4 one.

G: Anyway, I also have two things that are actually quite in line with some...

G: ...UX. Can I clarify those? So one of...

A: ...them, yeah, we're getting a view of the...
C: So for the histogram I don't know, but for the first issue, about the response, what I suggest is that I'm going to open an issue proposing a new response type, shout out to David and Ivana, and then we can move forward with this. For the histogram, I think we need to think about this more; the real problem is that we're going to trigger very large queries when it's not required, like...

C: It's a bit complex. I mean, I...

C: Just, you know, just showing that is, like, it's not...
A
It
basically
requires
loading
every
chunk
and
counting
every
line
of
the
query,
and-
and
that's
you
know,
gigabytes
or
terabytes
of
data
depending
on
the
query
range,
yes,
and
so
that
can
take
20
30
seconds
to
process,
so
the
user
experience
is
going
to
be
bad
and
there's
no
way
to
optimize
this,
because
this
is
how
loki
is
built.
So
basically,
this
is
one
area
that
will
always
be
hard
for
loki.
A
My
sort
of
two
sense
here-
and
this
is
you
know,
two
years
of
using
loki-
it's
sort
of
changed.
How
I
you
know
as
I
was
just
I
don't
know.
If
it
was
david,
he
was
talking
to
you
about
it.
Like
we've,
always
we've
talked
about
paging,
but
in
in
those
two
years,
there's
only
been
a
handful
of
cases
where
I
actually
really
wanted
to
be
able
to
page
something-
and
it's
usually
when
I'm
just
totally
at
a
loss
for
what's
going
on
and
I'm
just
reading
thousands
of
log
lines.
A
That
is
not
the
common
use
case,
and
usually
what
I
do
is
I
write
the
query
and
then
I
just
immediately
start
removing
things
from
the
query
that
I
don't
care
about.
So
I
start
removing
log
lines
that
I
don't
care
about,
so
it's
kind
of
changed
my
behavior.
A
E
A
In
that
window,
or
if
a
if
there
was
an
easy
way
to
page
right,
if
there
was
an
easy
way
to
say
like
show
me
the
next
thousand
lines
or
2000
lines
or
whatever
I
I.
I
think
this
feels
like
what
might
be
the
next
best
move
for,
what's
capable
in
a
reasonable
sense,
with
loki
in
the
histogram
and
like
basically
using
what
we
have
today.
A
We
do
get
asked
a
fair
amount
about
like
paging,
because
I
think
a
lot
of
people,
that's
just
how
they're
used
to
viewing
their
logs.
So
I
think
there's
maybe
a
training
element
here.
That's
like
focus
more
on.
You
know
removing
logs
that
aren't
relevant
to
your
search,
that
combined
with
some
paging,
and
maybe
we
don't
need
to
immediately
address
the
histogram.
G: The conversation just reminded me of two things that I ran into while building up dashboards with Loki v2, and I just wanted to share them with you. I hope I'm sharing the right screen.

G: I think I need to stop... there you go. I just want to show you the problem that I ran into. First, when I was building up a metric query: when you're doing that, it's pretty hard to understand which log lines match when you're writing a metric query. And that's not only about providing autocomplete on some of the labels that are now available here.

G: ...take the filter expression...

C: For the labels, well, I just want to, if we can stop there, because...

C: ...great, I think it's a great idea, and I think we just figured out a way to do it now, because if, in the response, we were sending back this list that we were talking about, it would be easy for Grafana to then suggest not only the new stages available, but also the labels that it's now possible to filter with.

C: For that, David, how does that work?

F: Prometheus is working on one of those, like a language server, a language service server, yeah, and we want to maybe build a sort of adapter for this query field here.

C: Okay, so let's dig into this instead of doing another API; I'll create an issue for that too. If you can add it to this document, then we can do it.

G: All right, so just to continue. One of these is of course the label suggestions, so we fixed that already by creating a new issue. But the other thing is also to understand a little bit the data that's being returned. Sometimes, for example, I do stuff like, hey, duration, and I try to do a kind of "is bigger than 0.5", and then I don't get a result here, because I'm apparently not doing this correctly, and then I always need to go back to, okay...

G: ...what are the log lines that are currently being processed up to this point, and can I get some visibility on that, because that will help me improve my next decision on what to write here. So I just wanted to throw that out; it is something I hit when I was writing a lot of the metrics queries.

G: I found myself quite often just opening a panel on the side and copy-pasting at least the filter expressions to the right, to get a little bit more sense of the context I'm working in. So I don't need an immediate fix for that one directly, but I just want to throw it out, see if it is a common pattern and if it is something we might want to optimize in a future version.
G: So that's one thing. The other thing, and that is also a question that I have, is: let's say you have a dashboard like this one.
G: ...Chrome is struggling with the screen sharing a bit. But what you see here is that I've built a dashboard, and all those panels have a kind of repetition around the filter statement here. And I was wondering, when you have a big dashboard with a lot of graphs, is it basically requesting this data and parsing this data over and over again for all the panels? Because I'm basically reusing the data in a lot of panels in different ways, with different aggregations.

G: So Grafana does have the possibility to do query sharing between panels, so you can use the query results from one panel in another panel and then do some transformation on them, which is cool and will solve it a little bit. But I'm just wondering, and this is an open question, how Loki handles that, because we will see many more dashboards that probably have a very similar pattern, where they use a filter expression to zoom into a certain piece of the data set and then do all kinds of aggregation queries over it.

A: ...yeah, I mean, mostly caching is the... if you change the aggregation type, you change the query, so I don't know; it would be hard to infer similarity between discrete queries.

A: The result isn't going to be cacheable. I mean, it is, but it's not going to be reusable if it's a different query, because we're not taking the log lines prior to aggregating them as a metric and caching those. You know, we would either cache the chunks underlying the source of the request, or we would cache the result and reuse the result for the same query, or, if part of the same query had overlapping time windows, parts of the result are cached, but...
G: So I'm using, for example, the nginx dashboard, where, when I want to get analytics, I want to filter out a lot of the cruft, and there's a lot of cruft in the log lines that I'm just not interested in, like image assets in my case, and basically non-pages. And that filter query is repeated, of course, for all the panels, because I basically want to have all my analytics without those assets.

B: Yeah, one question: do you feel that, because you reuse the log query multiple times in different aggregations, this will become a performance issue for Loki? Is that your basic point here?

G: A cacheable result... and the chunks that are retrieved, they're cached, so that's great, but the subset that I need for all my panels is much smaller, and sure, it's probably much better to optimize for that smaller subset. But maybe this is a data point of one; I just want to throw it out there. I don't need it to be solved, but I just wanted to mention it.

C: My point of view on that is that there are way more other optimizations that we have to do first before this one, because this one is not low-hanging fruit. Why it's difficult is because we cannot just cache every log selector that you're going to run; if we do that, we're just going to blow up the cache with a ton of things.

B: It goes in another direction, sorry for interrupting you, but I think what we have today is a single work queue on the query frontend, which takes the full set of log lines plus aggregations; it would need to be split into something like a worker for the aggregation and a worker for the underlying queries, and then having those cached independently. But this blows up...
B
The
architecture
there
for
performance
issues
that
we
probably
have
in
a
data
point
one
as
word
is
pointing,
but
we
should
see
if
we
have
a
data
point
two
until
data
point
x
and
say
here:
it
is
because
it
blows
up
the
architecture
layer.
That
sounds
a
little
bit
like
like
in
the
monarch
paper
by
google.
B
They
have
this
kind
of,
let's
say
performance
issue:
that's
why
they
solved
this
issue
in
a
two
two
level
working
queue,
but
we
need
to
go
into
the
scale
and
we
need
to
have
this
issue
to
implement
that.
In
my
opinion,
I
I
think
you
will
benefit
by
the
result
cache
a
lot
currently,
even
if
it
is
the
big
query
and
its
query
is
doing
or
redoing
some
part
of
the
work.
B
However,
your
dashboards
usually
cache
until
let's
say
they,
they
hit
the
result
cache
until
what
they
have
seen
to
today
and
then
the
newer
one
minute
or
the
newer
five
minute
slot
needs
to
be
retrieved
end
times
depending
on
which
is,
we
will
see
if
we
really
have
their
performance
issue
there
with
loki,
I
mean
yes,
we.
C
Are
demanding
yeah?
We
have
this
cache
already
this
one,
the
one
where
you
know
we
when
the
time
range
slide,
we
request
only
what's
what's
you
know,
we
need
a
new
part.
We
don't
request
every
time
the
wall
breath
two
or
six
hour.
For
instance,
we
just
ask
for
the
last
hour
or
the
last
about
five
minutes,
depending
on
the
refresh
time.
C: So yeah, on the regex front, I wanted to finish on this: we have a...

C: We have another idea to make regex faster, because I think you may be feeling the pain with regex, since regexes are very slow in Go, but we have come up with some other ideas recently and we're going to look into this.

C: Yeah, I mean, like I said, the answer to that is I think we can do it, but there's a trade-off that you have to make. You have to fall into the same pattern that we currently have, which is that every aggregation is a range aggregation; everything starts as a range aggregation.

C: The range aggregation could be "take the first one" if you want, and that would do your job, right? But we cannot just graph everything, because compared to Prometheus, there can be a million log lines at a single timestamp, even if the timestamp is nanosecond-precise, right? So what you're asking is not going to work for every type of logs, and we need to make it work for every type of logs.

A: Look, I mean, if I had a zillion log lines, right, yeah, it would be a problem, but if I had, like, a hundred log lines, or a thousand log lines, you know... I mean, I thought within the limits of the query...
C: I mean, my concern is that there's nothing currently in Loki, no aggregation, that allows you to just pass a single sample through; it's always a bunch of them being reduced into a step. So I don't know if I'm ready to make a change like that yet based on this need; I think this need could be covered by maybe first_over_time or last_over_time.
A: ...wanted a long time ago, perhaps, too, right, where I wrote the step thing that was kind of crappy, but implementing something similar to the step, to just take...
C: Yeah, the only problem with those two solutions, first and max, is that you need to make sure that the data is aligned, right, because if you're totally not aligned, you may actually not get the result that you want.

C: What we can do is a very quick PR, because this is a very simple addition, first_over_time and last_over_time; we're going to do a quick PR, and then afterwards you can play with it and see if it really solves the issue.
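(A hedged sketch of what such range aggregations could look like once that PR exists; at the time of this call these functions were not in LogQL yet, and the selector and unwrapped label below are invented.)

```logql
# take only the first (or last) unwrapped sample in each window,
# instead of aggregating every matching line
first_over_time({job="myapp"} | logfmt | unwrap duration [1m])
last_over_time({job="myapp"} | logfmt | unwrap duration [1m])
```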
A: Oh, sorry, roger that. The time of this meeting is pegged to UTC, so we actually started an hour ago.

A: Well, right now it's 8 a.m. on the east coast in the US, so it's also currently very hostile for anyone in the US time zones; it would be like 4 a.m. if you're in California. So I don't know, it's always hard to schedule these meetings, but I think if we push it out another couple of hours, we can maybe close that gap a little. All right, we're over time.