From YouTube: Loki Community Call 2021 01 07
A
Ritchie, yes, yes, already, yeah, it's recording, so go ahead.
A
Google gives you, or YouTube gives you, three things to choose from, and I have too many left thumbs, so I can't make my own preview images. So I usually just pick the ones which have faces, ideally faces which smile, and then I try to not always pick the same faces. That's basically my heuristic for choosing one of those three images.
B
Don't smile, that's basically the message, yeah. Anybody, feel free to add to this agenda in the community call, anything you want to talk about, anything and everything. I don't really have a ton of stuff, and in a lot of ways I think after 2.0 we kind of chilled out a little bit, and there wasn't a whole lot that happened in the last month, a lot of PTO as well too. But 2.1 snuck in there, overdue.
B
So I called out a thanks to Torsten and Reinhard for their incredible work to make that possible. We were approached some time ago, and I know Scott Rigsby was another name in there that was very, very helpful, both in convincing me that it makes sense to centralize our Grafana Helm charts in one repository and in basically doing all of the hard work to make that happen.
B
So,
look
to
those
two
new
urls
for
anything
helm,
related
and
including
reinhardt
has
contributed
a
microservices
version
of
the
loki
helm
chart.
I
know
that's
often
been
asked
for
asked
for
a
lot.
In
fact.
So
that's
a
very,
very
nice
contribution
so
check
that
out
too
2.1
itself
was
mainly
a
lot
of
performance.
B
I
would
say
bug
fixes
in
the
2.0
release.
We
added
probably
a
couple
regressions,
there's
a
lot
of
rewrite
to
the
query
code,
both
performance
and
in
some
bugs,
and
I
think
that
that
covers
pretty
much
anything
big,
that
we
saw
some
kind
of
upgrade
paths
that
led
to
panics
that
kind
of
stuff
was
fixed.
So
that's
good
news.
B
But I would say expect that closer to the end of January. Another thing that's been merged, and we have to kind of deploy it: there's a big rewrite of Promtail, thanks to Cyril and Karsten for doing that, to sort of change how we move stuff around to enable, I don't have the PR there, but multi-line support in Promtail. Probably one of the.
B
I can't not be distracted. I mean, I know you weren't, that's not what you're asking.
B
Similarly, if you're a cloud user, we did increase the line size limit, I think to 64 KB. In fact, I know it's 64 KB. I should actually go look and see what the defaults are in our repo too, and whether we should increase those. With the ability to append logs to each other, we kind of expect people will send longer logs. There are some trade-offs that you make with allowing longer log lines in Loki, but 64 KB still seems pretty sane.
B
The general concerns would be memory usage at query time for queries that match lots of streams. So if your label selectors don't narrow down the streams, and you have many, many of them, hundreds or thousands, you increase the likelihood that you could run into memory problems if you write really big queries.
B
So that's the trade-off with the line size and the multi-line. You know we've been working around that with show context, and for large stack traces, you know, things with big frameworks like Java apps, that's not the best experience. So we'll get that out in 2.2 and look forward to some feedback on how that works.
B
We write those to disk and, you know, follow similar patterns to what Prometheus uses, where we write to disk and periodically checkpoint in order to keep the replay times a little bit quicker. The general idea is: currently, if you crash an ingester, if you, you know, out-of-memory kill it, any of the logs that it had in memory would be lost, and we typically rely on the replication factor in the cluster to defend against that.
B
However,
we
all
know
cascading
failures
happen.
You
know,
any
number
of
things
can
happen
so
right
ahead.
Log
adds
some
resiliency
there,
so
that,
if
you
do
crash
a
process,
it
should
restart
and
replay
the
wall
and
recover
the
information
that
was
in
memory,
and
this
will
let
us
play
around
a
little
bit
with
how
much
information
we
keep
in
memory,
but
there's
sort
of
pros
and
cons
to
that
too.
In
terms
of
the
chunks,
if
the
time
that
they
span
is
too
long,
it
makes
querying
a
little
bit
trickier.
D
Yeah, makes sense, thank you. I was mainly asking because, you know, this is the work that the TSDB already has, so it's amazing that you got inspired by it, but yeah, you had to re-implement that for your format. It's funny, because even Conprof is using exactly the TSDB, since there's already a WAL there in this sense. So no, it's good that it's kind of in the same area of patterns and designs. That sounds good, yep. I hope you didn't have to re-implement too much stuff.
B
Yeah, Owen actually did most of the work on that. For the most part, I know he uses the Cortex WAL work, which is based on the Prometheus WAL work, so it definitely does follow a lot of the same patterns. I suspect our implementation was, you know, fairly different, because we're pulling out of the head block model that was basically the Cortex chunks model, and I think the WAL was only ever implemented in Cortex for TSDB.
D
Nice, yeah. What I'm saying is that, you know, it's a good direction, but since, for example, Conprof is using that, and abusing it, obviously, yeah, there is a need for a WAL, or essentially a TSDB, for bigger chunks. So this is the direction where Prometheus, Thanos, Cortex, Conprof, these DBs could potentially grow. But yeah, good stuff, thanks.
B
Not necessarily whether they're opened or closed. There's another discussion that I want to have about that; I think my philosophy is changing too about how we handle open and closed issues. But primarily, something being closed is not intended to say we won't work on it; it's more intended to say we're not working on it right now, just to maybe add some sanity around the number of open issues.
B
However,
like
I
said
different
discussion,
but
in
terms
of
community
feedback,
you
know
we
look
at
issues,
we
look
at.
You
know
recent
activity
and
you
know
the
thumbs
up.
You
can
sort
of
sort
by
that
and
github
on
the
original
issues,
number
of
comments,
etc
to
try
to
figure
out.
You
know
what
the
community
is
very
interested
in,
but
it
would
be
nice
to
sort
of
just
I
just
kind
of
made
a
general
form
now.
B
Github
issue
isn't
great
for
this,
because
you
know
it's
single
threaded
and
it
sort
of
seems
you
know
nice
to
have
some
conversations.
So
my
only
real
guidance
here
is:
if,
if
there
is
a
topic
that
you
have,
you
know
input
on
to
just
make
a
separate
issue
and
have
that
comment
in
that
separate
thread
or
if
there's
an
existing
issue
talk
about
it.
B
There
feel
free
to
link
it
back
in
that
initial
in
the
past,
when
we
try
to
do
things
like
design,
doc
reviews
or
more
elaborate
things
with
conversation,
it
gets
really
hard
to
follow
when
you
get
multiple
sort
of
sort
of
threads
in
line
there.
So,
but
you
know
additionally,
if
anybody
has
you
know
other
ideas,
we've
considered
sort
of
surveys
and
things,
but
I
guess
the
bigger
part
of
this
is
the
kind
of
the
outreach.
B
So
you
know
feel
free
to
share
that
link
and
we'll
gather
some
feedback
so
far,
so
good
everybody
that
has
commented
so
far.
It's
kind
of
interesting
to
see,
but
I
I
you
know,
I
guess
I
I
don't
really
want
to
have
to
caveat
it,
but
I
can't
guarantee
you
know
what
we
work
on,
but
I
definitely
want
the
feedback
so
that
we
can,
you
know,
kind
of
evaluate.
I
picked
a
common
topic
to
start
the
list
off
because
it's
a
it's
been
one
of
the
older
issues.
B
I've
been
reluctant
to
do
that,
because
I've
I've
actually
had
a
job
in
the
past,
maintaining
demian
packages,
and
I
know
that
it's
not
an
insignificant
amount
of
work
and
there's
you
know,
probably
I
don't
know
four
or
five
at
least
very
commonly
used
package
managers
now,
there's
probably
a
dozen
less
commonly
used.
So
I'm
not
sure,
like
it's
just
been
hard
for
me
to
prioritize
that,
because
there's
a
lot
of
other
fun
stuff,
we
want
to
work
on
a
lot
of
other
neat
features.
A
A lot of those are in the traditional space, and I see Bartek nodding; that might be because he has the same experience at Red Hat, it seems. So don't underestimate the distribution vector through classic packaging. As a Debian developer myself, I hate Debian packaging; it's ancient and it's broken, and how Go does static stuff versus how Debian is built around dynamic stuff makes a lot of things super hard, but it's most likely still worthwhile.
B
Yeah, I guess, interestingly, I haven't heard, well, I don't know, we definitely hear feedback from people that they want it, right. I don't know how to measure that against how requested it is, or how much of a burden it is. I know I've written systemd unit files to run Loki on Raspberry Pis, and I can't say that, you know, I love it, but it's like a couple minutes of Googling and I'm off to the races. But yeah, I'm not sure.
B
I
suspect
we
will
get
there.
I've
just
been
dragging
my
heels
on
it.
I
think,
is
what
I'm
saying
so
if
you
know
that
ends
up
being
the
most
popular
issue
on
that
list,
or
you
know,
we
continue
to
get
a
lot
of
input
for
it.
I'm
sure
we'll
end
up
there,
all
right,
somebody's
added
some
topic
suggestions.
Do
you
want
to
feel
free
to
to
that?
What
you
ward,
I
wasn't
sure
who
wpk
is.
B
You're first, go for it, Ward, and then, you know, certainly if anybody's here that thinks there's something, just feel free to add it. Yeah, well, Ward is talking.
C
All right, yeah, this is actually something I was discussing initially together with Cyril. I was building out a demo, maybe for our next video, and it was using, in this case, cryptocurrency ticker events.
C
We have the thing that we do aggregations when we want to chart something on a graph, and that aggregation requires an interval to be specified. The issue that I ran into is that there were two things. The one is the way Grafana handles the results of Loki, that it doesn't show kind of a complete graph.
C
So that's one thing. The other thing is that I want to have maximal resolution, zoom-in capabilities, and essentially not the max value, but just give me the value that is actually in the log file, in the event. Cyril was already working in the past on a possibility to have kind of a last value over time or something like that, and yeah, I was just wondering if this is something that is interesting for more folks here on the call.
B
So you're not alone here; I definitely want to have unwrap, in the pipeline stage, output a series with the same resolution. But there are problems, though. Specifically, we don't know what you'd do to deal with cardinality, right? Unwrap is not a function, it's a pipeline stage, so it doesn't have any way to do aggregation by label values. So if you have a json or logfmt stage that's creating tens or hundreds of labels, and you unwrap one of those, you'll end up with however many series there were, for tens or hundreds of labels with tens or hundreds of values, or whatever. So somehow we have to add the ability to aggregate that a little bit, and that doesn't always make sense either, right? Because a sum might work, or, where you're entertaining something like showing discrete values, that would have to error if, for the aggregation of certain labels, their values are different; then you can't just combine them into one stream.
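The cardinality problem being described here is what a range aggregation with an explicit grouping is for: wrapping the unwrap in an aggregation collapses the many extracted-label series down to the labels you name. A minimal sketch, with a made-up selector and label names:

```logql
# Aggregate the unwrapped values per cluster, so that tens or hundreds
# of labels created by the json parser collapse into one series per cluster.
sum by (cluster) (
  sum_over_time({app="ticker"} | json | unwrap price [1m])
)
```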
B
I
do
think
it's
possible
there's
one
other
sort
of
thing
here
that
I
I
think
will
be
okay,
but
we
have
a
you
know:
a
prometheus
compatible
endpoint
from
from
metrics,
which
this
would
be
prometheus
technically
doesn't
support
nanosecond
resolution.
But
loki
does
so.
I
believe,
because
the
format
is
seconds
dot
decimal,
that
we
could
just
add
the
precision
at
the
end
and
it
would
be
okay,
but
if
we
can't
do
that,
then
we
you
basically
need
a
function
that
down
samples
your
data
to
it
most
millisecond
resolution.
B
So
we
have
to
kind
of
keep
that
in
mind
too,
but
I
think
that
that
will
be.
I
think
that
will
just
work
because
we'll
just
add
more
precision
to
the
end
of
the
decimal
so
that
we
can
output
the
data
at
the
resolution
that
exists
in
loki.
B
For that step, I think that's the correct behavior, and I think what we want is for Grafana to give us the option, like they do for null values, to just connect the lines; they have that option for null values, to connect the line. So I think maybe we just need to open an issue, and I can understand that, so that that graph can be continuous.
G
Another option is to use last_over_time to implement this. Well, it's already implemented, but to use that, and then you can use a range that is, let's say, one minute, so you ensure that every minute you have at least one sample, and then you can have a step that is, you know, as big or as small as you want, and you kind of have the resolution that you want.
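A sketch of the pattern being suggested: a last-over-time aggregation with a fixed range, leaving the dashboard step to control the display resolution. Note that at the time of this call the function was still in a PR rather than released; the selector and labels here are illustrative:

```logql
# One sample per evaluation step, carrying the most recent value seen
# in the preceding minute; the dashboard's step controls the resolution.
last_over_time({app="ticker"} | json | unwrap price [1m]) by (symbol)
```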
B
I'm fighting the fight for unwrap still, to be able to have full resolution, but next I need to build a doc so that we can actually discuss those complexities and see what we're going to do. Yeah, we want to solve that. There was a.
B
Where
did
I
see?
Oh,
it
was
in
that
the
the
threat
of
community
requests.
Somebody
wanted
the
ability
to
use
log
ql
to
sort
of
strip
labels
out
in
line.
That's
potentially
another
solution
for
this
problem
too,
like
if
the.
If
the
query
language
allows
you
to
just
remove
labels,
then
using
unwrap
is
either
you
know,
sort
of
allow
list
or
deny
list
those
labels.
G
Should we still merge or finish this PR with last and first? Because I think it has a nice use case, yeah.
B
I
I
think
so
I
I
don't
know
if
there's
any
risk
of
adding
it.
I
guess
you
know
adding
complexity
to
the
language.
I
suppose
if
it's
not
used,
but
I
I
feel
like
that's,
I
don't
know.
I
think
we
should.
G
It will solve this problem, because if you do a max over one minute, and you have, like, ten items in that period, then you might not get the last value; you might get, you know, the first one, or one in the middle. So yeah, max is kind of helping, but not really solving the problem.
G
I think last-over-time and first-over-time kind of solve the problem, and it's also nice because it could technically later help for improving performance, because as soon as we found one item we could skip through the rest of the items and move to the next step.
C
Yeah, well, I also see that D has put an issue there, a question, a topic there, so.
G
Feel free to switch.
C
The next step is everybody chooses a good one, where you use that one.
B
You want to talk about this PR? I'm excited.
I
Yeah, cool. So, first-time caller, first-time contributor. I ran across the need for something like this recently, so I thought I'd just package it together; hopefully it will be useful. I think all the details are in the PR if anybody wants to look at it more closely, but effectively, this is introducing group-by into LogQL, for those of you familiar with SQL. I've chosen the term dedupe, which could be helpful or hurtful depending on, you know, its relationship with the Grafana Explore view's dedupe, because we have something related, but not exactly the same, in there. All I really wanted to do was reduce a list of logs down by a particular label, or a set of labels, or the inverse of that, and so I can maybe give a demo, if that would be illustrative.
I
Okay, cool. That should be fine. I don't think there'll be anything too sensitive in here; there are no customer names or anything.
I
So this is our alerting dashboard, to show some active alerts, and what I've got over here is an annotation that shows when a particular deployment occurred.
I
Now, we run a CI/CD process and we've got multiple clusters running, and so typically the same change will be applied to each cluster. So the log data comes through here in a way that's duplicated. I didn't really want to filter; you could force the operator to pick a cluster here, and then it would just go down to one line when that deployment happened, but that didn't really feel like the right thing.
I
I really just wanted to group by that cluster and just return one line entry, so just reduce the cardinality of that line by that one label. The query for that is relatively straightforward, with a bit of line formatting. What I really want is something like, you know, dedupe by cluster, and that's the syntax that I've written.
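Putting the demo into query form, the proposed syntax would look roughly like this. Only the `dedupe by` clause is the syntax from the PR under discussion, and it was still a proposal at the time of this call; the selector and filter are made up for illustration:

```logql
# Return one log line per unique value of the cluster label,
# instead of one duplicate deployment line per cluster.
{job="cicd"} |= "deployed" | dedupe by (cluster)
```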
B
So that would be my vote, but I certainly think we should have that conversation a little. That's great, though; we've had similar conversations for, you know, this kind of idea before, so awesome. Thank you for throwing together a PR for that; that's fantastic. Sure thing.
I
Yeah, so it's not exactly like group-by, in the sense that you can't do things like count-by or count star, you know, to see how many lines were actually evaluated there. That is something that Cyril suggested in the PR, and I think that would be grand, you know, in the Grafana UI, when you go to the Explore view. Maybe let me just get that going as well, yeah.
I
So initially I thought of trying to implement the Prometheus group function in here, but the thing is, group will then change all of the values to one, from what I understand, and that's not exactly what I wanted. I didn't want to change any values of the data; I wanted to just return a bunch of lines. This isn't an aggregation, so yeah, I need it.
I
Yeah, for the annotations view, you can't return an aggregation; it has to be a list of labels, a list of log lines, sorry, yeah. So when you go into the Explore view and you do something like this, you get this dedupe feature over here in Grafana Explore, and when you dedupe, let's say by numbers, you see you get this count here, yep. So what Cyril is suggesting, and correct me if I'm wrong here, please, is that we should be able to return these counts as part of that dedupe-by as well, so making available, I don't know, a meta label or something, you know. You've got the double-underscore error label in certain cases, you've got double-underscore name, so perhaps something like a double-underscore dedupe count, perhaps just as an emitted label attached to each log line that gets returned, which is slightly tricky. I'll try to implement that and see if I can get that working.
G
Yep, no, I think it's good, it's a good idea. My only concern is that it seems there are two use cases in a single syntax: one is reducing labels to use as an annotation, and the other one is really deduping lines.
G
I wonder if the first one, the annotation, could not be solved by a metric query, if Grafana would support that.
G
If you do a metric query, then you could get just a label; you could reduce labels, you know. If you do a count by cluster, then you will get only the cluster label.
I
Well, yeah, in fact, I want to remove that cluster label from being evaluated when returning those log lines, so it will return one line. In fact, let me get my local Loki running and then I can just demo it directly.
G
Doubling down also, it might be something difficult, like a burden at some point, to scale.
I
Okay, this is my local one. I'm just trying to remember which, I think it was this stream here. Let's go back a few days.
I
Sorry, it's old; it's like hash.
B
Also, I wanted to tip my hat, because you have another PR up for supporting hash-based comments, and I didn't know that you could support double-slash-based comments in the syntax either. So both of those are actually quite handy.
I
Yeah,
I
think
that
one's
ready,
yeah
it's
just
interesting,
because
the
syntax
highlighting
in
grafina
as
you
can
see,
handles.
I
Without, as you'd expect; so, like, you do withouts, and then I'll have to do a, no, there would be another use case where I could do this. Let's say without.
G
Yeah, no, I think it's a great use case, especially for the info here. I think if you have multiple error messages and one is really spammy, that could be very useful. But yeah, I think it maybe requires sending back the number, so that maybe Grafana can, you know, replace this dedupe using this feature, or they can use it to show this number, yeah, because.
B
Another
question
that
I
think
I
have
is:
would
we
want
the
syntax
here
to
be?
B
This
is
sort
of
prom
ql
asks
with
buy
and
print,
or
should
it
be
more
of
the
log
ql
filter
type
where
we
could
actually
do
matchers
right
dedupe
by
component
equals
daemon
or
you
know
where
we
have
a
regex
or
equality
matcher
on
the
dedupe
yeah
that'd
be
interesting
too,
because
that's
trying
to
think
I
feel
like.
B
I
had
a
use
case
for
that,
where
I
want
to
remove
duplicates
where
the
log
level
was
info
or
something
or
debug
or
you
know,
I
don't
want
to
dedupe
everything
by
one
label,
but
only
if
the
label
value
was
a
certain
thing.
I
just
consider
those
all
duplicates
and
strip
them
out,
because
I
don't
care.
G
Can
you
can
you
combine
multiple
label?
K
I
just
put
in
the
comments,
but
this
actually
looks
very
close
to
sequel's,
distinct
on
rather
than
calling
it
or
group
yeah
on
doesn't
really
require
a
group
by
statement
at
all.
You
can
just
run
distinct
on
and
you
can
provide
multiple
fields
and
it
will
literally
just
return
essentially
the
first.
E
K
Matches
that
criteria,
I
think
naming
that
would
have
more
similarity
to
all
the
sql
naming
and
remove
some
of
the
collision
of
the
naming
with
the
other
sort
of
concepts.
So.
C
Yeah,
I
think,
because
I
was
initially
confused
between
what's
the
difference
between
summing
and
doing
this
thing
done
in
this
case
yeah
so
reduce.
Would
that
would
definitely
help
me
understand
this
better.
Another
question
that
I
had
was
around:
can
you
also
do
distinct
on
the
original
lock
line
because
they
were
evaluating
labels
and
you
showed
eddie
deduplication
that
can
do
in
ui.
G
Yeah, maybe we could support distinct without anything, and then you could do it on the line, right?
I
So that's great; that feels closer to a solution for me, like calling this distinct, because I think that's a lot more clear, and then it doesn't clash with this terminology here. I was thinking of expanding it, so it's by right now, but I was thinking of also adding dedupe exact, dedupe numbers, you know, and then mimicking this functionality, because these are very simple.
C
Yeah
and
if
you
throw
out
that
metadata
about
what
the
result
was
of
the
d-dupe,
then
this
feature
can
be
accelerated
without
the
limit
of
doing
owning
that
on
thousand
lines.
C
So
for
display
in
the
ui
right,
so
you
could
dedupe
for
a
set
of
thousand
lines.
Let's
say.
G
Oh, I guess they will actually use the new feature if you implement all of them. I think it's better that we use only the server side, right, because the client side has a big shortcoming: it will do it only on the result. While on the server side, you can do it until it has a thousand unique label sets.
G
So
does
it
modify
anything
from
the
the
actual
log
line
that
is
being
shown
or
not
at
all
enough?
No,
no.
I
It's
the
the
syntax
is
very,
very
simple:
it
just
looks
at
it
gets
all
the
labels
and
then
hashes
them
and
then
anytime.
It
sees
another
line
that
has
labels
that
match
that
hash.
It
just
disregards
that
line,
so
it
will
just
return
the
first
line
for
each
unique
label,
hash.
G
Yeah
so
yeah!
No,
I
I
personally,
I
love
this
feature
I
just
like.
G
Maybe
we
need
a
to
start
a
design
doc
to
put
out
everything
that
we
just
said
and
agree
and
everything,
because
the
demo
really
you
know,
make
it
obvious
that
we
need
this,
but
I
still
think
maybe
we
need
to-
or
maybe
just
in
the
pr
instead
of
doing
a
design
doc,
just
in
the
pr
to
write
the
plan,
because
I
feel
like
it's
going
to
be
super
useful,
but
maybe
getting
review
from
everyone
on
the
design
and
get
an
agreement.
It
is
a
good,
a
good
idea.
G
Yeah, no, I'm saying that if you do distinct and you don't provide any label list, maybe it could do it on the log line itself.
B
The biggest one is non-structured log lines that someone wants to dedupe by something, right? I guess the question is, under the hood, how would we do it? Would we just do a bytes-contains? You know, I mean, can we do that quickly, right, to avoid having to regex-parse something to dedupe? That is the advantage of being able to dedupe the whole line.
B
Yeah, I'll have to think about that a little bit. That's awesome, Danny, thanks so much for.
B
Cool. Ward, I think I'll ask: anybody, any other feedback on that, or anything, before we move forward? Well, Ward, you want to talk about the annotations?
C
Yeah
sure
it
is
something
we
have
been
discussing
internally
earlier
this
week,
so
for
the
annotations
in
grafana,
you
can
write
an
expression,
locale
expression
that
does
a
screen,
a
stream
selector,
maybe
a
filter
expression,
but
now
with
look
ql
v2,
where
you
have
label
extraction
there,
there's
a
lot
of
folks,
including
me
that
want
to
also
do
that
on
the
extract
labels.
That,
for
example,
give
me
all.
C
Let's
say
you
have
high
cardinality
values
that
you
don't
want
to
promote
to
a
standard
label
but
extract
it
at
runtime,
then,
let's
say
cluster
name
or
file
path
or
customer
id.
You
still
want
to
be
able
to
do
that.
Have
those
in
a
drop
down
as
part
of
sorry,
that's
not
annotations
template
variable.
I
believe
you
for
the
confusion.
C
So
that's
one
one
thing
question
I
had
the
other
thing
was
annotations
and
where
you
use
the
results
as
an
annotation
in
grafana
graph,
the
problem
is,
if
you
have
a
lot
of
annotations
for
a
certain
time
period,
there's
no
limit
in
the
amount
of
annotations
that
are
shown.
So
the
result
is
that
the
graffani
y
slows
down
quite
a
bit
because
of
that
and
yeah.
It
is
not
an
easy
problem
to
solve.
C
I
think,
because
you
can,
of
course
do
a
hard
limit
on,
let's
say:
let's
say
we
have
a
operator
that
says
a
limit
hundred
on
a
local
expression.
Then
you
don't
get
the
full
result,
but
the
alternative
is
or
showing
everything
or
maybe
some
other
thing
that
we
were
thinking
about
is
showing
it
dynamically.
C
Based
on
the
resolution
of
the
graph
that
you're
looking
at
and
yeah
two
usability
issues
that
are
related
to
graffana
and
loki
used
in
together,
so
yeah
just
want
to
see
if
there
are
some
other
folks
that
running
in
the
same
problems
and
what
could
be
a
nice
solution
for
that
that
you
are
seeing.
B
Yes, well, I had a different use case for this, and one suggested solution here is this idea of only returning one log line out of every, or one per, time interval. So a little bit along the lines of, you know, last by time or first by time or max by time or whatever, but instead of being a metric, just have that operate on the log line.
B
So
I
guess
max
doesn't
make
a
lot
of
sense
here,
but
if
you
had
last
at
time
or
first
at
time
that
way,
you
could
provide
a
range
to
these
kinds
of
queries
and
use
the
you
know
variable
substitution
in
grafana.
So
as
the
graph
zooms
out,
it
only
shows
you
know
one
entry
for
every
second
or
five
seconds
or
30
seconds,
etc.
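A sketch of what such a query might look like. This is hypothetical syntax: a first-at-time operator on log lines did not exist in LogQL at the time of this call; it is the idea being floated, combined with Grafana's `$__interval` variable substitution mentioned above:

```logql
# Hypothetical: return at most one log line per $__interval window,
# so annotations thin out automatically as the dashboard zooms out.
{job="deploys"} |= "restarted" | first_at_time [$__interval]
```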
B
It's
it's
it's
a
little
bit
weird
right,
because
you're
just
grabbing
one
line
out
of
a
list,
so
you
know
the
annotations
you're
showing
are
sort
of
not
determinate
and
there's
another
problem
here
too,
which
is
as
you
zoom
in
do
you
do
you
know
if
you've
gotten
to
a
point
where
you're
seeing
all
of
them?
B
You
know
we
wouldn't
have
or
don't
have
anything
right
now
in
the
api
that
could
communicate
an
answer
here
about
whether
or
not
the
the
result
set
was,
I
guess,
truncated,
but
it
does
provide
the
use
case
I
had
for
this
was
I'm
storing
gps
coordinates
in
loki,
so
I
have
kind
of
a
metric
and
when
I
display
them
on
a
map,
as
I
zoom
the
map
out,
I
don't
need
to
see
one
second
resolution.
B
I
just
need
to
see
one
out
of
every
10
seconds
or
30
seconds
or
minute
that
use
case
it
doesn't
matter
if
I
just
sort
of
you
know
strip
stuff
out.
Generally
speaking,
I
suppose
I
could
miss
anomalies
if
I
zoom
out
far
enough
right,
like
somebody
drove
away
of
course,
but
if
it's
that
okay,
I
think
that's
what
I
would
suggest
as
a
solution
for
this.
B
Yeah, the template variable stuff, I actually haven't dug into that, but right, the Loki label API is very simple: you ask for a list of labels in a time range and it gives you them, or rather the names, and for a label name you can ask for all the values. But I don't know if the label_values function that exists for Prometheus is working for Loki; that's a Grafana function, though, and I think that would work against the existing API, yeah.
G
But Danny, you just showed us an example of that. Now, it wasn't using label_values, I think, but you were using a result of a Loki query in a dropdown.
B
Think of it this way, right: say I tracked, you know, what we do, right, Kubernetes deployment events or something, and I want to look at, you know, I zoom my graph out to like 30 days, or seven days. If we restarted a hundred times a day, or a thousand times a day, right, the graph would just be a sea of annotations.
B
You
know
on
a
30-day
zoom.
Is
that
annotation
terribly
useful,
like
meh?
Maybe
right,
but
basically
down-sampling
it
like
we
do
with
you
know.
The
queries
themselves
is
is
one
way
to
make
that
more
useful.
So
then,
if
you
saw
some
anomaly
in
your
graph,
as
you
start
zooming
in
more
and
more
annotations
would
appear
until
you
get
to
a
resolution
where
all
of
them
appear.
B
The
challenge
they're
right
is
like
you
know,
I
think,
from
a
usability
standpoint.
It
might
not
be
that
big
a
deal
right
like
you
just
keep
zooming
in
until
you
see,
but
you
could
zoom
into
a
level
where
you
see
some
anomaly,
you
don't
see
the
annotation
and
then
are
like
well.
I
guess
that
wasn't
caused
by
a
restart,
but
had
you
zoomed
in
a
little
bit
more,
it
would
have
popped
up.
So
that's
the
problem.
Yeah
yeah.
C
The way I look at it is like kind of a sum over time, where the annotation line becomes fatter depending on the amount of annotations within that certain period, and then you can zoom into it, and then at some point, at the right resolution, it will split up into individual annotations. But yeah, again, I'm just thinking out loud, and the thing is, it requires quite some changes, I think, also in Grafana, so it's not only on the Loki side; yeah, it's a separate issue.
G
Yeah, but anyway, the maximum amount of annotations you get is a thousand, I guess, though.
B
Where you have a thousand annotations crammed into the first 30 seconds or something. Both of the use cases we just identified today have requirements we don't support right now, which is more metadata being sent back to Grafana, right, like sending back that, you know, some of the annotations were omitted, like if we had a function called entry-at-time or first-at-time or last-at-time.
B
So
some
thought
here,
I
suppose
to
like
extending
the
api,
is
really
in
order.
You
know,
what's
these
all
sound,
very
similar
right
like
we're
not
sending
some
data
back
to
the
user
on
requests
of
the
user,
but
they
probably
could
stand
to
know
what
wasn't
sent
back.
It
sounds
like
that's
useful,
like
knowing
the
number
of
lines
that
are
deduped.
I
think
that's
useful
to
see
you
know
knowing
that
your
annotation
queries
were
sort
of
truncated
at
your
request,
because
there's
too
many
to
display
excuse
me.
C
I
must
say
I
have
the
histogram
on
top
of
the
explorer
it's
a
similar
thing
where
you
get
you're,
not
sure
if
you'll
see
the
full
histogram
we're
out
of
time
to
dig
into
the
histogram,
but
I
I
think
you're
a
great
point.
Thank
you
very
very
much,
but
it
is
something
that
needs
more
conversation.
B
We are at time, so I've got to wrap things up. Thanks, everyone, for attending. I want to move this call; we keep talking about it, but it's probably going to have to go later in the day, to be a little bit more accommodating to folks on the west coast of the U.S., because it's like four or five in the morning right now over there. It's only eight a.m. here, and I'm not thrilled about that. So we'll probably push this out a little bit later, but thanks, everybody. Thank you.