From YouTube: Loki Community Call 2020-06-04
C: First order of business: I don't know if Aditya was able to join, but congratulations. I don't see him, sadly, but he is. That is all officially done, Cyril? I know we were in the process of it; the vote was approved? Yeah, it is.
C: So Aditya is the first non-Grafana Loki maintainer. He's been contributing to the project for, I would say, about a year now, basically in his spare time, and over the course of that year his contributions have continued to get more and more complex and valuable. He's demonstrated all the qualities that we like to see in someone who becomes part of the team. That's very exciting, and I can't thank him enough for his hard work.
C
I'm
sad
he's
not
here,
though
there
we
go,
the
other
stuff
I
have
on
the
list.
I
was
gonna
suggest
a
log
ql
demo.
However,
I
think
probably
everybody
here
has
seen
it
if,
if
you
would
like
one,
the
kefir
talk
that
we
did
for
the
loki
future
has
basically
the
same
demo
in
it.
So
if
you've
skipped
through
the
first
25,
four
or
five
minutes
of
me
talking
and
then
maybe
another
10
minutes
of
serial
talking,
you
get
right
to
the
demo.
C: Most of this is stuff we talked about on the Tuesday internal call, but I listed the big items that are in progress now, the ones we've got scheduled for our internal Q2 — which, I don't remember when that ends exactly; it started in May, and I don't remember how long a quarter is.
C
I
should
know
stuff
like
this
right,
but
basically
the
the
high
level
projects
are
alerting,
improving
the
bolt
db,
shipper,
adding
deletes
for
targeted
stream
deletion,
improving
our
docks
and
sort
of
operations
guides
and
some
amount
of
ui
improvements.
This
part
is
still
up
in
the
air
because
we're
going
to
leverage
the
grafana
front,
end
team
and
it
sounds
like
their
priority
shifted
a
little
bit
recently.
C: The big things that we really want to see, though, are some kind of paging support so that we're not stuck at that 1,000-line limit in Grafana. The user experience around what happens when you query a lot of logs right now is pretty clunky: you get a thousand lines back, it's not always clear why you got a thousand back, and the histogram doesn't necessarily reflect the full value. There are some tricky aspects to making that work with Loki across a long time range, but we're going to look at that. I think the UI has been updated to sort of improve the experience, but not really fix the problem; it's still only going to show you the histogram of the results. But some ability to say "show me more logs" — like clicking next page or something — would help, because lots of people have given us feedback that the 1,000-line limit is kind of cumbersome. It's mostly there because the UI doesn't support more.
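As a rough illustration of the paging idea discussed above, here is a minimal sketch (not the planned Grafana implementation), assuming only Loki's documented /loki/api/v1/query_range parameters (query, start, end, limit, direction); the base URL, selector, and time range are hypothetical. A client can page backward past the per-query limit by moving the window's end to just before the oldest timestamp it has already received:

```go
// paging.go: a sketch of paging past Loki's per-query line limit by repeatedly
// moving the end of the time window to just before the oldest timestamp seen.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strconv"
)

type queryRangeResponse struct {
	Data struct {
		Result []struct {
			Values [][2]string `json:"values"` // ["<ns timestamp>", "<log line>"]
		} `json:"result"`
	} `json:"data"`
}

// fetchPage runs one backward query_range call and returns the log lines plus
// the oldest timestamp (ns) seen, so the caller can page further back.
func fetchPage(base, query string, start, end int64, limit int) ([]string, int64, error) {
	params := url.Values{}
	params.Set("query", query)
	params.Set("start", strconv.FormatInt(start, 10))
	params.Set("end", strconv.FormatInt(end, 10))
	params.Set("limit", strconv.Itoa(limit))
	params.Set("direction", "backward")

	resp, err := http.Get(base + "/loki/api/v1/query_range?" + params.Encode())
	if err != nil {
		return nil, 0, err
	}
	defer resp.Body.Close()

	var qr queryRangeResponse
	if err := json.NewDecoder(resp.Body).Decode(&qr); err != nil {
		return nil, 0, err
	}

	var lines []string
	oldest := end
	for _, stream := range qr.Data.Result {
		for _, v := range stream.Values {
			ts, _ := strconv.ParseInt(v[0], 10, 64)
			if ts < oldest {
				oldest = ts
			}
			lines = append(lines, v[1])
		}
	}
	return lines, oldest, nil
}

func main() {
	base, query := "http://localhost:3100", `{job="myapp"}`
	end := int64(1591286400) * 1e9 // hypothetical end time, in nanoseconds
	start := end - 6*3600*1e9      // six hours earlier

	// Each iteration is one "page": at most 1000 lines, ending where the last page began.
	for page := 0; page < 3; page++ {
		lines, oldest, err := fetchPage(base, query, start, end, 1000)
		if err != nil || len(lines) == 0 {
			break
		}
		fmt.Printf("page %d: %d lines\n", page, len(lines))
		end = oldest - 1
	}
}
```

A "show me more logs" button would do essentially the same thing behind the scenes: each click re-queries a narrower window rather than raising the limit.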
C: This is a little bit of a stopgap until we figure out the right long-term approach. The original LogQL v2 goals were to include some kind of JSON parsing, but we've set that aside for now to scope things back down a little, and for now I think we're going to shoot for doing pretty-printing of the JSON results, at least in Grafana, so that JSON logs can be a little easier to read. I talked to Goutham about this a little bit.
C: Yeah? Oh, you missed it. I gave you a huge congrats earlier, so I'll do it again: congratulations on being the first non-Grafana team member for the Loki project. Thanks for all your hard work; I really appreciate it.
C: That's very exciting. And then you just missed me talking about the stuff we're working on right now. I didn't list the things you're working on — Promtail and the write-ahead log. You just merged the series API parallelization; that's really nice.
D: Yeah, I tested this yesterday, and I think I did a six-hour query in less than five or six seconds or something like that, so it does work very well now.
C: That's the way we make other things fast in Loki. In the long term there's probably a schema change to the index in order to optimize this a lot more, but that's not going to happen in the near future, because we want to limit the number of schema changes we make; we've got a few other ideas in mind. For now, though, the parallelization of that data will make it work well, which is exciting.
C: The series API is primarily used in Grafana when you're typing queries, to give you context. If you use a label, it will only show you values that are valid based on the previous labels you've already selected, and that makes query completion a lot more useful. But you can also use it from LogCLI.
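For reference, a minimal sketch of calling the series API directly; the /loki/api/v1/series endpoint and its match[] selector are Loki's documented API, while the base URL, selector, and the app label used for completion are just illustrative:

```go
// series.go: a sketch of querying Loki's series API, which returns the label
// sets of streams matching a selector; Grafana uses this to narrow the label
// values it offers while you type a query.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

type seriesResponse struct {
	Data []map[string]string `json:"data"` // one label set per matching stream
}

func main() {
	// Only streams matching the selector are returned, so the values collected
	// below are the "valid" completions in that context.
	params := url.Values{}
	params.Add("match[]", `{namespace="prod"}`)

	resp, err := http.Get("http://localhost:3100/loki/api/v1/series?" + params.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var sr seriesResponse
	if err := json.NewDecoder(resp.Body).Decode(&sr); err != nil {
		panic(err)
	}

	// Collect the distinct values of one label across the matching streams.
	apps := map[string]bool{}
	for _, labels := range sr.Data {
		if app, ok := labels["app"]; ok {
			apps[app] = true
		}
	}
	fmt.Println("app values valid under the selector:", apps)
}
```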
C: Or if you just want to query what series you have. All right, so, the UI and the JSON thing, to get back to where I was: pretty-printing will help, but ultimately most people want a way to do something like what jq does, where I can select elements out of the JSON and maybe rewrite a log line out of them. That will take some form in the future, but probably not in the next couple of months.
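A minimal sketch of the jq-style, query-time extraction being described; this is not a proposed LogQL syntax, just an illustration of selecting fields out of a JSON log line and rewriting it, with the field names assumed for the example:

```go
// rewrite.go: a sketch of jq-style extraction at query time — pull a few fields
// out of a JSON log line and rewrite it as a compact plain-text line. The field
// names ("level", "msg", "order_id") are assumptions for the example.
package main

import (
	"encoding/json"
	"fmt"
)

func rewrite(line string) string {
	var fields map[string]interface{}
	if err := json.Unmarshal([]byte(line), &fields); err != nil {
		return line // not JSON: leave the line untouched rather than dropping it
	}
	return fmt.Sprintf("%v %v order_id=%v", fields["level"], fields["msg"], fields["order_id"])
}

func main() {
	in := `{"level":"error","msg":"payment failed","order_id":42,"trace":"abc","latency_ms":187}`
	fmt.Println(rewrite(in)) // error payment failed order_id=42
}
```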
C: It's kind of fun to google the last five or more years of JSON logging blog posts, because they're everywhere; everyone is advocating for JSON logging, and I'm trying to understand why. I think Loki is interesting in the sense that we're not opinionated about what the log content is, and our use cases are targeted towards operators and developers, in which case JSON logging is terrible: it's totally unparsable as a human without additional tooling like what we're talking about here. My question really becomes: what are the real use cases for JSON logs? It's a little bit risky to campaign and say "don't use JSON logs", because everybody just spent a lot of years campaigning to do so. But I'm curious now what the right way to support JSON logging is with Loki. Is it tooling at query time to decompose the JSON? Is it tooling at ingestion time to decompose the JSON? Do we not decompose it at all? Primarily for Loki, in the use cases we target, which are ops and dev, I think JSON logging is terrible. We only have a few log streams that use JSON and they're my least favorite to work with. So I think there's an interesting question there.
D: In my experience, I think this whole JSON stance comes from back in the days when you needed JSON to send logs to Elasticsearch; that was the only way to actually get logs into Elasticsearch. So if you were not doing JSON, then you would need some sort of Logstash, and if you wanted to go without Logstash, you needed JSON logs directly.
C: Yeah, yep. Another common thread that I saw in blog posts was that it's more flexible for extensibility. If you wanted to add more log data, like another field, to your logs, and you already have tooling that's parsing those logs, you wouldn't break it so easily; with JSON you can add elements and generally not break downstream clients.
C
Almost
all
the
threads,
though,
have
this
sort
of
feel
of
like
this
allows
you
to
do
all
kinds
of
crazy
post-processing
on
your
logs
and
that's
the
question
that
I'm
really
wondering
is
like
do
people
do
that
you
know
like
was
this
mainly
you
know
to
support
elasticsearch
like
you're,
saying
that's
very
entirely
possible
right,
elasticsearch,
being
the
the
big
open
source
player
in
logging
made
it
very,
very
popular,
and
so
I
don't
know
I'm
curious
about
it.
To
be
honest,.
A: I think part of it is basically similar to what Loki is trying to do, but only the structured-metadata part and not the actually-being-efficient part. It sounds bad to phrase it like this, but maybe it would be fair to say it's a misguided attempt to solve several things at once, and it didn't really work out.
A
Of
course
you
didn't
have
that
that
efficient
back
end,
which
which
you
also
need,
and
then
it's
just,
become
it's
basically
in
between
a
data
lake
and
and
technological
depth
to
to
store
it
as
as
such,
but
I
think
the
the
thinking
behind
it
was
actually
quite
useful
and
bit
of
history
initially
permitted
exposition
form.
It
was
json
as
well
for
as
far
as
I
as
I
remember
and
understand
it
precisely
the
same
reason
it
wasn't.
It
was
a
a
attempt
or
an
attempt
to
to
just
get
structured
data
out
of
stuff.
C: Yeah, it certainly resonated with enough people that we see it everywhere. It was enough of a good idea in —
A: In principle, right. The issue is — and again, jumping over to Prometheus — one of the things which makes Prometheus nice is that it's really easy to emit data which is compatible, and it's also really easy to ingest and use that data. Whereas with JSON logs, it's really easy to create them and then really hard to work with them, and I think this is where this whole effort kind of fell short.
C: Yup. So I'm looking forward to evolving the story around JSON. We get asked about this all the time: lots of people have JSON logging. Should we encourage it? Should we discourage it? Should we stay unopinionated about it? But we need to have some solution for how to effectively deal with it, whether that's a combination of ingestion-time and query-time tooling — it will obviously probably be both of those things — but I'm kind of wondering if we should start.
C: This has like 70 fields in it, though, right? They have to be useful, I think — or not. This is the trouble: it's so easy to add data to JSON that it becomes tricky to decompose it.
C: It's not like plain-text logs escape the problem. In general, I think we, as members of the logging community, should look at what guidance to give. My history with this — and I think people have similar use cases — is that I wrote systems that did order tracking and order processing, and I basically learned that I needed an order ID in every log line that the application wrote, or else, when searching the logs (at the time it was Elasticsearch)...
C
You'd
always
just
you'd,
be
missing
entries
right.
So
that's
the
case
where
json
makes
sense,
because
you
have
an
object
that
you
can
easily
attach
those
fields
to
so
the
same
thing
would
apply
to
plain
text
logs.
You
need
some
way
to
to
add
context
to
them
so
that
they're
easier
to
search
and
filter
so
yeah.
I
think
we
need
to
start.
You
know
maybe
coming
up
with
our
opinions
on
logging
best
practices
or
you
know,
including
json
or
without.
D: Yeah, I don't think we can expect everyone that is using JSON to switch away from it in one day, so I think we need to support JSON logging, and I would say that the direction we should take is not to explode the JSON on the Promtail side.
D: It should probably be at query time if we do that, because there's a ton of information that you don't want to remove, and if you remove it in Promtail, then you don't have it available in Loki anymore. So I think it's better to crunch the data afterwards.
C: So if we have a good solution for that in terms of aliases or ways to reuse queries, maybe the same thing applies to JSON query-type things and it's a non-problem at that point, right?
D: There are like two cases: one where you definitely want to extract data from the JSON to be used as a series, and the other is that it's actually not readable right now. We should cover those two. I think we should make it readable, whether that's in Grafana or in Loki — one or the other has to handle it.
C: For now, pretty-printing, I think, will get us a long way; that at least makes it manageable. But yeah, some level of extraction, whether it's in the UI or not, is going to be a necessity. Right — the write-ahead log for Promtail: Aditya and I have been going back and forth on this a bit.
C
I
think
gotham
pointed
out
before
conceptually
it's
a
bit
goofy
to
have
basically
log
files
which
are
a
form
of
a
right
log,
reading
them
and
then
rewriting
them
to
a
disk.
The
there's
a
a
couple
other
reasons,
though,
like
one
of
the
reasons
is
prompttail,
does
source
data,
that's
not
from
a
file
via
syslog,
and
we
will
be
supporting
like
direct
ingestion
by
http.
C: In those scenarios it makes a little more sense. But the real reason for the write-ahead log is basically what Prometheus remote write has to deal with. Promtail can currently send to multiple Loki servers, but the implementation of that is a bit naive and doesn't really handle what happens if one of them doesn't respond well; they're not isolated properly.
C: If one of your Loki servers is holding a response for a long time, it affects how quickly you can send data to the other; it's effectively single-threaded. Making it asynchronous creates a new problem, which is: how do you handle the data that one of them can't receive?
C
You
have
basically
one
pointer
into
the
file.
We've
generally
solved
this
problem
by
running
multiple
prom
tail
instances
and
each
having
their
own
positions,
file
and
tracking
that,
like
that
works
well,
but
there's
a
fair
amount
of
overhead
there.
It's
not
a
very
sort
of
elegant
solution,
so
a
right
ahead.
Log
will
allow
us
to
do
what
prometheus
does,
which
is.
You
know,
basically
keep
multiple
pointers
into
it
for
multiple,
you
know
sending
clients
to
be
able
to.
You
know
track
how
far
someone
has
and
have
durability
against
crashing.
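A minimal sketch of that idea, making no assumptions about Promtail's eventual implementation: an append-only log with an independent read offset per remote, so a slow or unavailable Loki server only holds back its own pointer, and offsets can be persisted for crash recovery.

```go
// wal.go: a sketch of a write-ahead log with one read pointer per remote
// client. Entries are appended once; each remote advances its own offset only
// after its batch is acknowledged, so a slow remote doesn't block the others.
package main

import "fmt"

type wal struct {
	entries []string       // in a real WAL these would be segments on disk
	offsets map[string]int // per-remote read position
}

func newWAL(remotes ...string) *wal {
	w := &wal{offsets: map[string]int{}}
	for _, r := range remotes {
		w.offsets[r] = 0
	}
	return w
}

// Append adds a log line to the tail of the log.
func (w *wal) Append(line string) { w.entries = append(w.entries, line) }

// NextBatch returns up to max unsent entries for one remote without advancing it.
func (w *wal) NextBatch(remote string, max int) []string {
	start := w.offsets[remote]
	end := start + max
	if end > len(w.entries) {
		end = len(w.entries)
	}
	return w.entries[start:end]
}

// Ack advances a remote's pointer once its batch has been confirmed.
func (w *wal) Ack(remote string, n int) { w.offsets[remote] += n }

func main() {
	w := newWAL("loki-a", "loki-b")
	w.Append("line 1")
	w.Append("line 2")

	// loki-a confirms its batch; loki-b is slow and stays at offset 0.
	batch := w.NextBatch("loki-a", 100)
	w.Ack("loki-a", len(batch))
	fmt.Println(w.offsets) // map[loki-a:2 loki-b:0]
}
```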
G: Wait, why do we need the write-ahead log to have multiple pointers into a file?
G: Yep, in your tracking file you just have a per-remote entry.
C
Know
very
well,
I
mean
conceptually
like
you're
gonna
have
a
file
right,
which
has
all
the
and
and
someone
is
gonna,
be
at
point.
A
and
someone's
gonna
be
at
point
b
how
that's
stored.
I
don't
necessarily
care,
but
we
would
need
to
have
a
way
for
tracking
different
positions
in
that
file
or
in
the
log.
In
the
right
hand,
log.
E: So we'd get to the point where we have one write-ahead log, but two different clients looking for the same values would be reading from it and writing their own positions.
G
So
my
question
is
so:
let's:
let's
forget:
we
have
a
writer
headlock.
Let's
have
a
single
file
that
we
that
a
log,
a
log
stream
that
we
are
tailing,
we
can
still
achieve
multiple
pointers
into
the
log
stream.
We
don't
read
or
write
a
headlock.
C
Yeah
I
like
this
wait
so,
but
that
doesn't
help
if
we're
not
reading
from
a
log
file,
and
it
also
the
problem
we
have
with
log
files
is
log
files
roll
outside
of
our
control.
G
Which
is
which
might
be
a
good
thing,
it's
like,
because
we
need
to
roll
the
prometheus
right,
like
the
prompter
like
right
headlock
as
well,
if
the
remote
doesn't
respond
and
we
need
to
build
in
controls
and
limits,
and
all
of
that.
C
We
would
be
able
to
control
that
right,
like
we
would
say,
buffer
x,
amount
of
data
or
x
amount
of
time
or
something
right
like
we
don't
have
any
control
over
the
files
themselves,
so
a
file
can
roll,
and
that
sets
a
point
in
time
which
we
then
either
need
to
send.
What's
there
or
ultimately,
that
file
can
be
deleted
right,
like
we
don't
have
any.
G
I
mean
like
this
again.
This
seems
super
goofy
to
me
and
I
don't
see
other
tailing
clients
do
this.
I
don't
know.
D: To me it feels the same: I understand the use case for things that are not files, but when it comes to sending data from a file, I don't think we need a write-ahead log. If someone is really complaining about file rolling, it's up to them not to roll the file too early; they can change that, and we can give some advice on it.
C: I guess to me it's no different; you solve it the same way. At least in how Promtail is structured internally, you would solve it the same way — not that that should drive the decision on how we do it, and the amount of work to rewrite Promtail is a valid concern. But if we're receiving from syslog and we need to write a write-ahead log for that, I don't see why we wouldn't take advantage of it.
C: And then we send that — we create a batch, and the pointer is adjusted in the positions file before we confirm the batch is sent, so...
G: Yeah, we can fix that. I think a write-ahead log is over-engineering it.
D: And I like to keep the idea of Promtail being a small agent, so duplicating the data on disk feels like it's going to be a pain to operate. If you have a single node that logs a lot, then you're going to need a very big PVC, because everything is doubled, and then it starts to be tricky to operate.
C
Yeah,
I
mean
it's
definitely
an
ios
problem.
Right,
like
you
know,
because
you're
duplicating
an
iops.
You
know
that
or
double.
F: One piece of advice here from Fluentd experience: because you run this as a DaemonSet on worker nodes, people need some control over how big these buffers can become, and if the buffers get full you also need a back-off strategy, something to signal how to behave in that case. So you also need to look at the reverse case.
F
So
if
things
don't
go
well,
because
the
store
is
not
responding
or
whatever
and
people
need
control
of
that
at
least
our
experience
with
doing
the
openshift
is
that
a
lot
of
people
ask
for
these
features.
D: Yeah, it's interesting that you bring up Fluentd, because Fluentd does exactly this sort of WAL, but they call it a buffer, and it can go on disk. And like Periklis is saying, you can configure the maximum bytes the buffer will handle, and above that it will start skipping data. So maybe we should look into how Fluentd is doing this.
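A minimal sketch of the bounded-buffer behaviour being described (not Fluentd's or Promtail's actual mechanism): a buffer capped at a configurable byte size that drops and counts the oldest entries once the cap is exceeded, which is one possible answer to the back-off question raised above.

```go
// buffer.go: a sketch of a size-bounded client buffer — once the configured
// byte limit is exceeded, the oldest entries are dropped (and counted) rather
// than letting the buffer grow without bound while the remote is unavailable.
package main

import "fmt"

type boundedBuffer struct {
	maxBytes int
	size     int
	lines    []string
	dropped  int // how many lines we had to skip
}

func (b *boundedBuffer) Push(line string) {
	b.lines = append(b.lines, line)
	b.size += len(line)
	// Evict from the head until we're back under the limit.
	for b.size > b.maxBytes && len(b.lines) > 0 {
		b.size -= len(b.lines[0])
		b.lines = b.lines[1:]
		b.dropped++
	}
}

func main() {
	b := &boundedBuffer{maxBytes: 32}
	for i := 0; i < 10; i++ {
		b.Push(fmt.Sprintf("log line %d", i))
	}
	fmt.Printf("kept=%d dropped=%d bytes=%d\n", len(b.lines), b.dropped, b.size)
}
```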
F: This was meant not just as advice but at the same time as a big warning: at least from our customer cases, we can tell that it's not easy to find a good balance between the workloads logging things on the nodes and Fluentd's own logging. By creating these buffers on the node, you need to find a balance before you take a node away because of missing space, and that's not an easy task, definitely not. That's why, until now, we don't expose these things to users in cluster logging in OpenShift, but people persistently ask for these knobs.
C: Cool, thank you. Yeah — and I'm forgetting his name now — there's another fellow from Red Hat who way back opened issues (I actually met him at KubeCon) about how, when you do disk logging, you can effectively take out a node from all of the pods' logging. We actually have a workaround for that now with the Docker logging driver: you can do logging with no disk touching, but that's more of a desired state.
C
I
think
we
want
to
have
support
for
either
through
the
docker
logging
driver,
and
I
don't
know
if
kubernetes
will
ever
directly
support
this
or
not,
but
basically
no
disk
logging,
because
the
like
the
iop,
the
I
o
operations
like
if
you
saturate
the
I
o
of
a
node,
you
affect
every
pod
on
that
node
effectively,
regardless
of
whether
they're
logging
or
not,
and
there's
not
much.
You
can
do
to
about
that.
F
Cryo
has
this
kind
of
features
but
yeah
you
need
to
test
them
and
then
figure
out
how
things
work.
F
So,
for
example,
we
we
have
this
kind
of
strange
situation
where
we
would
like
to
back
off
on
this
and
say
to
the
cryo
to
cryo:
hey,
stop
emitting
things
so
go
slower,
and
this
would,
let's
say,
put
the
the
the
single
container
in
a
state
like
going
slower
yeah.
It's
not
really
the
right
wording
for
that,
but.
F
Basically,
it's
not
there,
so
that's
why
we
stay
again
still
with
the
files
and
people
can't
select
between
either
throwing
an
exception
and
the
fluency
thread
stops
and
stop
emitting
things.
F: So users will ask for customizations here, or flags on Promtail, to tune this for their situation.
D: We're running out of time, yeah. I just wanted to quickly check with Periklis: how is it going with your experiment of running the BoltDB shipper? I think you were also trying to set up — what's the name of the gossip? — you wanted to use memberlist. How is it going?
F
We
are
we're
preparing
our
manifest
currently
for
a
first
internal
release
against
our
customer
zero,
which
is
an
internal
customer.
We,
we
have,
let's
say
very
few
volume
currently
on
that
thing,
which
is
basically
me
only
sending
logs.
So
I
cannot
tell
currently
a
lot
of
how
good
or
how
performant
it
is.
However,
I
hope
that
they
can
share
my
experiences
in
the
next
weeks
when
we
have
the
more
volume
of
that
system.
F
So
far
it
looks
promising
because
I
don't
see-
let's
say
averaging
the
logs
or
any
anything
that
cause
havoc,
but
I
don't
have
the
volume
on
the
traffic
currently
to
say.
Let's
say
we
run
an
issue
x
or
y,
so
stay
tuned.
F
With
a
big
question
mark,
we
we,
the
big
the
big
interest
into
this
specific
area,
is
to
run
without
any
dependency
on
console
or
ncd
in
environments
where
we
don't
have
it
on
board,
and
so
we
we
also
try
to
try
to
let's
say
to
to
break
the
barrier,
how
low
cost
you
can
run
loki,
meaning
just
with
some
computing
power
and
some
s3
storage.
C: Currently — so I put performance down as a question mark, as we just don't know; we don't have it in a cluster yet with any real, substantial volume. Chunk deduping: we have work in place for that. We have to change Cortex to allow it; the chunk-deduping code there for replication uses a cache, but it currently doesn't write the index entry when it dedupes, and we need it to write the index entry because the index is not shared in this case.
C: Sandeep's working on a tool to do some validation of basic queries, so we'll put two clusters side by side, and the query-tee tool that we wrote for Cortex is going to get the ability to validate that the responses are the same and to generate some metrics on that. And then there are the general performance questions — it's interesting, like this.
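A minimal sketch of the side-by-side validation idea; this is not the Cortex query-tee tool itself, just the core notion of sending one query to two clusters and checking that the responses agree, with hypothetical endpoint URLs:

```go
// compare.go: a sketch of side-by-side query validation — send the same request
// to two clusters and report whether the responses match. A real tool would
// compare parsed results (ignoring ordering and timing statistics) rather than
// raw bytes.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func fetch(base, query string) ([]byte, error) {
	params := url.Values{}
	params.Set("query", query)
	params.Set("limit", "10")
	resp, err := http.Get(base + "/loki/api/v1/query_range?" + params.Encode())
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	query := `{job="myapp"}`
	a, errA := fetch("http://loki-a:3100", query)
	b, errB := fetch("http://loki-b:3100", query)
	if errA != nil || errB != nil {
		fmt.Println("request failed:", errA, errB)
		return
	}
	fmt.Println("responses identical:", string(a) == string(b))
}
```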
C
The
work
we're
doing
here
is
very
similar
to
what
cortex
is
doing
with
tstb
blocks
and
what
thanos
is
doing,
because
those
are
you
know
highly
aligned
or
coupled
it's
just.
We
have
a
slightly
different
approach,
so
they
have
a
per
block
index
and
we
still
have
a
sort
of
global
index,
but
we're
gonna
end
up
facing
a
lot
of
the
same
issues
like
you
have
to
download
the
index
to
the
queries.
C
You
know
as
your
data
set
grows
and
grows
and
grows
that
becomes
its
own
thing,
so
you
know
cortex
and
thanos
are
introducing
a
store,
gateway
and
levels
of
caching
to
help
work.
With
that,
I
expect
we're
going
to
need
to
go
down
similar
roads
as
well.
D
I
don't
expect
any
performance
issue
on
the
push
path,
so
it
should
be
pretty
much
the
same
yep
right
now.
I
know
yeah
oslo
is
below
a
second
batch
received,
yeah.
C
Downloaded
right,
like
they'll,
be
in
memory
and
it's
it's
questions
of
like
when
that
stuff
gets
really
big.
You
got
to
start
figuring
out
how
to
shard
it
and
how
you're
going
to
download
like
the
startup
time
of
a
query
if
it
has
to
go
fetch
index
files
or
at
query
time.
I
think
the
way
it's
implemented
right
now
or
I
should
say
I
know
the
way
it's
implemented
right
now.
Is
we
fetch
the
index?
C
If
we
don't
have
it
already,
so
there's
a
download
time
there
that
the
query
has
to
pay
and
then
it
keeps
it
and
stores
it
for
amount
of
time
and
so
levels
of
caching,
there
will
be
important,
but
memcache
has
not
been
a
great
fit
for
us
for
caching
really
large
things.
So
I
think
we
need
to
look
at
redis,
probably
or
something
else
for
that.
F: So do you expect, at a double- or triple-digit-gigabyte scale, issues at query time? Because you need to download all the indexes that are shipped to S3 on the querier side and then merge them, yeah?
C: Yep, so we're probably going to suggest changing to 24-hour index periods to change how this works a little, because right now we have a weekly index by default, and a week-long index, in our bigger environments, can be as much as a few gigabytes. Downloading a few gigabytes from S3 at query...
C
Time
is
going
to
take
you
know
many
seconds,
so
that
is
a
hit
at
the
query
performance
breaking
that
down
into
smaller
chunks,
like
24-hour
periods
means
you
have
to
download
less
of
them.
Some
of
this
stuff
will
will
work
itself
out
already
kind
of
nicely
with
the
query
front
end
and
the
way
queries
are
split,
so
each
individual
query
would
only
be
downloading
or
processing.
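A minimal sketch of why the shorter period helps, assuming the usual scheme of one index table per period (the table-naming scheme here is illustrative): a querier only has to fetch the tables whose period overlaps the query's time range, so with daily tables a short query touches one or two small tables instead of one multi-gigabyte weekly table.

```go
// periods.go: a sketch of mapping a query time range onto per-period index
// tables (daily vs weekly), which is why shorter index periods mean less index
// to download for a typical query.
package main

import (
	"fmt"
	"time"
)

// tablesFor returns the period tables overlapping [start, end), numbering each
// table by its period index since the Unix epoch.
func tablesFor(prefix string, period time.Duration, start, end time.Time) []string {
	periodSec := int64(period.Seconds())
	first := start.Unix() / periodSec
	last := (end.Unix() - 1) / periodSec

	var tables []string
	for n := first; n <= last; n++ {
		tables = append(tables, fmt.Sprintf("%s%d", prefix, n))
	}
	return tables
}

func main() {
	end := time.Date(2020, 6, 6, 2, 0, 0, 0, time.UTC)
	start := end.Add(-6 * time.Hour) // a six-hour query crossing a day boundary

	daily := tablesFor("index_", 24*time.Hour, start, end)
	weekly := tablesFor("index_", 7*24*time.Hour, start, end)
	fmt.Println("daily scheme fetches: ", daily)  // two small, single-day tables
	fmt.Println("weekly scheme fetches:", weekly) // one table, but it spans the whole week
}
```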
C: Some of this will already work itself out kind of nicely with the query frontend and the way queries are split, so each individual query would only be downloading or processing a small period of time within the split, and you get some parallelization benefits as the queriers all download in parallel. But there will be some changes there, because querying one central large store like Bigtable is going to be fast even for those large periods of time, and when we're managing the index ourselves and have to download it, or cache it, or download and re-download it, we're going to pay some of that cost. I'm not sure how that — well...
C: That's why it's a question mark — I don't know. I don't think it's going to be terrible, but I'm sure it can be in some environments, like huge environments with very diverse query loads. You're going to need queriers with a huge amount of storage — and by huge I mean environments probably closer to 10 or even 100 terabytes a day, where your index is going to be significant in size.
C
Right,
like
our
environments,
that
are
at
five
terabytes
a
day.
The
index
is
still,
I
don't
know
a
gigabyte
a
week.
You
know
that's
not
unreasonable
and
also
potentially
in
longer
retention
periods
like
if
you
had
a
very
long
retention
period,
and
you
had
query
loads
that
were
trying
to
query
over
very
long
periods
like
that
stuff
is
going
to
be
tricky,
but
that's
already
kind
of
tricky.
Now,
like
that's,
not
necessarily
the
use
case
that
we've
optimized
well
yeah,
which
is,
I
want
to
run
a
query
over
a
year's
worth
of
data.
C
Loki
is
more
than
happy
to
sort
of
just
continue
to
paralyze
and
process
those
requests,
but
we
don't
really
have
a
framework
for
submitting
a
job
and
getting
an
async
result,
which
is
what
you
would
need
for
something
that
large
or
paginating
it
or
something
like
that,
but
letting
the
client
sort
that
out,
but
the
the
client
side
part
of
that
we're
sort
of
talking
about
now,
which
is
you
know
with
grafana,
and
things
like.
If
you
make
a
query
that
times
out,
is
it?
C
Can
we
make
it
so
that
you
can
get
basically
the
results
that
were
partial
results
of
what
was
returned
and
then
make
it
easy
to
sort
of
resubmit?
The
query
continuing
from
where
you
left
off.
I
didn't
write
that
in
here,
because
it's
not
really
a
goal,
but
it
maybe
if
we
can
do
it
easy
enough
for
the
next.
C
You
know
quarters
worth
of
work,
but
so
I
mean
in
a
nutshell:
I'm
not
expecting
any.
You
know
significant
performance
problems,
but
we
will
face.
You
know
the
reality
that
we're
downloading
the
index
files
and
merging
them
and
when
those
index
files
become
multiple
gigabytes,
that
that
time
is
significant
right,
like
we'll,
have
to
figure
out
optimizations.
A: I'm monitoring the other meeting; there is no one in there, okay, so we can just roll over. But a point of order before that: would it make sense to just make this call one hour long? I think we're always running over, which is actually good, because we have enough content for more than those 30 minutes. But then we should adapt the process to reality, no?
A: No, I'm pretty sure we ran over on the last call as well, and we basically had the same discussion. Maybe it was a different community call, but I'm pretty certain it was this one — which, again, is a really good sign that there's enough stuff to be talked about.
A
We
tend
not
to,
but
I
don't
really
care
like
it's
it's
I.
I
can
tell
you
what
what
has
been
done
before,
but
this
is
just
information.
I
I
deliberately
don't
have
an
opinion
on
on.
If,
if
you
want
to
record
the
loki
bug
scrub
or
not,
if
you
want
to
no
worries-
and
I
can-
I
can
stop
the
recording,
I
can
create
a
new
recording
and
then
we
can
even
upload
the
correct
bits.
C
Yeah
I
mean
it
doesn't
hurt
anything
people
can
watch
if
they
want.
I
don't
know
why
anybody
would
want
to
watch
it,
but
I
guess
it's
curious
to
see
I
mean
what's
interesting
to
me
about
it.
Right
is,
like
I
mean
we
have
this
problem
right.
Like
you
know,
we
have
150
open
issues
the
two
or
three
times
we've
done
this
we've
started
at
the
end
with
the
oldest
issues
that
has
the
problem
where
we
like
we've
talked
about
those
issues
again
and
again,
and
usually
nothing
has
changed.
C
We
started
at
the
very
beginning
of
the
issues
and
the
problem
with
those
is
if
they've
not
really
even
been
triaged
or
looked
at
many
of
them
are
not
worth
having
an
audience.
This
large
look
at
so
then
I
just
clicked
into
the
middle
and
this
week
I'm
just
going
to
start
right
in
the
middle.
Let's
just
pick
some
issues
and
see
what
we
think
about
them
right.
C: Yeah, we could, yeah. I don't remember when we started.
A: We can do that, but that's at least not the initial intention of the bug scrubs as the Prometheus team started doing them. The intention there was to really quickly iterate through all the issues, make sure they're still valid, get a really quick, lightweight consensus on whether to close, add tags, remove tags, maybe poke some people, whatever, and move on — walk through it briskly, at pace, so everything is reasonably clean and reasonably up to date, and everyone has a rough idea of what to do.
A: So at least to me, the main beauty of having a concerted bug scrub is that you get the hive mind to quickly say: is this still valid? Do we need more information? Do we want to reassign this to someone? Do we need to poke a person for more information? Maybe it's not valid anymore because we fixed it and just didn't update it. The main benefit is that you have more brains; you can quickly ask others and just walk through it relatively quickly.
A
Okay,
you
know
you
have
a
clean,
clean
issue
list,
a
clean
bug
list
and
then
you
can
work
on
top
of
that
because
you
know
there
is
some
baseline
consensus
because
you
you
talk
to
other
people,
you
don't
have
to
think
about
whom
should
I
be
doing
this
or
that-
or
maybe
it's
like
this,
because
you
had
someone
to
talk
to
recently
and
also
you
write
this
down.
A: Okay, so I'll stop recording the community call and then restart recording for the bug scrub, correct? Or should I not record the bug scrub?