From YouTube: Loki Community Call 2020-09-03
A
Okay, so the agenda is actually pretty short for today; it's been a bit quiet. The main work we're going to talk about, at least what we're working on, is the boltdb-shipper and the alerting stuff. But if anybody has anything else they want to talk about, or add, or questions, feel free to pop them in at the bottom of the list.
B
Sure. So we merged alerting and have been running it in a number of different environments, basically cleaning up small bugs here and there. Most of my work recently has been in Cortex and in some tooling that we have: there's a repository called cortex-tools, which is a CLI used for interacting with the ruler, among other things. So basically I've been pushing out changes to support LogQL in those respective places.
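For illustration, pushing a rule-group file to the ruler with the cortex-tools CLI looks roughly like the sketch below; the address, tenant ID and file name are placeholders, and the exact flags can differ between cortex-tools versions.

    # Load a local rule-group file into the ruler for a given tenant
    cortextool rules load ./loki-rules.yaml \
        --address=http://ruler.example.com \
        --id=example-tenant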
B
So that's kind of the first stage. We are going to add a way to persist the metrics created during alerting by Prometheus as well; that's initially not included in the Loki distribution of alerting, because we don't bundle a metric store at all. A little bit of background: basically, when you evaluate alerts you persist a few metrics to a backing Prometheus store, which makes sense if you're running Prometheus; with Loki it's a little bit trickier.
B
There's not that much surface area where this is actually useful yet, in my opinion. You can grep for things like people leaking HTTP credentials, things like that, but for this to become significantly more useful we're probably going to need the LogQL v2 updates merged, which is being headed by Cyril when he comes back. I think that's next week; I hope it's next week.
C
That makes sense. But I guess you worked on the framework itself, so you can configure something like Prometheus alert rules, but with LogQL instead of... yeah.
B
The API is the exact same, with the distinction that we're just evaluating LogQL instead of PromQL, and then a few messy internals, because we don't actually write series to Prometheus anywhere.
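For illustration, a rule group for the Loki ruler follows the Prometheus rules format, only the expression is LogQL; the selector and threshold below are made-up placeholders.

    groups:
      - name: example
        rules:
          - alert: HighLogErrorRate
            # LogQL metric query instead of a PromQL expression
            expr: sum(rate({app="example-app"} |= "error" [5m])) > 10
            for: 5m
            labels:
              severity: warning
            annotations:
              summary: More than 10 error lines per second in example-app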
A
All right, so the boltdb-shipper. I threw another name out for this, but everybody shot me down on needing yet another branding for it. So the boltdb-shipper is a new index type in Loki. It's probably not new for people that have heard me say this before, but it replaces the requirement for a NoSQL store like DynamoDB or Bigtable, and instead uses BoltDB and flushes the BoltDB files to the object store.
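As a rough sketch, enabling it looks something like the config below; the date, object store and paths are placeholders for your own environment.

    schema_config:
      configs:
        - from: 2020-09-01
          store: boltdb-shipper
          object_store: gcs        # or s3, filesystem, ...
          schema: v11
          index:
            prefix: index_
            period: 24h
    storage_config:
      boltdb_shipper:
        active_index_directory: /loki/boltdb-shipper-active
        cache_location: /loki/boltdb-shipper-cache
        shared_store: gcs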
A
The initial performance was sort of comically bad, but Sandeep has been doing some awesome work compressing index files and adding a compactor, and actually it's doing pretty well now. So Sandeep, you want to give us an update on where we're at now and what's next?
D
So we are doing about two TB per day in that cluster, and we are running two parallel clusters, one using Bigtable and another using boltdb-shipper. We are replicating the writes to both clusters so that they have the same logs, and we are using a query tee in front of them, so we compare the query performance and the query responses. The Bigtable cluster receives many more queries because we run a canary that points to it.
D
So the query load is not the same, but the queries that we are getting from Grafana are the same and are getting compared. For some of the queries boltdb-shipper is faster, and for some of the queries Bigtable is faster. The queries which are slower are for recent data, for which the index is still with the ingesters, and we have to query the ingesters for the logs, and that is slow.
A
Awesome, yeah, that's fantastic. It actually was a little quicker to get to comparable performance than we expected. The work coming next, so the sort of limitation now, it's not really a limitation, 'it can be improved' is a better way to say it: the way this works right now is that the ingesters are creating these BoltDB files and then uploading them at a regular interval, and until they upload those files they hold the only knowledge of those flushed chunks.
A
So the queriers have to ask the ingesters, 'hey, give me the data for the chunks that you flushed but that are part of the index you haven't shared with everybody yet,' and that forces the ingesters to do querying like the queriers do, which then requires their memory usage to be increased, and as Sandeep says, performance is an issue too, because they're doing other things. So Sandeep's idea, which I think is a good one, is rather than having the ingesters do the query and return the logs, we're gonna...
A
...add another API to the gRPC connection to just have them do the index lookup, so that you can ask the ingester, 'hey, just give me the chunk IDs for the chunks you flushed that you haven't shared yet,' and then the queriers can download that data and process it separately. That removes a lot of the extra memory burden of having the ingesters do that lookup.
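A minimal sketch of what such a lookup surface might look like, purely for illustration; the names here are invented and are not Loki's actual API.

    package ingester

    import (
        "context"
        "time"

        "github.com/prometheus/prometheus/pkg/labels"
    )

    // FlushedChunkIndex is a hypothetical interface for the idea discussed above:
    // instead of evaluating the query and returning log lines, the ingester only
    // returns the IDs of chunks it has flushed but whose index it has not yet
    // uploaded, and the querier downloads and processes those chunks itself.
    type FlushedChunkIndex interface {
        GetChunkIDs(ctx context.Context, from, through time.Time, matchers ...*labels.Matcher) ([]string, error)
    }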
A
So that should help close the gap there, as well as stop us from crashing our ingesters, and that's pretty exciting. I think what we're hoping to do is get that experimental flag off of this, and I think we're probably going to be able to do so not too terribly long in the future, certainly for workloads similar to what we've tested on. And then, in the long term...
A
Anybody that's familiar with what Cortex has been doing with the TSDB store conversion: we'll face some of those same problems eventually, where you go orders of magnitude bigger and your index is now a terabyte in size, probably two orders of magnitude bigger than what we are now. So if you're doing something like 100 terabytes of logs a day and you're shipping a one-terabyte index, the download of that becomes the problem, and that's where we'll have to look into something like a store gateway, and sort of splitting and sharding that into chunks or into separate memory components, and those kinds of things I expect to come.
A
You know, query splitting and caching, not yet at least, but we will probably end up there anyway, just because there'll be performance gains for doing that too. So yeah, good stuff. Perry, you want to chat for a bit about the loki-benchmarks project?
E
Sure. I also have to share my screen a little bit for that, because, I mean, we have nine people here at least, so we should make this, let's say, not just over voice. Let's give it a sec.
E
I've created just two slides, just for visualization, and I only need to figure out Wayland screen-sharing techniques again.
C
Just a cold component, cool, yeah. Okay, sorry, Perry.
E
No, it's okay! You just need to give me the voice, because I cannot see you, so I don't know where you are; I can currently only hear you. Okay, Loki benchmarks. For some of you this is not really new; some of you work with me in the same company, that's why. But in general, we have this kind of very small flame that started.
E
A couple of weeks ago in my team, we wanted to have a tool to evaluate Loki outside of Grafana Cloud, especially in our case for a typical OpenShift cluster. So it means something like: I have Loki running inside the cluster somewhere, I want to stress test it, and for every time I run these stress tests, or benchmarks as we started calling them, I want to have a report, discuss around this report and get some nice, let's say, insights.
E
So basically, we started writing something around Ginkgo's benchmarking suite.
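To give a flavor of what that looks like, a minimal Ginkgo v1 Measure spec is sketched below; the loki-benchmarks suite itself is structured differently, and the push body and threshold here are placeholders.

    package benchmarks_test

    import (
        "testing"
        "time"

        . "github.com/onsi/ginkgo"
        . "github.com/onsi/gomega"
    )

    func TestBenchmarks(t *testing.T) {
        RegisterFailHandler(Fail)
        RunSpecs(t, "Loki benchmarks")
    }

    var _ = Describe("High-volume writes", func() {
        // Measure runs the body N times and reports timing statistics.
        Measure("pushing a batch of synthetic log lines", func(b Benchmarker) {
            runtime := b.Time("push", func() {
                // a real test would push a batch of log lines to Loki's push API here
                time.Sleep(10 * time.Millisecond) // stand-in for the actual push
            })
            Expect(runtime.Seconds()).To(BeNumerically("<", 2.0))
        }, 10) // take 10 samples
    })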
E
So, something like: how can we structure and write BDD-like benchmarks? And we came up with a small, simple repository, which we call loki-benchmarks. It's just a small driver test suite where you throw these tests against a Kubernetes cluster that has Loki running inside, we collect some measurements through the metrics that Loki exports, and then we create just a simple report so that people can see what is running.
E
So what we currently really support is running Loki inside Kubernetes. The benchmarking suite doesn't deploy it; it expects that you have Loki running somewhere.
E
We do a couple of measurements around request duration, and you can install into this specific cluster a synthetic logger, which is based on Cyril's logger work from months or years ago, which I had to bump a little bit.
E
That is basically a Promtail that gets synthetic log messages and sends them against Loki, plus a small reader utility, so you can currently do query ranges, and then you can collect samples from the metrics you have inside Loki and create a report out of it. So basically loki-benchmarks is only a runner; it's just something you can run from your machine, and it doesn't really do anything more than that: you point it to a Kubernetes cluster.
E
You deploy there, by giving the deployment manifests, the loggers you want to have and the readers. Currently they are untainted, so they may land on the same nodes as your Loki cluster. And then we just run a local ./prometheus instance and scrape the Loki metrics over port-forwarding.
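As an illustration of that local setup (namespace, service name and port are placeholders), it is roughly:

    # Forward Loki's HTTP port from the cluster to the local machine
    kubectl --namespace loki port-forward svc/loki 3100:3100

    # Minimal local prometheus.yml scraping the forwarded endpoint
    scrape_configs:
      - job_name: loki
        scrape_interval: 15s
        static_configs:
          - targets: ['localhost:3100']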
E
This is just a simple thing where we collect the metrics over a sample period and sample interval, and then we can currently export them to CSV and some gnuplot, and you get, let's say, a small nice overview in Markdown. So the gist of it, and I hope this is readable, is basically that the loki-benchmarks suite is something where you declare a config.
E
It already includes two tests, which we call high-volume reads and high-volume writes, but basically you declare what your logger is and what your querier is. This is something where you can come up with your own custom images that know how to push to and read from Loki. The metrics declaration part here is more about how we want to scrape.
E
You can basically create new tests in this suite and then declare them as scenarios, and there you can control things like how many samples I want to take and at which range.
E
What range should my rate queries cover, and at which interval do I want to collect the samples; the same for writers: how many writers do I want to have, and how many messages should they push as throughput. And the readers are the same thing, where you can declare the query for query_range currently, where you say, okay, I want to collect one hour of data and then rate it, sum it, and produce something by level, something like that.
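For illustration, such a scenario declaration could look roughly like the sketch below; the exact keys in loki-benchmarks may differ, and the LogQL query is just an example of the "rate one hour of data and sum it by level" idea.

    scenarios:
      high_volume_reads:
        samples:
          total: 10            # how many samples to take
          interval: 30s        # how often to collect one
        writers:
          replicas: 10
          throughput: 100      # log lines per second per writer
        readers:
          replicas: 2
          # query_range query: rate one hour of data and sum it by level
          query: sum by (level) (rate({client="promtail"}[1h]))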
E
The final thing is the thresholds; this is, let's say, a latch.
E
I call it demo time, but it's not really something to demo; it's more about where things currently are. So, we have one repo; it's currently under a Red Hat organization called observatorium, but we were already talking with Ed about migrating this to something more easily identifiable.
E
Basically, this is the repo where you can add your tests, contribute and create things for your setup, and the report is basically a small Markdown file which you can, let's say, transform to HTML, something that you can zip and give to somebody else, or you can archive it to see what happened over days. But this is basically more a scratchpad of results.
E
Like, hey, I've done my benchmark and I want to have, let's say, the results of my measurements, and since a little bit of CSV and gnuplot can work up some nice images, we integrated that there. So you basically get here a hierarchical...
E
...result, like for each component that plays into a test you get, let's say, the p99, p50 and the average of the durations. For example, in writes I have only the distributor and the ingesters, which is why I scrape only those metrics. However, if we go down to reads, which basically land on the query frontend, I take measurements there on duration, then they land on the queriers and then they land again on the ingesters.
E
So you may wish to, let's say, do the tests in an end-to-end fashion, but you may also want to see what happens on each component when I do these high-volume reads, as we call them currently. Yeah, and to finish this, I would like to call out some limitations we are looking for contributions on; the one thing, and this is probably the major blocker, is this:
E
Other reader patterns like queries and tails; maybe running partial benchmarks; not using a local Prometheus instance to scrape Loki but something that you have in your setup, things like that. Yeah, and tainted loggers is something for when you want to benchmark in true isolation, like you put Loki on a specific set of machines in your Kubernetes or whatever, and then you want to put your loggers and readers somewhere else. So basically, these are all ideas that I'm coming up with for improving this small thing and moving it forward for us.
E
It's good enough for writing new tests for our scenarios, but for sure it's probably not enough for you people, so I'm calling out to the community: if someone wants to contribute, reach out, there is a ton of work to do. So let me close the presentation, and feel free to shoot questions.
A
That's awesome, Perry, thank you. We talked a little bit about this earlier this week, and this is an area too where we want to extend this to sort of non-Kubernetes testing, as much as possible. Hopefully, what I'd love to see is a framework where people can basically benchmark Loki on their own hardware, and we can sort of figure out how to aggregate those results.
A
We get asked a lot about how fast Loki is, or what about this scenario and setup, and we don't really have any benchmark numbers for that. So it would be nice to be able to enable the community to generate those, as well as test things themselves. So that's a nice ultimate goal there, but it's a great start; thanks very much for the work you've done so far. Before I move on to the LogQL stuff:
A
Anybody have any questions on that? All right, so the LogQL v2 update is that I'm not quite ready to give an update yet. We've kind of sat on it for a bit because Cyril's been on PTO; he's coming back next week, and yesterday we sort of started digging up some discussion. But in the process of doing so, I realized that I probably should have dug this discussion up sooner, because there's some stuff that we haven't decided on yet.
A
I'm a bit concerned about a kind of too-many-cooks problem if we open it up to the wider community, because honestly, the decision needs to be made sort of, I don't know, maybe just by rough consensus, because it's going to be picking between like two words, and what I'm seeing so far is that I don't think anybody's going to have a more logical argument.
A
So I'm worried that if we add too many people to the doc before we kind of narrow down some of those more ambiguous decisions, there are just going to be more people presenting different logical arguments that don't really have any more bearing or weight. Too many cooks in the kitchen is what I'm calling that.
A
So next week, I think, when Cyril's back, or shortly thereafter, there's a small group of I think five stakeholders that are looking at the design, and once we get these couple of last things figured out, we will publish a version of that for people to look at. I imagine at that point Cyril will probably update his proof of concept, and hopefully we'll get something that we can play around with a bit and be on our way to having the ability to extract labels at query time, which is going to be, especially with what Owen said, and in general, a huge advantage.
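To make "extracting labels at query time" concrete, the general idea is queries along the lines of the sketch below; the syntax was still being designed at the time of this call, so these are only illustrative, with made-up label selectors.

    # Parse logfmt at query time and filter on an extracted label
    {app="example-app"} | logfmt | latency > 250ms

    # Aggregate by a label extracted from JSON log lines
    sum by (level) (rate({app="example-app"} | json [5m]))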
A
I did want to call this out because I really like it, though I don't have the PR handy: recently merged into Grafana was a way, when you're in Explore and you're using JSON or logfmt, to select and view individual fields. Grafana does some...
A
...client-side parsing of the response, and if it recognizes the format of the logs, it now allows you to select individual components. This is nice particularly for sort of big JSON documents, but I'm also using it a lot now with logfmt, where a lot of times I don't care about most of the elements, I only really want to see something like latency or throughput, and you can go in and select those now. I don't know if that's in a released version of Grafana yet; I think it probably is.
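For example, given a logfmt line like this made-up one, Explore can detect the individual fields, so you can display just latency or status instead of the whole line:

    level=info msg="request completed" method=GET path=/loki/api/v1/query_range status=200 latency=38.6ms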
E
Which will be at the end of September.
A
Gotcha. All right, so I teased you guys too early then, unless you have a master build or a nightly build that you're working from. But I think this is exciting, because we have the master build running internally and I've been using it. We only have a couple of minutes left; does anybody have anything else? Questions, comments, concerns?