From YouTube: Loki Community Meeting 2021-10-07
B
All right, so if there's nothing, or something missing, throw it on the agenda, but we'll talk about ObservabilityCON and Loki 2.4, and, since Roger's here and I've been chatting with him a bit about some work he was doing on the tailing in Promtail, I figured we should talk about that.
B
I want to do a demo and talk a little bit about the work that is going on for making Loki easier to run, and then Kavi is going to do a demo on a project that turned into something we'd like to see finished up, so that folks can use LogCLI against text files on their local machines. Like a super fancy grep.
B
Like I said, feel free to add stuff if you'd like to talk about other things. Hopefully everybody's seen ObservabilityCON is coming: there's a talk on Loki on Wednesday, November 10th. I guess we have Pacific time and UTC as the options, so put it in your calendar and check it out. There will be a live Q&A, so that's the incentive to come watch it live. If you can't make it live, we're going to make the recordings available.
B
I believe the recordings will be available the same day for this conference, which is nice. Trevor, Owen, and Ivana are going to go through what's new: basically, how we've been working a lot on the usability and flexibility of Loki for a lot of common use cases. But I'm not going to give away any more than that; you'll have to come watch it in the same time frame.
B
We are targeting the next Loki release to land basically right around ObservabilityCON, and it's going to wrap up most of the features we're talking about in what will most likely be non-beta form. So, things you've already heard us talk about: recording rules. Danny's talked about this before, and there's an implementation out.
B
He has been improving that implementation to take advantage, very cleverly, of a lot of code already in Prometheus for keeping a write-ahead log and replaying that log, to give a lot more durability to recording rules. So we're pretty excited about this; we are running it, and actually I think it will be available in all of our hosted clusters probably by the end of this week.
B
So if you want to play around with Loki recording rules, I guess I'll just shill our hosted offering real quick: it's a good way to go see it, and if you make a new account, I believe you'll end up with the new Grafana alerting experience too. That has easier ways to create recording rules through the UI, so you can play around with, you know, logs to metrics. I recommend it.
B
Yeah, definitely. I think the first sort of fun use case for us is: we log a lot of statistics around Loki queries in Loki's own logs, and we write some recording rules to basically extract some per-customer query metrics, you know, query latency, query throughput.
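
A minimal sketch of the kind of rule being described, in the Prometheus-style rule file format the Loki ruler consumes. The stream selector, label names, and the unwrapped field here are illustrative guesses, not taken from the call:

    groups:
      - name: per-tenant-query-stats
        rules:
          # Average query latency per tenant, parsed out of Loki's own
          # "metrics.go" query-stats log lines (logfmt formatted).
          - record: tenant:loki_query_duration_seconds:avg1m
            expr: |
              avg by (org_id) (
                avg_over_time(
                  {container="query-frontend"} |= "metrics.go"
                    | logfmt
                    | unwrap duration(duration) [1m]
                )
              )
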
B
We could do this with traditional metrics, but the nature of our sort of distributed infrastructure means you typically get these metrics scraped from tens or hundreds of pods, depending on the size of the cluster. So you get a lot of cardinality from the environment. I'm not sure I'm going to say this is the best way to solve all of your problems, but what's interesting is what you end up with if you do it through Loki.
B
So
if
you
have
access
to
these
metrics
through
logs
and
you
write
a
recording
rule,
you
create
one
series
per
customer
and
not
potentially
hundreds
of
series
per
customer,
so
it
has
a
nice
cardinality
reduction.
On
generally,
a
dimension
you
probably
aren't
interested
in,
which
is
what
the
performance
per
component
within
the
cluster
is
we're
just
looking
at
the
customer's
performance
as
a
whole.
So
that's
been
fun
yeah
out
of
order.
Similarly,
owen
I'll,
just
kick
this
over
to
you,
since
this
has
been
your.
C
...champion effort. I was super happy to finally have recording rules round-tripping in all of our environments. This is something that we've talked about for, I don't know, probably over a year at this point, and to finally kind of see it all working and hooked up as we expected was a real treat. But anyways, out of order: we've been running this now for, goodness, I don't know, a few months maybe? Does that sound right, Ed?
C
So this is both kind of coinciding with out-of-order being a newly supported format, and with us increasingly pushing towards having more sensible defaults in Loki. A lot of our effort is, and will continue to be, going into making Loki easier to use, and this is kind of an example of that. There are a lot of edge cases and weirdness around different agents or deployment topologies that can cause out-of-order problems, so we're going to go ahead and make this the default mode of operation, because there's not really a reason not to, from our perspective.
B
Yeah, I was going to put a picture in here of what we asked on the issues... oh actually, that was... yeah, there's both of these. At some point I'm going to circle back around to the issue that we opened on what people wanted in 2021 for Loki, because I actually think we did a number of those things.
B
I mean, there are a lot of requests on the list, so we didn't do all of them, but we did some of the really commonly requested ones: certainly out-of-order logs, and then this one, the scalable simple deployment. We've been trying to figure out the right naming strategy for this. I'm going to talk a little bit more about it in a couple seconds, but the general idea is: how can you run Loki more easily, including outside of Kubernetes?
B
So we're doing this by kind of opinionating the stack a bit, and I'll talk about the trade-offs in a minute, but that's going to be a big feature at ObservabilityCON, if we can get it done in time, which I think we can. Roger, I don't mean to put you on the spot here, I don't know...
D
I can talk about it. So we have a user in our local environment that produces... they write a lot of logs, like ridiculously high volumes, and, you know, they were running it first in Kubernetes, running as a DaemonSet on a per-node basis with Promtail.
D
But based on the Docker, like, Kubernetes environment, they had limited disk size, so it rotated the logs too fast. So now they've moved to storing the logs on an EFS volume, and then we have Promtail reading from EFS.
D
But
what
we
noticed
was
that
they
were
still
the
problem
they
had
was
like.
They
get
gaps
because
either
it
would,
it
would
rotate
the
log
too
fast.
D
We
would
miss
an
entire
log
file
or,
with
this
nfs
scenario,
we
would
still
lose
log
files
like
we
would
have
gaps
in
the
log
where
it
would
have
prompted
kind
of
missed
the
last
section
of
the
log
and
what
I,
what
I
found
was
that
there's
a
there's
a
risk
since,
since
when
you,
when
they
rotate
the
log
they
they
do,
they
flush
to
this
the
the
file
cache
and
then
they
rename
the
file,
and
since
the
flashing
of
the
data
needs
to
to
traverse
the
network,
it
splits
that
up
into
smaller
chunks-
and
it
seems
like
prom
tail-
is
able
to
reach
end
of
file
multiple
times
before.
D
...all of that cache has been flushed to disk, causing Promtail to close the file too early, before everything has been flushed.
D
So I've made a change to the underlying tail package that we use, so that when it detects that the file has been deleted or renamed, it will keep the file open and wait until there have been no more changes for a certain time period before it actually closes the file, instead of just reading it one more time.
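
In rough Go terms, the dwell-time idea looks something like this. This is a minimal sketch of the behavior Roger describes, not the actual patch to the tail package; the function and all names are illustrative:

    package tailsketch

    import (
        "io"
        "os"
        "time"
    )

    // drainUntilQuiet keeps reading a rotated-away file until no new bytes
    // have appeared for `dwell`, instead of doing a single final read.
    func drainUntilQuiet(f *os.File, send func([]byte), dwell time.Duration) {
        buf := make([]byte, 64*1024)
        lastData := time.Now()
        for time.Since(lastData) < dwell {
            n, err := f.Read(buf)
            if n > 0 {
                send(buf[:n])
                lastData = time.Now() // new bytes arrived: reset the quiet timer
            }
            if err == io.EOF {
                // EOF is not conclusive on NFS: the writer's page cache may
                // still be flushing across the network. Wait and re-poll.
                time.Sleep(100 * time.Millisecond)
            } else if err != nil {
                return // real read error: give up on this handle
            }
        }
    }
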
B
I wrote the code that you're in, so I empathize with the problems you're probably facing. Well, I should say that we took this library, and I added the code that does the one last read attempt, and, dare I say this in a recorded public session, I'm a bit amazed that it's worked as well as it has, to be honest. Going back... this is story time then, guys, because we've got a short call today.
B
So
one
of
the
first
things
that
I
worked
on
and
lookee
project
when
I
wasn't,
I
was
new
to
go
and
new
to
a
lot
of
stuff
was.
Was
this
and
you
know
we
had
a
number
of
problems
with
prom
tail,
not
following
and
catching
all
of
the
logs.
B
I
looked
at
a
number
of
different
ways,
so
the
tailing
implementation
supports
file,
watches
and
notifications
on
changes,
and
it
also
does
polling.
I
wish
I
could
remember
at
this
point
who
who
it
was
that
did
a
really
really
good
talk
at
fosdem
about
I.
I
can't
even
remember
the
project
now.
My
brain
is
I'll,
find
it
and
put
in
the
notes,
because
it's
worth
watching
on
the
different
ways
to
tail
a
file
and
like
different
ways
to
try
to
basically
do
the
best
job.
B
You
can,
and
I
looked
at
how
other
systems
do
it
like
file,
beat
file,
beat
polls,
but
filebeat
had
a
distinct
advantage
that
they
could
pull
and
send
things
because
at
the
time,
elastic
supports
out
of
order,
but
but
prom
tail
doesn't
or
loki
did
not.
So
we
couldn't
leave
a
thread
open
on
the
old
file
handle.
B
So
that's
something
we
can
actually
talk
about
now
is
a
different
option
that
we
have
available
to
us
now,
which
is
having
say
a
thread
per
file,
so
that
you'd,
because
that's
the
way
they
handle
it,
was
a
timeout
at
the
end.
That
said,
you
know,
if
you
don't
get
any
updates
on
this
file
after
a
certain
amount
of
time,
consider
it
close
and
stop
tailing
it.
Actually
speaking
out
loud
is
the
first
that
I
really
thought
about
this.
B
To
be
honest,
roger,
I
would
have
given
you
some
pointers
earlier
so
so
we
went
with
this
approach
of
like
when
you
get
to
the
a
notification
that
the
file
has
changed.
The
file
name
is
changed
or
basically,
when
you
we
do
a
stat
on
the
file
and
the
file
handle.
We
get
back,
isn't
the
same
as
the
one
we
have.
We
know
something
has
changed.
It
does
one
last
attempt
to
read
the
file
and
the
contents
to
the
end
of
the
file
before
moving
to
the
new
one
which
actually
works
really.
B
aside from the scenario you described, where you might be trickling some data in over a network connection. So what you're doing sounds, you know, like a good solution to me, right; I didn't see anybody else solve this differently, other than having some dwell time that you wait before saying, okay, I'm really done with this file. The other problem that you faced in the first place is one that I'm also very familiar with, and it's probably more common than people might realize.
B
If anything is interrupting sending to Loki, and Promtail holds that file handle: if you can fill an entire additional file before the read on the previous file finishes, you can miss an entire file. I've actually seen that happen before too; some of our ingress controllers could log, you know, many megabytes a second, it seems. Well, not in a single file, but the problem is interesting, and the solutions you're describing are interesting too. I'm not sure if any other people have come across this, but yeah.
B
I
think
we
can
push
kind
of
through
the
the
idea
of
tailing
it.
I'm
a
bit
curious
now,
if
it,
if
there's
it,
might
make
sense
to
actually
use
separate
threaded
tailors
now
that
out
of
order
is
that
that
might
allow
you
to
even
go
back
to
that
first
use
case
where
we
just
keep
multiple
file
handles
open
and
we
don't
have
to.
B
I
will
say,
though,
that
that,
while
that
library
works
really
well,
it's
like
not
necessarily
the
most
tested
in
that
code.
It's
like
we
forked
it
because
the
library
was
unsupported,
and
so
we've
made
some
changes,
but
yeah
I'm
happy
to
to
work
with
you
to
try
to
see
what
we
can
do
to
improve
this,
because
it
is
you're,
not
the
only
one,
I'm
sure
the
network
use
case.
B
I
would
say:
that's
some
pretty
good
sleuthing
there
to
figure
out
what
was
happening
because
I
generally
just
sort
of
tell
people
don't
use
nfs,
but
I
mean
how
can
you
escape
it
right?
So
that's
kind
of
a
combo.
B
I
know
I
always
tend
to
forget
that
too,
that
most
persistent
volumes
are
just
nfs
mounted.
I
mean
how
else
would
it
work
right
so
yeah?
I
would
like
to
see
us
improve
this
and
I'm
curious
now
how
easy
it
would
be
to
do
the
the
out
of
order
is,
was
the
original
limitation
as
to
why
it
was
solved
this
way,
but
now
that
we
can
work
around
that,
maybe
we
can.
B
So we created the canary, and the answer is really good, honestly. We have canaries on all of our nodes; we have more than a thousand of them running now, all reporting into this ops cluster. They generate millions and millions of logs a day, and I won't tell you that the miss count is zero, but we're in the five or six nines of, generally, log traffic, and the ones that miss I can usually explain: it's not Loki's fault.
B
You
know
like
so
the
good
news
is
prom
tail
does
work
really
well
and
even
on
these
high
volume
cases.
You
know
we
do
see
this,
but
it
is.
I
do
sometimes
chuckle
a
bit
when
you,
when
you
sent
me
that
link
to
that
pr,
and
I
went
looking
at
that
code
again.
It
was
like
trip
down
memory
lane
that
I
tried
to
forget
thanks
roger
appreciate
it.
Thank
you.
C
Yeah, yeah, I'm just hoping that Ed's going to be the one doing that, and not me, right? Like, that was fantastic.
B
Thanks, Jordan, appreciate that. I made a PowerPoint presentation for y'all.
B
It's really, really simple, but I'm going to do a relatively quick run-through, because you guys just listened to me talk for a while, and I have a demo that will hopefully work. The single binary of Loki has, in my opinion, been one of the best things it offers in terms of making it very easy to just run the thing. You know, the original Helm install, before we had a distributed Helm chart, just installs it; it's easy to set up, easy to get going.
B
You
know,
I
run
it
locally
on
raspberry,
pi's
and
and
actually
it's
you
know
how
I
tend
to
do
development
just
running
this
binary
and
I
can
hook
a
debugger
to
it.
It's
it's
great,
but
it
has
a
couple
big
limitations
currently,
which
is
obviously
it's
not
highly
available.
So
if
you
are
running
a
single
binary
well,
you
can
easily
put
hundreds
of
gigabytes
of
logs
a
day
into
it.
If
you
want
to
upgrade
or
something
you
have
to
restart
it.
B
That's
downtime
and
the
query
performance
has
never
been
very
good
because
there's
no
query
front
end.
So
it's
just
a
single
threaded
query,
so
you
tell
it
to
query
something
and
it
will
just
go
fetch
all
the
chunks,
sequentially
and
process
them
as
fast
as
they
can.
But
you
know
that's
typically
in
the
hundreds
of
megabytes
a
second
so
not
super
fast,
so
we
kind
of
set
out
to
say
okay,
what
can
we
do
to
make
the
single
binary
more
scalable?
B
You know, I won't say the word "nightmare", but it's a challenge, right, to try to set up all of the moving parts of Loki, specifically if your scale isn't that huge and you just want the features. So what we're doing is making the single binary easier to run, and adding in... so, this isn't the best next slide, but basically what the new version of our single binary will do is include the query frontend, to allow parallelization of queries, and it has some additional intelligence for how to communicate with itself.
B
So
you
know
the
next
evolution.
If
you
start
with
just
one,
you
know
this
is
the
story
I
want
to
tell.
Is
you
start
with
one
loki?
If
you
want
or
some
better
slightly
better
query
performance,
you
add
two
more.
B
The
minimum
required
for
actual
will
be
three,
because
the
way
our
replication
factor
code
works
so
with
three,
you
can
support
a
single
node
down
and
because
you'll
be
adding
more
cores
for
both
reads
and
writes.
B
This
would
allow
some
more
throughput
on
both
reads
and
rights,
but
then
we're
going
to
take
it
one
step
farther,
which
is
adding
some
new
targets
called
read
and
write,
and
so,
instead
of
going
right
into
a
microservices
mode,
where
you'd
have
to
start
looking
at
distributors
and
queries
and
adjusters
we're
keeping
some
of
those
components
bundled
together,
so
that
you
have
just
two
paths
for
scaling
now,
so
you
you
can
take
what
was
your
three
single
binaries
and
then
run
them
with
a
target
flag
of
you
know,
write
and
read.
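
In command-line terms, something like this (a sketch of the read/write targets being described; the config path is illustrative):

    loki -config.file=/etc/loki/config.yaml -target=write   # run three of these for HA
    loki -config.file=/etc/loki/config.yaml -target=read    # add as many as you need
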
B
You
still
want
three
four
rights
for
high
availability,
but
you
can
add
kind
of
as
many
read
paths
as
you
want,
and
this
allows
now
lets
you
scale.
The
read
and
write
path
separately
gives
you
some
different
stability
around
the
read
and
write
path.
So
you
know
your
read
path,
nodes
can
crash
and
it
won't
affect
your
right
path
or
they
can
be
taken
down
or
done.
They're,
scaled
separately,
etc,
and
you
can
add
a
bunch
more
of
them,
so
they
do
horizontally
scale.
B
And
you
know
you
can
go
down
this
road
to
get
to.
You
know
hundreds
or
thousands
of
gigabytes
terabytes
a
day
of
ingest
and
also
to
get
the
higher
levels
of
query
performance.
B
And
then
also
you
still
have
the
option
of
microservices.
If
you
want
to
get
to
the
road
of
having
sort
of
the
most
flexibility
and
capability
over
loki,
so
microservices
have
some
advantages.
Still,
it
has
a
better
caching
layer,
the
caching
that
we're
doing,
at
least
in
the
first
version
of
this
you
know
more
easily.
Scalable
simple
deployment
is
really
naive,
whereas
it's
not
centralized
so
there's
some
redundancy
there.
So
microservices
give
you
centralized
caches,
you
know
even
more
components.
B
It
gives
you
a
little
bit
better
observability,
because
each
component
can
be
monitored
for
logs
and
metrics
separately.
We're
working
on
this
in
this
you
know
modified
or
improved
single
binary,
but
when
you
run
the
distributor
and
the
ingester
and
these
things
in
the
same
process
like
their
logs
are
and
metrics
are
the
same.
So
you
you,
you
can't,
for
example,
see
the
processor
time
of
how
long
it
took
in
the
distributor
versus
the
ingestor,
because
the
metric
that
outputs,
that
is
the
same
so
anyway,
the
microservices
approach,
will
always
be
available.
B
If
you're
in
kubernetes,
it's
generally
a
lot
easier
to
work
around,
but
you
know
the
idea
that
this
would
cover
most
everybody's
use
cases
where
you
can
have
this
easier
to
run,
scalable
simple
binary.
So
in
practice
it
looks
a
little
bit
like
this.
Now
this
is
kubernetes
just
because
it's
actually
easier
for
us
to
deploy
and
do
things
the
way
we're
set
up
to
operate,
but
there's
no
reason
that
it
has
to
be.
B
We
have
a
couple
gateways
that
sit
in
front
of
that,
so
that's
basically
just
doing
some
authentication
for
us
and
does
some
routing
to
these
nodes.
The
read
and
write
path.
So
there's
a
thing
in
there.
You
call
singleton
just
just
ignore
that
guy
for
now
we're
going
to
try
to
make
that
go
away.
B
But
do
you
have
these
separate,
read
and
write
paths
we've
played
around
with
you
know
these
three
right
path:
adjusters
and
we're
able
to
do
50
megabytes
a
second
into
those
which
is
kind
of
shooting,
for
that
you
know
terabyte
or
terabyte
and
a
half
a
day.
So
you
you
don't
need
a
huge
amount
of
right
path
to
be
able
to
support
pretty
good
volume.
Let's
see
if
oh
seven
days
is
a
long
time,
let's
do
24
hours
just
to
play
around
with
so
like
karen.
B
I suspect that's a data source timeout, but the point that I'm trying to prove here, if I can get it to work, is the parallelization here, and how that would work for making it easy to add more query processing. So...
B
Yeah. So, Kubernetes tip of the day: if you work with StatefulSets, make sure you always set your pod management policy to Parallel, or else these will start one at a time. Ask us why we know this, everybody that's experienced it, especially when you're adding things like ingesters, which tend to be a bit slower to start.
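
That knob lives on the StatefulSet spec; a minimal fragment (the resource name here is illustrative):

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: loki-write              # illustrative name
    spec:
      podManagementPolicy: Parallel # the default, OrderedReady, starts pods one at a time
      replicas: 3
      # serviceName, selector, and pod template omitted for brevity
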
B
I just ran it, and it did this, yeah. Let's get on that... we're getting there... here, all right. So now we've got 20 of these. Watch me eat my words here, it'll be the same speed, but... so, what's actually inside of these? The thing that I worked on most recently: Loki has a thing called a query scheduler, along with the query frontend.
So
what
we're
working
on
is
minimizing
the
amount
of
like
sort
of
dns
configuration
that
you
need
to
do
to
run
loki
so
having
the
query.
C
Do you have 10 gigs?
B
Not even that many cores on these machines; bad neighbors in our dev environment here. Okay... oh, this is going to roll them out one at a time. Oh no, because it's...
B
Let's just roll them out a little faster; let's kill them all at once. It works with or without the space, that part at least. Dave, cool, I didn't know that.
B
This is just a quick lesson in the probably two most important configs that you need for query parallelism, too.
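
A guess at the two knobs meant here, based on the Loki docs of this era (exact section placement varies by version):

    query_range:
      split_queries_by_interval: 30m  # break long range queries into 30m sub-queries
    limits_config:
      max_query_parallelism: 32       # how many sub-queries run in parallel per query
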
B
So
the
couple
things
I
would
point
out
about
this
should
have
quit.
While
I
was
ahead,
but
the
there
are
some
trade-offs
that
we're
making.
I
was
mentioning
to
like
make
it
easier
to
run
which
have
to
do
with
caching,
and
I
would
say,
though,
the
performance
is
still
really
good,
so
you
know
we
were
shooting
for
yeah
I
broke
it
got
greedy,
we
were
shooting
for.
Is
this
sort
of
trade-off
of
what
we
can
do
today?
So
there
will
be
more
improvements
coming
around
the
caching
layer.
B
Make
it
easier
to
run
loki
in
h
a
in
horizontally
scalable
way
without
needing
so
you
know,
viewer
outside
of
kubernetes.
You
just
need
these
two
binaries
and
they
can
share
the
same
config
file.
E
Do we have anything on how this compares to the one we run inside?
E
No, I'm asking, like: is there any benchmark we tried with this running the read and write targets, compared to the one we run... yeah, so different components, basically.
B
So we've gone just a bit head-to-head, in order to sort of make sure that the performance was acceptable. I don't have a whole lot in the way of real detailed benchmarks, but in the same cluster, when we were querying, though it was with a smaller set of queries, probably three I think, at the time the performance was the same, or very, very similar.
B
You end up using the same chunk multiple times in separate queries, so having a centralized cache that can share that same info saves you a lot of round trips. Other...
B
Yeah, and that's a good point, because the config is the same; we're keeping it the same. So there's another part that I didn't mention: there's an effort, as part of this, to reduce the config complexity as well, to stop having to fill out sort of redundant configs, and that work will also make it a little bit easier to run this. But the same config will apply, so you could take this, like Owen just said, and introduce memcached in here, and add the config to tell Loki to use a centralized memcached.
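
For example, a chunk cache pointed at a shared memcached might look roughly like this (a sketch; the address and service name are illustrative):

    chunk_store_config:
      chunk_cache_config:
        memcached_client:
          host: memcached.loki.svc.cluster.local  # illustrative address
          service: memcached-client               # port name resolved via DNS
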
B
So why don't I let Kavi share, and, while these nodes boot and I figure out how to fudge the screenshot, Kavi's going to tell us about the work that he did on using LogCLI without Loki, to be like Loki.
E
So yeah, okay. It's basically, like I said, one of the hackathon projects we did a couple of months back, and just to note, it's in the same state: I haven't touched it since. So if something breaks, yeah, I don't know, we'll see. So yeah.
E
So the basic idea is: we have LogQL, right, and it's a standard language, and currently we can use this LogQL only against Loki servers, because the Loki servers are the ones that fetch the log lines from the chunks for us,
E
do the parsing and stuff, and apply LogQL on top of it, right? But if you think about it, if you already have the files locally, you don't need Loki servers, because all the log files you have are in text files, or some file on your local machine, and you want to just run LogQL on top.
E
Currently, you can't, basically. So that's the idea, and as for why we want to do that, there are various reasons. Like I said, LogQL is a standalone language, right, and you can do way more things for processing logs with it, and yeah, you don't need to have a Loki server. Why should we have a Loki server just to do LogQL?
E
That's, yeah, that's kind of the motivation behind it. Plus, yeah, also learning LogQL, right: you see Loki and you want to try some LogQL queries; you don't have to set up all these big Loki servers just to learn LogQL, basically. Plus, the other point is that in the community channel,
E
We
often
get
asked
a
lot
of
questions
about
local
and
usually
the
conversation
goes
like
some
people
ask
the
question
and
we
just
go
and
try,
and
we
kind
of
like
try
to
guess,
because
it's
always
difficult
to
have
the
exact
log
entries
and
our
exact
to
simulate
this
exact
thing
right
with
our
infrastructure.
So
we
try
to
guess
and
if
it
goes
wrong,
they
come
back.
They
try
this
query
and
they
come
back
again.
E
The
the
problem
here
is
like
the
people
who
is
asking
this
question:
they
already
have
the
log
files
right.
They
have
this
log
entry
lines,
but
still
they
couldn't
be
able
to
like
quickly
validate
some
some
piece
of
this
local
just
because
they
need
to
send
it
to
loki
servers
and
it
has
to
be
updated
right
so
yeah,
that's
kind
of
some
of
the
motivations
behind
it
of
this.
So
yeah,
that's
basically
my
presentation
here
and
let's
see
the
demo,
I
don't
know,
did
you
even
ask
like.
E
Okay, cool, so yeah. I have some log files here. The first one is taken from Loki; I just quickly took this from our infrastructure, and also from the internet, actually, so forgive me if it doesn't make much sense. So yeah, this is our own Loki query log... sorry, Loki logs... and also I have some nginx logs, and also I have an nginx JSON log, basically just to try out these things. Okay.
E
So the only extra thing we added, and yeah, we are adding this in LogCLI. Probably everyone knows LogCLI; it's a CLI, too, where you can, yeah, query Loki servers, basically; you can apply LogQL via LogCLI. So we have LogCLI, and the only extra thing we added for this feature is this flag here. So what this does is: instead of sending a request to the Loki servers to fetch the logs, it tries to take the log entries from standard input.
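
In other words, usage shaped roughly like this (assuming the flag is --stdin, as in current LogCLI; the file name and query are illustrative):

    cat loki.log | logcli --stdin query '{foo="bar"}'
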
E
So usually you do something like this: loki.log, and you pipe it to it, and then you make a query, basically. So yeah, before trying out the query, I have to mention a couple of points here. Because, if you use Loki and LogQL, you probably know that labels and label selectors play a major role there, right: whenever we scrape logs, we add certain labels. But when we talk about text files from standard input, or from local files, there are no labels, right?
E
So currently we kind of fake it, because LogQL itself kind of complains with a syntax error if there is no label selector, but we are working to remove that kind of weird dependency there. So yeah, basically, I'm going to type this foo equal to bar, but it has no meaning here; that's why I explained it before. This is just to fake giving some label selector to LogQL, but it has no meaning in our scenario currently.
E
So let's try this one. So yeah, this should display all the logs in the log file, because we haven't queried for anything, right? So let's maybe start simple; let's try some simple line matcher, maybe. So let's say level equal to error, I don't know... yeah, there are, like, a couple of lines that have error here, right. So this is a simple line matcher, and we can also do the same thing with a label matcher, since our logs are in logfmt, so we can use the logfmt parser.
E
And then we can do the same thing. Now, if you see, after applying this logfmt, everything is a label now, right? So we can use everything as a label here. So the last query, what we did, we can also do like this, and it should be the same... yeah, so you have a couple of error log lines, right. So let's maybe try something else.
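
The two versions of that query, reconstructed (illustrative; the foo="bar" selector is the placeholder explained above, and the --stdin flag is an assumption):

    # simple line filter
    cat loki.log | logcli --stdin query '{foo="bar"} |= "level=error"'
    # same result via the logfmt parser and a label matcher
    cat loki.log | logcli --stdin query '{foo="bar"} | logfmt | level="error"'
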
E
Yeah, it's only... I mean, yeah, only for log queries. Metric queries are a bit weird: it kind of works, but yeah, we are trying to take it out of scope for now, because when we say metric queries, we need a timestamp, right, apart from the labels, and it's difficult to assign timestamps to these log entries coming from standard input. Yeah, I think we tried a few methods there, like assigning some ordering, basically, but yeah.
E
First we're going to make it work completely for log queries, and then, yeah, we'll park metric queries and pick them up later. Basically, that's the idea.
F
Yep, it's a three or four... nice. We should be able to chain, I think. So, let's see.
E
So, let's see if it works first. This is, yeah, this is a normal nginx log, without any JSON or anything, so here there are two options, right: we can either use regex or pattern. Let's try the pattern parser here.
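
A hypothetical pattern-parser query over an nginx access log, to show the shape (the pattern, labels, and --stdin flag are illustrative, not taken from the demo):

    cat nginx.log | logcli --stdin query \
      '{foo="bar"} | pattern "<ip> - - <_> \"<method> <path> <_>\" <status> <_>" | status="500"'
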
E
Cool, so yeah, I think it works. I think it should, I mean, because there is no magic in this stdin flag; it's just simple: it does everything like before, like normal LogQL. The only thing is, yeah, it doesn't make network calls, and it takes input from stdin, basically. So, ideally, it should work for all the log queries, technically.
B
Yeah, this is really fun. I've got a number of use cases that I would use this for, just working with files that I have on disk, where maybe they're big JSON files and I'd only want to see part of them.
B
Metric queries are a bit weird, because you're going to get a bunch of, you know, range-interval output, and I'm not sure how you'd use those; we'd have to figure out somehow how to shoehorn in some timestamp parsing to make it effective for some of those things. But, I don't know, some of the counts... it's possible, but anyway, for now we're working on getting... yeah.
B
Kavi's working on making it useful for, you know, text file filtering and reformatting, and giving you a chance to play around with your LogQL locally, and I think it's pretty slick. Kavi, nice work, man.
E
Cool. I think... I don't know if people noticed, there are a couple of bugs there. Like, if you see, while printing the logs it will print the direction, and I think if you try to reverse it, I think it will panic or something. I guess I just followed this...
E
I mean, there are a couple of options here, right. So I think we already discussed a little bit: if we decide to take different files, we can have the file name as a label, for example, so that's one option, yeah. But yeah, you're right: I think it should work without a label matcher, for sure, because that should definitely be an option.
B
Yeah, so, you know, while Kavi was working, I managed to get, like, 150 gigabytes a second worth of query... should have made up a bigger number, a terabyte a second, all right. No, I actually reverted it back, so we're not running so many nodes. But I actually did go look up the talk I was talking about; it's linked in the community call notes.
B
Fabian Stäber gave this talk at FOSDEM 2017, about the tool that he wrote called grok_exporter, where he went into some really fascinating detail around how to poll files, and the different operating systems and options, and that was a huge help to me at the time, for trying to figure out how to manage this in Promtail.
B
I
ended
up
taking
the
easy
way
out
and
just
pulling
because
it's
reliable
and
it
works,
and
it's
been
working
well
for
us
actually
promptly,
does
use
fs
notify
to
know
when
files
are
added
and
removed,
but
we
don't
use
it
for
knowing
when
events
are
added
to
a
file,
but
just
closing
the
loop
on
that
one
for
y'all,
that's
it
that's
all
we
got
on
the
agenda.
A
Talk about... I think I'm done. Maybe I will see you guys tomorrow.