From YouTube: Loki Community Meeting 2021-09-02
A
Yeah, let's do this: the September Loki community call. I'm going to share this window because I've got some important stats to cover here. I'm pretty excited about this.
A
It sneaks up on you, you know, even though it's once a month. But I think it's important to note that, out of the Tempo, Loki, and Grafana community calls last month, we got the most likes, although Grafana got slightly more views than we did. So if you're watching this on YouTube, you know what to do. Do you want to kick us off and talk about out-of-order? I think that's what everybody's here for.
B
Sure. So yeah, the big out-of-order update: out-of-order ingestion is actually live in Grafana Cloud in all production clusters. So if you are using Grafana Cloud right now, you can actually send out-of-order data with up to one hour of tolerance.
B
This is very exciting for us. There may be a few things that we tweak here and there, but largely we're pretty happy with it. We've been testing this for a long time, and it's been live for over a week now.
B
At this point I ran a couple of stats in one of our production clusters and found that, by byte volume, about one and a half percent of our ingestion traffic was being rejected due to out-of-order errors, and that dropped basically down to zero afterwards.
B
This is largely because when you see out-of-order errors, most of the time it's a second or two out of order, maybe up to a minute, but usually nothing longer than that. The out-of-order tolerance now makes it a lot easier to get logs from other agents that aren't ordering-aware, that don't have the concept of Loki's previous ordering constraint. Things like Fluentd are probably the biggest ones there, but we've also seen it with Vector and Fluent Bit and all that sort of stuff. So that should pretty much work a lot better out of the box now, and should play nicely with retries, network interruptions, and all sorts of fun stuff.
B
That's also going to be included in the 2.4 release, which we're looking at cutting pretty quickly, but I'm not going to give a date for that yet. Let's see, anything else I missed... there are a couple of other use cases for out-of-order that we're excited about, particularly reviving the AWS Lambda logs work and getting other sorts of hyper-ephemeral service logs into Loki.
B
But there should be a little bit more information about that, maybe at the next community call; we're still fleshing a few things out internally.
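For anyone reading along, here is a rough sketch of what the per-tenant limits for this feature might look like once 2.4 ships. The `unordered_writes` flag name is an assumption based on the feature described above, so check the release documentation for your version:

```yaml
# Hypothetical Loki limits snippet for out-of-order ingestion (sketch only).
limits_config:
  # Assumed flag name for accepting unordered writes once 2.4 is released.
  unordered_writes: true
  # Still reject samples that fall far outside the acceptance window.
  reject_old_samples: true
  reject_old_samples_max_age: 168h
```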
A
Is it k61 or k60? I figured if someone wanted to play around with out-of-order, either of those builds would probably have it. k61 is new this week, though, right?
A
I'm not sure; every time I say a date it's always wrong, but let's say by the end of the month, hopefully a little sooner than that. And the out-of-order stuff will be released, probably as beta. Although, as Owen mentioned, we are running it in prod, so we're pretty confident in its ability already. But 2.4, so yeah, hopefully.
C
Thanks, Ed. Yeah, in 2.3 we released the initial experimental version of recording rules for Loki. In case you don't know what that is, it's a way to evaluate LogQL queries on an interval; if you use a metric query, that produces a whole bunch of samples, and we can send those to Prometheus.
C
So you can have a time series of metrics that comes out of those, and that's really useful if you've got queries that touch a lot of data: you can run those queries on very short intervals over a short time range and get your metrics out that way.
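As an illustration of the kind of rule being described, here is a minimal sketch of a Loki recording-rule group. The group/rule structure mirrors Prometheus rules, but the group name, recorded metric name, and the LogQL query are made up for this example, and the ruler still needs remote_write configured to ship the samples to Prometheus:

```yaml
# Sketch of a Loki recording rule group (illustrative names and query).
groups:
  - name: nginx-rules
    interval: 1m                     # evaluate the LogQL metric query every minute
    rules:
      - record: nginx:requests:rate1m
        # A LogQL metric query producing samples the ruler can remote-write.
        expr: |
          sum(rate({job="nginx"}[1m]))
```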
C
The current version in v2.3 only buffers the samples in memory before remote-writing them off to Prometheus, and the new version will support a write-ahead log. This reuses a lot of code from Prometheus's write-ahead log, and it will result in more durable storage for those samples: if the ruler crashes at any point in time, you won't lose the samples that were evaluated by those recording rules. Go ahead, John Kristoff.
D
It was about the out-of-order: now that it's working, does it mean that we can try to fill up the chunks to the maximum? Which is, I guess, better, instead of having a lot of chunks, which in my case are usually really small, to try to fill them up. And the opposite: can it produce some other issue if I try to put a lot of logs into the same chunk?
B
Yeah, the out-of-order tolerance actually allows you to make more efficient chunks now, because you don't have to fight between ordering and cardinality as much, which is basically exactly what you're describing. Previously, people would be incentivized to create extra labels, effectively, to ensure that the ordering constraint wasn't broken.
B
But now that that's not as strict anymore, you can generally remove labels, and we're actually kind of in the process of that: there's still an outstanding to-do item to remove a couple of labels from some of our default Promtail configs as well, now that this is a new feature. So you're exactly right: you can definitely do that, and it should help make the rest of Loki more efficient by reducing your index size and reducing the amount of things that you need to merge at query time.
D
Okay, because now that I don't have the out-of-order issue, I see another issue, where I seem to actually force everything into the same stream and really fill them up. I can't remember the exact issue, but there is a one-megabyte size per stream, and it seemed that in some cases I fill up this one-megabyte size.
B
Yeah, I guess there's not a strict science to when you add labels to split streams up or remove them. We generally try to prefer fewer streams that are more filled, and so if you're actually filling up your chunks, and you're flushing a lot of your chunks because they're full, that's actually usually a really good sign. It can be a problem in certain situations, like if you tried to log everything into one stream.
B
That would obviously not be very ideal. So if you're pretty consistently hitting chunks being flushed due to size, but you're not experiencing any bad effects from that, then that's great. Bad effects could be something like having a couple of ingesters tip over if you tried to log everything to one stream, for instance, because you would only be split across a replication-factor number of ingesters.
B
That is also largely going to be less of a problem now, because we have per-stream rate limits, which are being introduced alongside the out-of-order tolerance to basically prevent people from accidentally tipping their clusters over. So as long as you're not being rate limited and you're flushing chunks, you're in a really good spot.
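A minimal sketch of how those per-stream limits are typically expressed in the per-tenant limits block; the 2 MB figure quoted later in the call and the burst value here are illustrative, so treat the exact numbers (and key names) as assumptions to confirm against the 2.4 docs:

```yaml
# Hypothetical per-stream rate limit settings (sketch, not authoritative).
limits_config:
  per_stream_rate_limit: 2MB         # sustained bytes/sec allowed per stream
  per_stream_rate_limit_burst: 6MB   # short bursts above the sustained rate
```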
D
Okay, but what happens when I hit that? Actually, I don't have the error in mind, but when I hit this limit, on the client side, does it mean that it needs to retry, or is the log just discarded, as before?
A
I made a couple of notes there. Yeah, the per-stream rate limit is new and, as Owen said, what we've discovered, and what John Kristoff is talking about, is that you can remove labels and combine streams, and that's good, because that will help you fill bigger chunks and it can be helpful to reduce cardinality.
A
I put an example in here: there are some cases where we have pods with multiple containers, and the container is a label, and it has to be, because it sources from separate files. But a lot of the time we never search for the separate container logs, so we're going to remove the container label from some of our workloads and just combine the multiple containers into one stream per pod. That may not fit for everybody, but it does for some of our use cases, so it would help consolidate streams a bit.
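As a concrete illustration of dropping the container label in the agent, here is a sketch of a Promtail relabeling step. The job name is hypothetical, it assumes an earlier relabel rule already created a `container` label, and whether dropping it is appropriate depends entirely on your own query patterns:

```yaml
# Sketch: drop the `container` label so all containers in a pod share one stream.
scrape_configs:
  - job_name: kubernetes-pods        # hypothetical job name
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # ...usual pod/namespace/container relabeling would go here...
      - action: labeldrop
        regex: container             # remove the per-container label
```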
A
But when you continue to consolidate streams, you could find yourself in the opposite situation, where you have a stream that's sending multiple megabytes a second, and that could be too much for an ingester. So I think we put the default at two megabytes per second. Is that right?
A
Yeah, so we have that, and there's a burst bigger than that. So I guess, you know, be aware of that if you're rolling out, and you can adjust that limit up; it's just there, like most limits, to prevent your cluster from tipping over when things get out of hand. And yeah, the other thing I didn't mention, or you mentioned, Owen, but I wanted to re-mention, is that we're going to change this a bit too, I think.
A
But the way that ordering works is: if you send something at the current time, you can also send something that's older than that, up to the max chunk age divided by two. So you can't send infinitely variable timestamps, because it just becomes unrealistic for Loki to sort, order, and store those; your chunk would have a start and end time that was huge, and it would not play so well in the query path. So you can still load older data.
A
It's a differential: for any given log stream, take the most recent sample that it has received, and then nothing can be older than that minus max chunk age divided by two. We're still working on how to make that a little bit easier to understand, because I'm not sure that I just did a very good job there. But all right, any other questions on out-of-order, or feedback, or thoughts?
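To make that acceptance window concrete, here is a small worked sketch under assumed settings; `max_chunk_age` is an ingester setting, but the 2h value and the example timestamps are made up for illustration:

```yaml
# Sketch: how the out-of-order acceptance window falls out of max_chunk_age.
ingester:
  max_chunk_age: 2h    # assumed value; tolerance = max_chunk_age / 2 = 1h
# If the newest sample already accepted for a stream is at 12:00,
# entries timestamped back to 11:00 are still accepted; anything
# older than that is rejected as too far out of order.
```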
A
Nice. Danny, I think we basically got through your updates. Is there anything else on...?
C
Recording rules, yeah. Ironically, the question was out of order. Yeah, no, all good, I'm done. Nice.
A
So yeah, recording rules: super exciting. This implementation is really good, Danny. Like you were saying yesterday, the amount of code we didn't have to write for this is really nice; it largely uses the Prometheus code for the write-ahead log, and for reading from the write-ahead log, and a lot of the durability. So this is going to be a pretty reliable way to get metrics out of logs. Very excited. All right, what do I have next? Index gateway.
A
I don't remember how much we've talked about the index gateway, so I put on here what the index gateway is. The boltdb-shipper data store uses BoltDB, no tricks there, and BoltDB requires memory-mapped files in order to work. What that had created is a situation where your queriers now needed to have access to persistent disk.
A
So the index gateway is a new component that you can optionally run which handles the management of the BoltDB files themselves. When you run the index gateway, you can provide the queriers and rulers with an address to it, and then, instead of downloading the BoltDB files themselves, they just go query the index gateway. The index gateway effectively acts as a remote BoltDB, so it's like BoltDB over HTTP.
A
Maybe it's gRPC, actually, not HTTP. But so now you can run a few of them. They're not terribly sophisticated yet, so they don't do any kind of sharding or the like. So we run, you know, two or three; you want two for high availability, and you might want more if you have a lot of index query load, so that they can share that load. But they all end up downloading the same files, basically making the index data available to the queriers without the queriers having to download those files, so that can save you on a bunch of operations-type stuff. I don't believe this has made it into the distributed Helm chart yet; it is in our jsonnet. It should be pretty straightforward: you just run the thing, and there are some configs, similar, I believe, to the ones that apply to the querier, about how much data for it to download ahead of time.
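A rough sketch of how a querier or ruler might be pointed at an index gateway; the `index_gateway_client` address key under the boltdb-shipper config and the store name are my best recollection of this setup, so verify the exact names and the service address for your version:

```yaml
# Sketch: point queriers/rulers at a remote index gateway instead of
# having each of them download boltdb-shipper index files locally.
storage_config:
  boltdb_shipper:
    shared_store: s3                     # wherever the index files live
    index_gateway_client:
      # gRPC address of the index-gateway service (assumed key name/address).
      server_address: dns:///loki-index-gateway:9095
```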
A
Oh yeah, I think so. I don't think any of us have updated the Helm chart yet; largely the community maintains the Helm chart. Our production workload is jsonnet-based, so that usually gets more attention from us, but at some point we'll probably circle back around, if nobody in the community beats us to it.
B
Yeah, so the query scheduler is a new component in Loki. This is something that we've pulled in from Cortex, which is our upstream dependency, and it basically breaks apart what was formerly known as the query frontend into two components. The query frontend still exists, but when running with the query scheduler, the query frontend is horizontally scalable and is only really responsible for query planning for requests and for merging responses.
B
And then the part that maintained the per-tenant queues is now split out into a component called the query scheduler. We suggest that you run two of these, but you don't really need to run any more than that for any reasonable cluster size; it holds the in-process or in-flight requests and keeps per-tenant queues for them.
A
So everybody gets a queue, and you can control how much goes into a queue. We run two of them so that there's high availability; however, they don't communicate with each other about what's in the queue. And so what would happen is, if you tried to run three or four or five, you'd essentially add three or four or five queues, which kind of increases the amount that anyone can put in at the same time, because they have access to more queues as the traffic is round-robined. So that's not terribly desirable, and this solves that problem.
A
By separating out the queuing element into a thing you can just run a pair of, you can then add multiple frontends, which can be advantageous because, like Owen put in here, they do the merging part of queries. So sometimes they handle a lot of data, depending on the types of queries and whether you have blocks of active query traffic and tenants, and you might want to run more of them.
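Here is a sketch of the wiring being described, with the scheduler as its own component and the frontends and queriers pointed at it. The key names (`frontend.scheduler_address`, `frontend_worker.scheduler_address`, and the per-tenant queue-depth knob) match my recollection of the Cortex-derived config, but treat them and the addresses as assumptions:

```yaml
# Sketch: run the scheduler separately; frontends and queriers connect to it.
frontend:
  scheduler_address: loki-query-scheduler:9095   # frontend enqueues via the scheduler
frontend_worker:
  scheduler_address: loki-query-scheduler:9095   # queriers pull work from the scheduler
query_scheduler:
  max_outstanding_requests_per_tenant: 256       # assumed knob for per-tenant queue depth
```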
A
I don't know how best to help there; you can probably add a comment to this doc, or just go into the public Slack.
A
That would probably be a good way to do it. And I put a note in here, because I don't think I've merged this PR yet; I'm still kind of figuring out the exact verbiage we want to use. But the stale bot that we use, I just wanted to mention quickly why we use it, and the change to the message, and the idea behind it. So stale bots are kind of an awful experience for anybody that's ever come across one.
A
You know, you find yourself having a question or a problem, or even opening a PR, and if nobody looks at it in, in our case, 30 days, the stale bot starts coming after you, telling you it wants to close the ticket. We use it because there are a couple of instances where it's actually very helpful for us, primarily when someone that created an issue or PR no longer maintains it or supports it, or is sort of not responsive. So it simplifies things.
A
It kind of takes care of us having to go find somebody and ask, and maybe even nag, to see if, you know, it's still broken or whatever. So it's sort of nice for cleaning up. However, I do want to balance that with the sort of bad experience it creates, so the stale bot has a nicer message now. But what that message is really trying to say is this:
A
An issue being closed doesn't mean we're not going to fix it or that we don't care about it. We do try to go back through; when we look at issues, we sort them based on the thumbs-up on them. That's the best way to get something attention: thumbs-up an issue. If issues have thumbs-up and they have the stale label, we can filter on that.
A
So we can find things that were closed by the stale bot and look to see if they're still relevant. And there are two other labels that we use: a keepalive label, if something is worth having the stale bot ignore, right, like it's an important enough issue, or enough people ask for it, or it's something we're going to work on very soon; and then another one we introduced called revivable.
A
The idea behind the revivable label is this: we close issues sometimes just because it's not likely it's something we're going to fix in the short term, or it's an interesting idea and we're not sure what to do with it, or it just falls into that category where it doesn't quite make sense to leave it open, because it adds to the open issue count and makes it hard for us to get a good idea of how much stuff we actually have to deal with. So we'll close something and mark it as revivable to indicate that there's nothing wrong with the idea or the problem.
A
It's just something we aren't going to be able to deal with in the short term, so go add thumbs-up to it and we'll sort by those too. That's my two cents on the stale bot. So that's all we got on the agenda. Does anybody have anything else?
A
We are running it, so, okay, as of last week all of our production environments have it. So yeah, it is still marked as either beta or experimental, just because we haven't quite figured out how to mark features consistently yet. But it will make it in; I don't know if it'll be in 2.4 or the following release as, like, officially released, but we are running it, we are using it.
E
I picked it up and just, I think, changed the image for our compactor, if I remember correctly, and then I changed the image for the rest of them. I thought the rest of the components could run the official image, and I picked a build, I can't remember right now what the tag was, and put that for the compactor. And I was getting some error messages about some DB name or something, but I got distracted and didn't spend much time on getting to the bottom of it.
E
Okay, but if you are running it, for sure it is working; I need to sort out what's going on with our environment. And there is nothing about how new the data has to be, right? Because we have data from way long ago, where we just kept updating Loki itself and kept the data, but all of it, I think, is on schema v11.
A
Okay, yeah, no, there shouldn't be. The only thing I can say is, I think we only implemented support for the compactor-based retention with the boltdb-shipper index type. So if you had a setup that was using, like, Cassandra or DynamoDB, the compactor-based retention won't apply to them. I don't remember if it warns or errors; it just won't help you with retention on those, so you have to be using the boltdb-shipper type.
A
Yeah, so the compactor-based retention: the reason I specify that is, previously retention was handled by the table manager. You don't actually need a table manager anymore; we need to get that into a more public sort of documentation. But the table manager historically was mostly used for stores like Dynamo and Cassandra, where we would have to go in and manually create tables and things. Now the boltdb-shipper manages its own tables.
A
So then the table manager was just used for retention, but it was only ever used for retention on the index; you always had to do retention on the objects within the object stores themselves, with, like, TTLs. The compactor-based retention now goes through the index, and it's like a two-stage process: it'll mark a bunch of things based on configuration that you provide, which can be just global retention for an entire cluster, or on a per-tenant, per-stream basis.
A
So you can say, for this tenant I want these label matchers to have 30 days, these to have 90 days, these to have two years. It will go through and read those rules and read the index and mark chunks for deletion, and there's a separate file it stores for that; and then 24 hours later it will come back through, read that file, and go through and delete all of the chunks that were marked for deletion.
A
So it's kind of a mark-and-sweep type operation, and that works, like I said, against the BoltDB store, and should allow you to have whatever retention you would like on a per-stream and per-tenant basis.
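For readers following along, here is a sketch of the kind of configuration being described, with a global default plus per-stream overrides. The key names (`retention_enabled`, `retention_period`, `retention_stream`) reflect the compactor-based retention as I understand it, and the periods, selectors, and delete delay are illustrative assumptions:

```yaml
# Sketch: compactor-based retention with global and per-stream overrides.
compactor:
  retention_enabled: true
  retention_delete_delay: 24h        # marked chunks are swept after this delay
limits_config:
  retention_period: 90d              # global default for the tenant/cluster
  retention_stream:
    - selector: '{namespace="dev"}'
      priority: 1
      period: 30d                    # shorter retention for this label matcher
    - selector: '{namespace="audit"}'
      priority: 2
      period: 2y
```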
A
And that's, I think, a reasonable approach. Oh yeah, nice, thanks for sharing some docs. Nice. Thanks for the question.
D
You said 24 hours; is it not like the two-hour delay when you put something in the queue and wait, so that if you change your mind about something you put in the delayed queue, you can still do something about it? Is it two hours or 24 hours?
A
It's 24 hours. But, so, technically speaking you could... it's not like we designed support for being able to change your mind on retention, but you could. So one of the things I would recommend everyone do is:
A
Turn on revision history, object versioning, on your index files; you can on the chunks too, if you would like. The index is more important, and it's smaller, so it's not going to cost as much. But you technically could, between when that file is created that marks the chunks for deletion and when they're actually deleted, go save them or delete that file.
A
However, the index, I believe, is already pruned, so you wouldn't easily be able to query those chunks without recovering an index from before they were pruned; the entries are removed from the index and then put in a separate file, to then be removed from the object store, if I remember correctly. The experts on this are not on the call, though, to keep me honest if I'm wrong, but I believe that's how it works. So what I would say is, yeah:
A
If you change your mind, you can... I don't know if you can cancel it through the API, but you can definitely stop a delete request from happening if you decided to change your mind before 24 hours. But when it comes to retention, it might be possible, but it's not something we've documented or really tested or experimented with. Okay, but yeah, just in general, having a revision history, doing versioning on the object store, is worth it because it doesn't cost much.
A
Well, thanks. Thanks, John Kristoff and Massage, for joining; we love having community folks join. I've been experimenting with Twitter, you know, so if you're following me on Twitter, I've been joking a lot the last 24 hours, but I'm trying to get more folks, a little more outreach, to the call. We'd love to hear about how people are doing with Loki, good or bad. You know, we're going to cut the bad stuff out of the recording, but we'd love to hear about it anyway.
D
Very happy to hear that. We don't have a large load, but so far I'm happy, yeah. But this week I met someone, I think in one of the channels, and it was the opposite: he said that ELK had no issues, but he had a lot.
D
My colleague with ELK... they have ELK managed by Amazon. It's always full, they always need to re-manage the index and everything, and it seems a nightmare. And so far their Loki, on a Minikube, is running at the moment; it's running fine.
A
Yeah, I would only assume that whoever that was just doesn't know what they're talking about. But joking aside, you know, Elastic and Loki do sort of solve some different use cases. Like, if you need a search engine, Elastic is a search engine, right, and it's a killer search engine. But if you want to store logs inexpensively... and I will say that the operator's experience for Loki is still not nearly where we want it to be yet; documentation is sparse.
A
And we don't have a lot of guides around scaling. We do want to solve this problem; we're not intentionally not solving it. It's just, you know, we're a team of... a team of eight now. So welcome, Jordan; Jordan joined the Loki squad this week. And Karen is our documentarian, and we're getting her ramped up. So it's coming; the Loki operator's experience will get better, specifically.
D
Actually, are you planning... what will happen to the plugin? Because, I mean, the Fluent Bit plugin: at the moment there are two plugins, so there is the Go one, which you wrote, and the Fluent Bit one, which the guy from Fluent Bit wrote. But the Fluent Bit one, at the moment, is crashing sometimes for me, so I'm still using the Go one. But, I mean, will you drop support in the future?
D
I know that that's what is planned, but at the moment, for me, it's the one which is working much better, maybe because you have implemented the Promtail code inside it.
B
I think long term the idea is to have less rather than more, because we don't want people to have to understand a bunch of potentially nuanced trade-offs in agent choice and plugin choice and all that sort of thing. I know we have wanted to deprecate the one that we originally wrote, which was largely built around
B
some of, you know, Loki's own constraints, which are now kind of disappearing. And so I would very much like to see us just use the Fluent-provided variants rather than have to use our own as well, once we have a GA release. And ultimately the idea is not just to have the ability to toggle on out-of-order writes, or unordered writes, but to make that the default mode of operation, and hopefully once we get to that point, a lot of the problems which have historically plagued people getting started with Loki will disappear.
A
I would say also the maintenance burden on that plugin is almost zero, so there's not much incentive for us to get rid of it. So I don't see it really going anywhere anytime soon.
A
I mean, I agree 100%. The trouble with all of us, for Fluent Bit, is that none of us are C programmers, so it's hard for us to go in and make the changes necessary to that code base. I 100% agree with Owen: if everyone in the community, everyone using it, got exactly what they needed out of Fluent Bit, it would just make our lives easier to not have the plugin. But that being said, I don't know that we've had to merge any PRs for it
A
that I can remember. I mean, its implementation is simple enough. The conversation might become different if we constantly had to do maintenance on it while there were still people using it, like, how can we get rid of it; but I don't know, I guess I don't worry about it going away anytime too soon.
A
Yeah, I mean, it uses a lot of the same code. The things that we run into a lot, like the reasons people tend to stick with Fluent Bit, and again, we might solve these problems to where people can just use Promtail: the ability to send logs to multiple places that are not Loki is the number one thing, right? And among those, people really like the ability to send, like, just compressed logs to an object store for archival. So we've talked about doing that in Promtail.
A
We might do that in Promtail, just so that people have that option and they can still use Promtail. Generally the best experience with Loki should be with Promtail, because it's the one we build. But, you know, lots of folks use Fluent Bit and Fluentd, and, like I said, for those reasons: they want to send logs to multiple places, or they've got an existing infrastructure that they don't want to tear out.
A
So, you know, we're going to make that a lot easier now that the unordered writes work, so retries are not such a burden. And I know that Eduardo, with Fluent Bit, tried really hard with the implementation of the native C plugin in Fluent Bit to handle Loki's ordering constraints, so maybe we can circle back around with him now, and he can solve it the way he had with others before and improve that experience too. So, you know, we're not super strongly opinionated on the agent.
A
It's just that there are some advantages to us having the coupling between our agent and Loki. But the API for Loki is simple, and people use lots of ways to send logs to it.
A
Thanks, everybody, and we'll see you next month.