From YouTube: Loki Community Meeting 2021-08-05
B
Anyway, that's for the record, Kelvin. I have a very urgent appointment with a fire pit in the sunshine, so we want to make the full call, all right?
A
Yeah, let's get into it. If anybody wants to add anything to the agenda to talk about, please do; happy to talk about anything. Otherwise we're gonna dig into 2.3, which will be out tomorrow.
A
We've talked about the pattern parser a bit before. Am I sharing my screen, or am I... no, I'm not? Let's do that.

So pattern parsing allows you to take somewhat structured log lines, like Apache common log format, where things are in a known position and usually delimited by something like a space, but, you know, aren't necessarily consistent enough or structured enough to use something like a JSON-format-type parser. And the pattern parser is really nice because it's much faster: it actually ends up being faster than JSON and faster than regex. Regex is the slowest of Loki's parsers; going from fastest to slowest, it should be pattern first.
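For reference, a pattern-parser query over Apache common log format might look like the following; the `{job="apache"}` selector and the capture names here are illustrative, not from the talk.

```
{job="apache"}
  | pattern `<ip> - <user> [<_>] "<method> <path> <_>" <status> <size>`
  | status >= 400
```

Each `<name>` capture becomes a label you can filter or aggregate on, and `<_>` discards a field you don't care about.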
A
Let's see, custom retention; that's been a hot topic for a long time. Loki's retention has always been left as an exercise to the object store implementation. Aside from the file system (the file system table manager would do some deletes for you), it's always just been a big hammer: everything gets deleted after whatever date, or whatever sort of rules you set up on the object store.
A
So
now
in
2.3
loki
will
let
you
configure.
Let's
find
a
config
example
here,
we'll
let
you
configure
custom
retention
on
a
stream
selector
basis
here
we
go.
This
is
what
I'm
looking
for,
so
you
can
say
on
a
given
stream
to
specify
a
label
selector
how
long
you
want
retention
to
be.
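A minimal sketch of what that configuration looks like, based on the 2.3 retention docs; the paths, periods, and the example selector are illustrative.

```yaml
compactor:
  working_directory: /loki/compactor
  shared_store: s3
  retention_enabled: true          # compactor-driven retention

limits_config:
  retention_period: 744h           # global default, roughly 31 days
  retention_stream:
    - selector: '{namespace="dev"}'
      priority: 1
      period: 24h                  # override for streams matching the selector
```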
A
You
know,
hours
days,
weeks,
months
years,
if
you'd
like
there's
no
real
penalty
for
very
long-term
retention
in
loki,
because
we
just
ship
it
all
off
to
an
object,
store
and
let
it
hang
out
so
the
implementation,
basically
the
compactor
component.
So
one
caveat,
this
does
only
work
with
the
volt
db.
A
Shipper
index
type,
really
we're,
I
would
say,
putting
no
effort
into
the
maintenance
of
the
separate
index
types
like
cassandra
or
dynamodb,
I
mean
not
to
say
we
won't
fix
bugs
or
things,
but
we're
not
really
devoting
our
effort
into
changing
or
improving
those
and
as
such,
most
new
features
won't
probably
be
ported
to
them
at
all.
So
if
you
haven't
moved
to
volt
db
shipper,
I
recommend
it
and
yeah
with
that.
A
So the compactor component, which is basically an index compactor for boltdb-shipper, is now taking charge of scanning the index.
A
We are flagging this as a beta feature; well, technically it says experimental. We're using it, we've been using it, but we don't have as much time on this as we'd like, to the point where we want to make sure we've worked out all the bugs. But I wouldn't say you need to be overly concerned. I would recommend (and we do this as well; it's just a general rule of caution and good advice, especially for index files) enabling object versioning in your object store; it's not a bad idea.

It helps protect you against, you know, accidental configs, and it might protect you against a bug. It's not a bad practice, and generally you can configure it so it doesn't add a significant amount of cost: you really only want to keep like one previous version, just in case something got deleted or mangled that you care about. All right, anything I missed, fellow Loki folks, on retention?
A
Nice. Deletes kind of goes hand in hand; we implemented both of these at the same time, largely to be able to do compliance-related requests. So we've been sort of cleverly managing GDPR requests by only having 30 days retention: GDPR basically allows you to take 30 days to complete a request. So by opening that window, for us and our hosted purposes, to store longer, we needed a way to do deletes on request.
A
It is currently implemented in that sort of scope: you submit a delete request, it goes into a queue, and then 24 hours later it will be processed from that queue and the logs will be deleted. So it is not up-to-the-second type deletion. It is possible to do that; we've actually had some of the code in place for it.
A
It
gets
a
bit
tricky
because
there's
several
caching
layers
within
loki
and
you
end
up
having
to
do
like
some
query
time
filtering
and
you
know
cache
eviction
and
things.
So
we
just
didn't
opted
to
not
introduce
that
complexity,
yet
it
you
know
it
could
be
added
if
up
to
the
second,
you
know,
and
up
to
the
millisecond
type
deletes
are
important.
A
I don't know where you can go comment on that; maybe create an issue if there's not one. But yes, there is a way to do deletes now from Loki, and it works at the stream level. So you can't do, like, a filter expression; it would basically say, with this label selector and this time range, I would like to delete all of the logs.
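As a sketch of the shape of such a request, it's an HTTP call against the compactor using the experimental deletion API; the address, selector, and timestamps below are made up, and the query value should be URL-encoded in practice.

```
curl -X POST \
  'http://<compactor>/loki/api/v1/delete?query={app="payments"}&start=1627776000&end=1627862400'
```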
A
Let's talk about recording rules. Alerting rules went into Loki quite some time ago, actually; you know, maybe almost a year ago at this point. And recording rules work in a very, very similar fashion: much like alerting rules, a recording rule will take a query and sort of just execute it against live data. In the alerting case it then generates an alert if conditions are met, and in the recording rule case it does something really fun with Loki.
A
You can take a query executed against your logs, turn it into metrics, and then remote write that to any Prometheus remote-write-compatible endpoint. As of Prometheus 2.2x (within the last couple of releases) there's a flag that actually allows Prometheus to accept remote write pushes, so you could send from Loki to Prometheus or, you know, Thanos, Cortex, any of the other big remote write backends. So now you can set up some automatic recording rules to turn your metrics... sorry, logs, into metrics.
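Concretely, and as a rough sketch only (the rule name, selector, and endpoint URL are illustrative), a recording rule file plus the ruler's remote-write client looks something like this:

```yaml
# A rule group the ruler evaluates on a schedule; the expr is ordinary LogQL.
groups:
  - name: logs-to-metrics
    interval: 1m
    rules:
      - record: app:payments_errors:rate5m
        expr: sum(rate({app="payments"} |= "error" [5m]))
```

```yaml
# Ruler config: ship the resulting samples to a Prometheus-compatible endpoint.
ruler:
  remote_write:
    enabled: true
    client:
      url: http://prometheus:9090/api/v1/write
```

On the receiving side, recent Prometheus versions accept these pushes when started with the `--enable-feature=remote-write-receiver` flag, which is presumably the flag being alluded to here.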
C
That one's really exciting; I think it really helps bridge the gap between using Loki as a metric source in an ad hoc fashion and actually integrating it into an existing pipeline where you have Prometheus or some Prometheus-compatible backend.

And so we do this internally for things like all sorts of pertinent read-path query metrics that we ultimately generate from the logs themselves, and then we write them as metrics back into Cortex (in our case, but it could be anything), and that helps us integrate that into our existing Prometheus data sources.
A
You could, you know, take your application that doesn't allow instrumentation with metrics, and write alerting or recording rules to then send to a metric store which has like 12 or 13 months retention. So it's another way to get better metrics visibility out of an application that might not have the ability to do that directly.
A
...that there just will be Loki, right? Like, that's where we're heading here. So yeah, don't tell anybody. Ingester query sharding is one of my favorite things in it. There's a couple things about these next two that are important: they only apply to people that have a Loki setup which is using a query frontend.
A
We basically said at the time, like, you know, there's been enough complexity; we had to put a milestone somewhere. And it's been a while, and luckily we were able to kind of circle back around, finally, to revisit that last bit: being able to shard queries that go to the ingesters. And this matters a lot when you have high-volume log streams. You're typically flushing chunks every few seconds or a few minutes; if you have log streams that are generating a few hundred KB or even a megabyte a second, it's been somewhat painful to query those, because the only parallelization you get on them is the split-by. At best we tend to run like a 15-minute split-by interval.
A
I think we've actually increased that to 30 minutes because of this now. So you would only be able to split a query into, you know, 15-minute chunks, and that could mean a single querier (or ingester, I guess, but querier mainly) doing an entire 15-minute chunk that could have been maybe gigabytes and gigabytes of log data. So there was no way to really do more parallelization until this was added. Now that can be split again by our sharding factor in the index, 16.
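For orientation, both knobs live in the frontend's query-range config; a sketch, with values matching what's described here rather than recommendations:

```yaml
query_range:
  split_queries_by_interval: 30m        # each query is first split by time
  parallelise_shardable_queries: true   # then each split is sharded further
```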
C
I think this is really cool for two reasons. One being that recent data tends to be the data we care about the most, and historically, as Ed mentioned, this was something where we could not take advantage of the same parallelization, the same optimizations; now this enables us to do so. And the other part is that this data hasn't been sent to storage yet, so it's actually readily accessible.
C
That means we're actually seeing benefits beyond what we saw in the sharding implementation that touches storage, because we also get to filter out some bits that would otherwise require us to pull data from storage, and so ultimately we get to see some pretty impressive numbers coming out of that. You really will start to see benefits here once you hit a certain scale; as a general rule of thumb, I would probably not enable sharding at all unless you have maybe a minimum of 10 queriers running, but your mileage may vary.
A
Yeah, the guidance on our part here is a little bit shallow, or, maybe another way to say that, doesn't really exist at all. But there's a couple of things at play with parallelization, like Owen said. Having the ability to do the querying is one part, and two is making sure you have the resources. So in the frontend worker config there's a section called parallelism, and that configures how much one individual querier will do; we tend to map that to CPU cores. And then there's... boy, I can't remember it off the top of my head... a frontend config that controls how much parallelism is allowed to be done at once.
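A sketch of the two settings; `frontend_worker.parallelism` is the one named above, and `limits_config.max_query_parallelism` is a guess at the one he can't recall, so treat that second name as an assumption.

```yaml
frontend_worker:
  parallelism: 8             # per-querier concurrency; often mapped to CPU cores

limits_config:
  max_query_parallelism: 32  # assumed setting: cap on pieces of one query in flight
```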
A
So there's some trade-offs here. You can make that number really big, but on certain queries it would end up executing a massive amount of querying that isn't used. So, for log queries that only return a thousand lines: if you tell it you want to do, I don't know, 500 or 1000 or 1500 (what do we call those little query execution units?) in parallel, you'll end up querying a ton of data that you don't need, because you might find your thousand log lines in the first result. But yes: query frontend; make sure you have enough querier capacity for what your configs are, and enjoy that one.
A
And this one, I think, is to me... the instant query parallelization is more exciting in some ways, because this is something that Prometheus can't do. One unique advantage of Loki counting events is that we don't have a concept of counter resets. So in, like, Cortex, for example, we can't parallelize instant queries, because you need to look at all of the data from the view of one querier to look for counter resets, to be able to properly increment that counter. In Loki we don't have that concept; we just count events, which means it's actually pretty easy for us to just break that work up.

So I don't know that I have an example of where this would really matter, you know, one versus the other, but like I said, in our quest for global domination, plus one to Loki here, because instant query parallelization can be really, really handy, especially for certain types of queries, like topk queries where you want to look at the top k over a period of time.
A
You can do that now. Range-query topk's always give weird results, and if you're familiar with this you already know why. But if you're not: a range query is basically just a bunch of instant queries executed at a step, and each one of those executions produces its own separate topk, and then the results get merged; you ask for a top 10 and you might get back like 30 results. So now that gets split and sharded based on the same configs that you have for your, you know, query-range-type queries.
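For example, the kind of query that benefits: a topk over a long window, issued as a single instant query at one evaluation time instead of as a stepped range query (the selector is illustrative).

```
topk(10, sum by (path) (count_over_time({app="nginx"} [24h])))
```

Run this way, it produces exactly one top 10, rather than a merge of per-step top 10s.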
C
We didn't fully wire it up before, because a lot of the initial bits that we were really worried about were range queries executing really slowly. So there's kind of a trend here over the past however-many months where we're going back and addressing some of these... maybe not low-hanging fruit, but things that we were aware of... and really, you know, connecting all the dots there.
A
And that's largely in part because people like Loki and the product is growing, and as a result we've been able to grow the team. So Callum is a recent joiner; Karsten, who did the instant query work, is a recent joiner; and I think we have six people full time on Loki now, which is fantastic.
A
My
my
time
is
now
split
more
into
doing
management
stuff
so
and
in
you
know
talking
until
you
get
tired
of
hearing
me
talk,
so
that's
very
exciting,
so
very
excited
to
see
growth
and
improved
support
for
loki,
because
that
does
let
us
go
back
through
to
close
the
loop
on
a
lot
of
this
fun
stuff
right,
like
there's
a
lot
of
ways
that
we
can
continue
to
make
loki
way
way.
Cooler.
A
In fact, I added it up: 280 PRs in the release between 2.2.1 and 2.3, so just a fantastic amount of work has gone into that release. These are the sort of most exciting major features, but there were numerous performance improvements across all of our parsers and Loki in general, in terms of, you know, memory and CPU consumption (granted that enabling more splitting and sharding results in more CPU usage, and faster results). But very, very excited, and that's the last one that I had. So, anybody have any questions on features or 2.3?
A
So
2.4
I'll
segue
owen
to
you
to
talk
about
out
of
order,
because
2.4
will
be
the
first
release
to
have
a
what
will
likely
be
beta,
still
version
of
loki
supporting
out
of
order
rights.
Sorry,
it's
not
going
to
make
it
into
2.3
yeah.
C
So
sorry
to
everyone,
that's
following
along
the
out
of
order
issue
where
I
said
I
thought
it
would
be
experimental
in
2.3
this
morning.
Did
you
and
then
quickly
realized
that
we
were
cutting
our
release
from
a
different
point
than
I
thought
we
were,
but
yes,
so,
the
past
few
months
or
month?
I
guess
I
guess
not.
C
Even
that
long
has
been,
I've
been
sequestered
away,
a
lot
working
on
out
of
ordering,
which
is
something
that
ed
and
I
started
putting
together
about
a
year
ago,
and
it
will
not
entirely
eliminate
the
ordering
constraint
loki.
But
what
it
will
do
is
we'll
kind
of
give
you
a
variable
validity
window,
which
should
be
way
more
than
enough,
for
you
know,
99
of
the
cases
where
we
see
so
this
should
really
help
people
who
are
using.
C
You
know
non-prom
tail,
based
agents,
people
who
want
to
ingest
things
like,
for
instance,
high
cardinality
or
ephemeral
data
in
the
loki
so
trying
to
ingest
aws,
lambda
logs
or
load
balancers.
This
sort
of
thing,
historically,
you
kind
of
had
to
fight
between
label
cardinality
and
the
ordering
constraint
until
you'd
be
incentivized
to
do
things
like
create
more
labels
to
get
outside
of
the
ordering
constraint
or
to
do
things
like
ingestion
time,
stamping,
where
you'd
actually
overwrite
the
timestamp
of
a
log
that
was
reported
by
a
subsidiary
service.
C
Those
are
things
which
we
realize
are
not
ideal,
and
so
what
this
work
does
ultimately
is
it
allows
the
ingestors
the
component,
which
does
a
lot
of
the
heavy
lifting
during
writes,
to
accept
logs
for
a
variable
period
of
time,
and
then
it
will
reorder
them
before
flushing
to
storage,
and
so
this
kind
of
allows
the
rest
of
the
system
to
behave
as
normal,
and
it
was
a
way
for
us
to
kind
of
get
an
isolated
change.
C
That
really
gives
us
an
outsized
benefit
in
terms
of
new
functionality,
so
we're
very
excited
about
it.
C
Out
of
order
rights,
so
I
should
be
able
to
answer
this
with
more
clarity
in
the
future
once
we
have
run
some
more
significant
tests.
However,
a
lot
of
this
is
designed
from
the
ground
up
to
not
to
only
incur
costs
when
needed.
So
if
you're,
if
you
continue
to
ingest
the
bulk
of
your
logs
in
order,
for
instance,
you
should
see
almost
you
know
very
negligible
if
any
performance
changes
we
haven't
found
any
yet
at
all.
C
In
that
regard,
we're
going
to
be
kind
of
monitoring
this,
as
I
said,
and
so
I'm
sure
something
will
pop
up.
But
the
things
I'm
currently
looking
out
for
are
increased
memory.
Usage
in
the
adjusters
is
the
big
one,
but
should
have
some
more
information
later.
A
I
had
it
right,
so
the
debate
on
what
data
loki
will
accept
out
of
order,
I
think,
is
an
interesting
one.
That
was
a
bit
of
a
trade-off
on
a
few
things,
but
I'm
really
happy
with
this
trade-off,
because
loki
will
still
happily
accept
old
data,
so
you
can
have
a
you
know,
a
new
machine
spin
up
or
you
could
sort
of
back
load
old
data,
and
that
works.
Fine,
the
restraint
really
or
the
constraint
really
is
whatever
the
most
recent
timestamp
for
a
stream
is
minus.
A
That
we
use
to
parameterize.
So
what
you
can't
do
is
write
data
from
now
and
then
send
data
from
like
yesterday
into
the
same
stream
at
the
same
time,
but
you
can
write
data
as
timestamps
now
minus
say
two
hours
or
three
hours,
which
is
a
typical
max
junk
age
and
all
of
that
can
be
accepted
and
ordered
and
stored.
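In config terms, this is expected to surface as a per-tenant limit; a sketch assuming the flag ships under the name discussed on the tracking issue, which may change before 2.4:

```yaml
limits_config:
  unordered_writes: true   # assumed name: accept out-of-order lines within the validity window
```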
A
So
I
think
this
covers
you
know
generally,
like
99
of
the
use
cases
that
people
really
care
about
ordering,
which
is
some
amount
of
jitter,
or
you
know,
different
systems,
sending
data
from
the
same
stream
and
that's
it.
So
I
think
that's
that's
the
last
major
feature
for
loki.
I
think
we're
probably
done.
A
I
don't
remember
anyone
requesting
any
other
major
features,
but
look
for
that
in
so
2.4.
I
actually
think
we
will
try
to
do
in.
They
may
say
like
a
month's
time.
I
don't
think
that
we're
gonna,
because
I
do
want
to
get
this
out
in
to
the
hands
of
folks
to
get
the
experience
of
of
community
folks,
if
you're
very
eager
to
do
this.
What's
the
k-56
tag,
you
can
run
that
one.
C
Yeah,
we
there's
a
bunch
of
branches
k
number
the
letter
k
followed
by
a
number
which
roughly
follow
a
lot
of
the
internal
releases
we
cut
inside
at
grafana,
and
they
are
a
more
bleeding
edge
way
to
deploy
loki.
That
is
not
quite
from
the
main,
the
tip
of
the
main
branch
I'm
looking
yeah
k56
does
not
because,
oh
no,
it.
A
We talked about the k releases before. There's a bit of a caveat emptor here, because if we find bugs as we promote things through the process, we will re-release and essentially redeploy, and every once in a while we abandon them, because of, you know, timing and whatnot with being able to get stuff through. So, generally speaking, the most recent k release is maybe not the one you want; like, stay one back. But we continue to sort of automate that process.
C
And so I expect... I checked, I'm doing that next week, and so we should cut it for a few days next week. We were supposed to do k57 this week.

Wasn't going to call you out, but, but yeah.
A
You want k57? I'll leave that; we get one release a week. Okay, thanks Owen, that's super cool. I'm not gonna go through all the lists; I just wanted to copy it up in here, because Grafana 8.1 was released just about an hour and a half ago, and I think all of this stuff made it in there.
A
So, a lot of really awesome stuff. Ivana's not here; she talked about it last week... or last month, rather; go check out the recording. Lots of fun stuff, fixes and improvements, came into Grafana 8.1, so pair that up with 2.3 tomorrow and it should be pretty exciting.
D
You guys will be happy to hear: I've got a guest blog draft for your team to look at, for the Grafana... I'm sorry, for the Loki and Tempo on Fargate. Finally, yeah.
A
I will... like, so, for those that have both found this recording and stayed this long: Jen Villa's on the call. Jen is the product manager for the parts of Loki that we tend to wrap up in our enterprise stuff, but we've combined our forces (are combining our forces) with the goal of making the Loki single binary much easier to run, specifically to be able to run it in an HA fashion with very minimal configs.
A
What am I missing... table manager, which we don't use so much anymore; compactor... so Loki as microservices is many components, and our goal is to hide a lot of that complexity. This is primarily targeted at those one-to-maybe-five-terabyte-a-day type use cases where you want to be able to run Loki simply. We will likely have separate read-path and write-path deployments, but that separation would be the same binary, like what we do now.
A
I'm very, very excited about this, because the real aim here is to close some of that gap: running Loki as a single binary is very easy, but you want to add HA, or you want to get parallelizable queries, and it very quickly becomes a lot more difficult. If you are on Kubernetes and have, you know, the distributed Helm chart that Reinhardt contributed, that makes life way, way easier. We basically want to make it as easy for everyone, especially outside of Kubernetes environments, on VMs, and anywhere.
A
Yeah, the... I don't know... the hope, the real goal, is: I think there's a large portion of people that, you know, fall into that hundreds-of-gigabytes-a-day of logs range, and maybe you have access to Kubernetes and it's not so bad for you to deal with jsonnet or Helm. But we want to kind of simplify that too.
E
The only... when you can give us, you know... kicking it off this week, yeah.
A
It's because I want a Loki release to be exciting. You know, if we did it every week, who would care? Every month it'd be like: another Loki release, boring. Now everybody's excited. I think... I'm excited, yeah.
E
Was
just
gonna
just
add
on
and
say
yeah,
I
think
we're
we're
all
really
excited
about
this
project
yeah
at
grafana,
labs
and
yeah.
It
also
just
kind
of
makes
you
chuckle
the
it's
like
moved
to
microservices,
but
now
in
a
way
you
know
we're
going
away
from
that,
at
least
to
support.
E
You
know
simpler
use
cases,
and
I
think
it
just
you
know
you
got
to
choose
what
works
for
you,
based
on
your
expertise
and
your
deployment
architecture,
and-
and
we
think
this
is
going
to
help
a
lot
of
people
who
might
otherwise
have
kind
of
felt
locked
out.
A
Yep
full
circle,
you
know
it's
the
way
for
tech,
we're
gonna,
we're
back
to
mono,
repos
and
monoliths.
Actually
loki
will
always
run
as
a
micro
service.
That's
how
we
will
always
run
it
there.
There
will
always
be
advantages
to
having
a
micro
microservices
deployment,
but
that
always
does
come
with
increased
complexity,
and
you
know
I
would
say
it's
sort
of
operational
knowledge
and
things
so
so
you'd
be
able
to
run.
You
know
loki
as
this
multiple,
smaller
binary
and
add
micro
services
components
to
it.
A
Yep
and
or
I
I
ran
two
single
binaries
and
I'm
using
a
local
file
system,
and
I
can't
get
my
query
results
or
I
didn't
set
up
a
ring
or
I
didn't
set
up.
You
know
a
shared
like
that.
That
kind
of
stuff
is
what
we're
really
hoping.
You
know
to
hide
the
complexity
of
member
lists
and
the
communication
that
needs
to
happen.
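Today that wiring looks roughly like the following (addresses are illustrative); this is the kind of boilerplate the simplified deployment aims to hide:

```yaml
memberlist:
  join_members:        # peers to gossip with; every Loki instance lists the others
    - loki-1:7946
    - loki-2:7946

ingester:
  lifecycler:
    ring:
      kvstore:
        store: memberlist   # use gossip for the ring instead of Consul or etcd
```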
A
So I think that will be a nice, nice big boost for lots of folks trying to get started with Loki, to get to that step two.
D
I
have
a
since
you
mentioned
memberless,
I
got
a
question
is
tempo
I
know
has
been
fighting
like
some
memberless
issues
and
are
those
the
same
that
loki
has
or
are
they
are
they
different,
and
why
is
there
a
divergence
there.
A
We run a single Consul instance, and there's no real reason for that aside from inertia. So, given that Tempo was, you know, the newest, they ventured out on the memberlist path, and there's been a number of bugs; most all of the bugs I've seen relate to trying to get instances to forget others from the ring, and not all of those bugs are fixed. However, I can tell you that we're going to be working on fixing them, because memberlist is, you know, still by and large, I think, the most requested way; like, very few people seem keen on wanting to run Consul or etcd for this, and we in general don't want to have people have another third-party dependency that they have to set up. So I do feel pretty confident in saying we will work out all those bugs, and we do run Tempo in our clusters now (and have for a long time) on memberlist. So it does work well, but right now it has the... the nice...
E
Yeah, I mean, another interesting data point there: when Jacob did some of our performance benchmarking on the metric side, he used memberlist, and, yeah, he got up to, like, a ring of 500 ingesters and it worked; you know, it was working fine. But, I mean, again, that was a load test, right? It's not like we were trying to kill things and then bring them back, so we wouldn't necessarily have seen, I guess, the things people would see in real life with ingesters getting...

Oh yes; so, not sure if you're just wandering around the streets of Ecuador.
A
Yeah, it's great. Speaking of forgetting unhealthy ingesters: 2.3 will include a flag that lets you auto-forget unhealthy ingesters. So, a much-controverted sort of feature, or way, of Loki, where, you know, things that would leave without exiting properly would stay as unhealthy; it can now be enabled to auto-forget them. And especially with the write-ahead log, it's probably pretty reasonable to do that, especially if you run into it a lot. So at least that's an option now for a lot of people.
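As a sketch, the 2.3 option sits on the ingester config; this is one reading of the changelog, so verify the exact name against the release notes.

```yaml
ingester:
  autoforget_unhealthy: true   # remove ingesters that left the ring uncleanly
```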
D
Well,
you
should
put
that
on
the
feature
list
that
was
like
one
of
the
first
things
I
ever
asked
about
fixing
with
loki
like.
Can
you
just
make
it
forget
automatically.
A
I
just
deflect
back
to
the
cortex
issue.
That
was
super
long,
which
was
like
no,
we
like
it
this
way
and
everybody
it's
yeah.
We
don't
need
to
dig
it
up,
but
the
good
news
is,
you
have
a
choice
now
and
which
path
that
you'd
like
to
take
I'll
just
add
that
in
here
can
now
auto
forget
unhealthy.
A
All
right,
thanks
for
thanks
for
joining
everybody,
stay
stay
tuned
there,
I'll,
probably
tweet
some
stuff.
We
should
have
a
blog
post
going
out
with
this
tomorrow.
We
just
try
to
coordinate
those
so
check
the
blog
check,
github
and
you'll
get
the
details.
I
don't
anticipate
any
upgrade.
I
I
gotta
go
through
and
look
at
that,
but
I
don't
think
anything's
changed
in
a
config
standpoint.
That
really
requires
direct
attention,
so
hopefully
the
2.3,
but
also
it's
been
five
months.
So
there's
quite
a
bit.
I
got
to
look
through
to
make
sure
that's
true.