From YouTube: Loki Community Meeting 2021-11-04
B
Yes, so let me know when you hit record. Have you done?
B
If you haven't registered, there's a Loki 2.4-themed talk at ObservabilityCON on the 10th next week, at 18:10 UTC, and, you know, I'm doing some shameless self-promotion here. It'll be Ivana, myself, and Trevor, and we'll be talking about a lot of the changes that come with the Loki 2.4 release, and then a lot of associated changes on the Grafana side as well, which make it easier to use, easier to interact with, more intuitive, that sort of thing. So we'd love to see you there; feel free to follow the link and register, that sort of thing, yeah.
B
Yeah, I really enjoyed that, thank you. Let's see. So, 2.4: we're basically in the process right now of determining exactly which commit this will be, but by and large everything is merged, and there's going to be a whole host of new features as well as regular improvements. But we can go down the list here.
B
This is probably one of the oldest and most highly upvoted issues of all time in the Loki repo. Basically, it's not super uncommon for people with all sorts of use cases, but particularly if they're using agents that aren't aware of Loki's ordering constraint, or if they're using pre-processing layers...
B
Basically, there can be a bunch of different reasons why it can be hard to get the order just right, and Loki has this requirement that incoming logs must be ingested in timestamp-ascending order. So over the past, what, year, year and a half, we started sitting down and figuring out: well, does this need to be this way? And we ultimately realized that no, it did not.
B
This was historically a constraint we pulled in from Prometheus and Cortex (these are among our upstream dependencies for Loki), and now this should largely just disappear for 99 percent of use cases.
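
For reference, here's a minimal sketch of the relevant setting, assuming Loki 2.4, where out-of-order writes are governed by a per-tenant limit and, per the 2.4 release notes, enabled by default:

```yaml
# Loki config sketch (assumes Loki 2.4): out-of-order writes are
# controlled by a per-tenant limit. In 2.4 it defaults to true, so this
# only needs to be set explicitly if you want to opt back out (false).
limits_config:
  unordered_writes: true
```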
A
Yeah, I'll talk about this. I'm going to tease a little bit, because I want people to come to the talk next week to get more info, but we've been working really hard in the last few months to sort of improve the non-Kubernetes use cases for Loki.
A
I mean, it's probably true across the board, but really, if you're running Loki as a single binary, we've been doing a lot of work to make that work a lot better. So the current limitation of the single binary is that it doesn't run things like the query frontend, so it doesn't do query parallelization.
A
It doesn't horizontally scale super well, and, you know, you just don't have a ton of flexibility. It's either you run that, or, if you want high availability or more scalability, you have to jump to a microservices deployment, and if you're not in Kubernetes, good luck, it'd be very difficult. So that's not going to be the case anymore. We've taken a lot of the features of the microservices deployment and found ways to stuff them into the single binary.
So, out of the gate, it'll do query splitting and sharding and things. On a single instance it's limited to the number of cores the machine has available for parallelization, but it does allow you to sort of add more single-binary instances, to be able to horizontally scale and get high availability. And then we're introducing two new targets, called read and write, which are intended to allow you to separately scale the read and write paths in a simple way.
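
As a rough sketch of what that looks like in practice (the flag and config spellings are from the 2.4-era docs, so verify against your version):

```yaml
# Simple scalable deployment sketch: run the same binary and config
# twice, once per path, and scale each group independently, e.g.
#
#   loki -config.file=loki.yaml -target=write   # ingest path
#   loki -config.file=loki.yaml -target=read    # query path
#
# Equivalently, the target can be set in the config file itself:
target: read
```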
A
Also, so you don't have to set up a query frontend and query scheduler and index gateways and all these other microservice pieces. So I guess I thought I was going to tease it; I feel like that was the whole thing. But also coming hand in hand with this: we're trying to do a lot to simplify the config.
A
I'll, you know, just warn you all on the call here: we're doing our best to make sure that we don't break things (and when I say break, I mean mostly that Loki doesn't start because of a config error), but we are down to the wire and doing our best. It's a bit tricky to sort of honor existing configs but also do things like apply some sane defaults.
A
We've changed a number of defaults based on how we run Loki, so we will get as much of this info as we can into the upgrade guide. But I think, especially if anybody here is keen on trying to help us, we're going to have an image.
A
How would we communicate that? I don't know, I'll think about that while we're on the call. But we're going to have an image we're using soon internally to test stuff, for anybody that has the opportunity to try running it and make sure it works for them with whatever configs they have. So, very excited: it'll be a lot easier to run Loki in the small and medium, and even some small-large and medium-large use cases, outside of Kubernetes, or even in Kubernetes.
B
And thematically it's all about making Loki easier to use. That's one of the things that we've really, really learned over the past year and a half or so: we generally build Loki to run as a single binary, because it's easy, because we can develop locally with it, because it makes sense and you can get started with it. But the gap between that and then running Loki as a horizontally scalable service is pretty large, and so this is kind of our approach at building a middle ground for that.
B
And then out-of-order should just make everyone's use cases way easier, and this is one of the things that we're really intentional about targeting moving forward: making Loki more intuitive, simpler, easier, that sort of thing. And then I can jump into custom retention. But if anyone has any questions, feel free to say something in chat or just interrupt me, and we can do this; I mean, I'll help answer stuff now, or we can do it at the end, and then I'll just go into custom retention.
B
This is the ability now to define retention rules for tenants, and even for streams within tenants. So: Loki's multi-tenanted by default, which begs the question, why isn't there, or why, historically, hasn't there been, different retention rules for different tenants? Because you can really feasibly say: oh, I want my compliance team's logs available for two years, but my dev team only really needs logs for a month or two. And that's reasonable. And so this is basically something that we can do now, and you can break it down further, to say things like:
B
I want my development application logs to be around for a week, my production application logs to be around for a month, and then my audit logs for a year, right: to be able to slice and dice these across tenants and streams as we see fit. And then deletion kind of comes hand in hand there; it'd be really, really hard to release one without the other, especially for things like GDPR concerns, where we have this right to be forgotten.
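
A hedged sketch of what that slicing looks like in config: retention in 2.4 is handled by the compactor, with a tenant-wide default plus per-stream overrides. The selectors and periods below are made-up examples:

```yaml
# Compactor-driven retention sketch (assumes Loki 2.4).
compactor:
  retention_enabled: true

limits_config:
  retention_period: 744h            # tenant-wide default: ~31 days
  retention_stream:
    - selector: '{env="dev"}'       # dev application logs: one week
      priority: 1
      period: 168h
    - selector: '{type="audit"}'    # audit logs: one year
      priority: 2
      period: 8760h
```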
B
So deletion is the new API in Loki where you can basically request that certain logs be deleted; that's picked up, and then there's a cancellation period. So if you accidentally, you know, if you have an oops moment, you have a little bit of time to cancel it. But ultimately that'll help people stay in compliance, and it'll help delete logs when we accidentally ingest a bunch of stuff that we didn't actually mean to, so it helps in cost-reduction efforts as well, and, all in all, it works.
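
A sketch of how that might be wired up, based on the 2.4-era docs. Deletion also runs through the compactor and is gated per tenant; the limit name and the endpoint paths in the comments are best-effort recollections, so verify them against the documentation for your version:

```yaml
# Log-deletion sketch (assumes Loki 2.4): served by the compactor.
compactor:
  retention_enabled: true

limits_config:
  allow_deletes: true   # assumed per-tenant gate; verify the exact name

# Delete requests then go to the compactor's API, roughly:
#   POST   /loki/api/v1/delete?query={app="foo"}&start=<ts>&end=<ts>
#   DELETE /loki/api/v1/delete?request_id=<id>   # cancel within the grace period
```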
B
And, I guess, that's not exactly... that's less of an ease-of-use thing here, but it really targets Loki for more multi-tenanted, larger environments, and people that are scaling, running Loki from, you know, their team to their organization. And really...
B
Gonna talk about Kafka next.
A
I think Cyril's here... no, I don't think he can make this time, yeah. This was something that we added as, sort of, I mean, the story here follows a similar one to the one that Owen just talked about: the existing tools that pull stuff out of Kafka and send it to Loki were having some troubles with the ordering constraints, so we actually added a Promtail Kafka consumer, and only just recently got around to adding that to the open source, so that got merged in.
A
I don't know, this morning? So that should be part of 2.4. So if you want to consume logs from Kafka using Promtail, this would be a thing you can do now, or soon.
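
A minimal sketch of what that Promtail scrape config might look like; the broker addresses, topic, and labels are placeholders, and the `kafka` block follows the 2.4-era Promtail docs:

```yaml
# Promtail Kafka consumer sketch (assumes Promtail 2.4+).
scrape_configs:
  - job_name: kafka
    kafka:
      brokers: ["kafka-1:9092", "kafka-2:9092"]   # placeholder addresses
      topics: ["app-logs"]                        # placeholder topic
      group_id: promtail
      labels:
        job: kafka-logs
```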
A
We merged like 50 PRs in the last two days. I did add an issue here; nobody should feel obligated, but I wanted a place to post the image when we get that k70 image cut. If anybody's looking to kill some time the rest of the day and wants to try the image in various configs, or against their configs, and help us look for any bugs, it's much appreciated.
B
I know we have Eduardo joining us today.
A
If you're not familiar with Eduardo: he is the man behind Fluent Bit, and probably a lot of, or all of, Fluentd as well; I don't know the history back that far. But Eduardo has been super helpful in the past, trying to make the Loki experience better by working around our ordering constraint, which doesn't exist anymore. So hopefully that makes your life a bit easier on the Fluent side of things.
D
Yeah, the thing is that in our world, people just care about throughput: send the data as fast as possible. But, you know, when you make a connection and that connection fails because of a network issue, and you get another connection, it's going to retry. So this hits first and, yeah, out of order, right. So many people got these problems, and they were solving it with Promtail, but then they came back: hey, we also need more performance, or we need some kind of filtering capabilities before sending out the data.
D
I think that if, no matter how we send the data, you don't get this kind of error, I think that's pretty much it, right. So we are seeing many people moving away from Elastic to Loki, right, just talking with the community users. So we see that Loki adoption is growing a lot in our world.
D
I think the integration that we have in Fluent Bit is quite a bit more basic than the one that you offer, the one you created with the Go extension for Fluent Bit. So, yeah, we should collaborate more in terms of how to get the solution right for all the new, well, the current, end users that are coming. Yeah, and service discovery, yeah, that is something that is not there.
B
Okay, very cool, yeah. And maybe we can have a little collaboration: set up a Grafana Cloud account that we can use to kind of end-to-end test some of this stuff between our development branches.
A
I wonder what a good way might be to get feedback from the community; maybe an issue in either the Loki or the Fluent Bit repo, to sort of solicit feature requests. I'm not sure if you have another process for, you know, soliciting input on what people might want to see to make Fluent Bit work better for them with Loki.
A
Yeah, I mean, largely we've just not done much with it; it's just been sitting there. I know a few people use it. I think, Eduardo, you mentioned there's a couple things that we pulled in from the Go libraries around...
A
Maybe that's not even true about processing labels, but I don't think we really have all of the pipeline stages that Promtail has, so it probably would make sense for us to at least mark it as deprecated in the documentation and indicate... I think what we'd really prefer is to just have the native integration be the only one, because we don't want to have to maintain two, or have people confused. So that might be an interesting way for us to get feedback about any reason someone might be using the plugin.
D
Yeah, also we can come out, or maybe, if your team can do this: somebody who has a lot of expertise with the plugin that you wrote at Grafana, and who has suffered a bit with it, trying to make a doc: hey, these are the missing features that we need, you need some parity on these features, and so we can set up a roadmap on that. Otherwise, people who are using this connector will say: hey, this is deprecated, but I don't have this feature in the native connector, right? And in Fluent...
D
We are not experts on Loki, right; everybody thinks that we are experts on Stackdriver, Loki, S3. Actually, we pretty much try to do our best to connect, understand protocols, and ship the data, but the users are the ones who have more voice on this, right. So if you have somebody in your community who has good experience with this and can share more, that would be great.
A
That would be even more awesome. I know there's still a lot of folks that use Fluent Bit, and we really just generally don't want to have to ask people to, you know, tear out existing infrastructure. Also, I know that one of the things people really like about Fluent Bit or Fluentd is the ability to send logs to two places, like S3 and into a service like Loki, so having the ability to just compress logs into long-term storage is desirable for a lot of folks.
A
So I'm very excited to continue to make that better. Appreciate the work you've done so far, Eduardo, it's awesome. Yeah, thanks for talking!
A
That's about it; I don't know, we didn't have a whole lot on our agenda today. We've been pretty heads-down trying to get this 2.4 release stuff squared away. Roger, I see you joined; we merged the emergency PR yesterday. I think we finally fixed it. I think.
A
Speaking of Promtail: Roger found this bug a bit ago, and, fast-forward to today, myself and Robert Fratto are now experts on this problem, because we chased it down and then discovered that this was the same bug you'd reported, as well as somebody else. We'd never run into it because, in our infrastructure, we add labels like pod template hash or controller hash, things like that.
A
But there is a bug in Promtail where (and this is mostly going to apply to Kubernetes) if the only label that changed on a file was the file name itself, we never...
A
We would not follow the file to the new file name, because we didn't include the file name label in the sort of hash of the target that we were following. So sorry, Roger, that it took us so long to get to the bottom of it.
A
I think the reason that most people probably didn't, or don't, run into this is because we made it sort of... in hindsight, I would say it was a bit of a mistake, but the default scrape configs for Promtail will include all pod labels. So, you know, in hindsight I wish we hadn't done that, and I would say, if you're doing that, to probably stop, because you probably don't care about most of the pod labels that are being auto-added, even though they typically don't increase the cardinality; usually your pod labels are one-to-one with the pods.
A
So it's probably not causing you any cardinality issues; it just sort of wastes space in your index with labels that you don't use for querying. You know, I can't remember ever querying for a pod template hash, or querying for, what else do we get thrown on there, whatever labels your pods have.
A
So if it's part of your query pattern, like if you had labels and use them in your query, then absolutely, that's what they're there for. But having them sort of auto-included was, in retrospect... I wish we hadn't done it that way. I think more people now are using the Agent, and we removed that from the Agent's scrape config, so I think that's why we started seeing it pop up. But appreciate that, Roger.
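
For anyone auditing their own setup: the auto-inclusion comes from a `labelmap` relabel rule in the default Promtail Kubernetes scrape configs. A hedged sketch of keeping only the labels you actually query on instead:

```yaml
# Promtail Kubernetes scrape sketch: instead of mapping every pod label
# into the stream (the old default, via something like
#   - action: labelmap
#     regex: __meta_kubernetes_pod_label_(.+)
# ), keep only the labels you actually query on.
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
      - source_labels: [__meta_kubernetes_pod_label_app]  # assumes an "app" pod label
        target_label: app
```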
B
Yeah, now we can kind of open it up, if anyone has questions, that sort of thing. I see we have one in the chat from Edinardo: I have a similar...
B
Well, so I guess, for everyone else then: we definitely see this when running in, like, a single-binary deployment, or just not having provisioned enough queriers. For instance, when your read path isn't large enough to query as much data as you need to effectively, you generally need to add more compute. And so that's the big one, notably the single binary.
A
Yeah, I mean, this one, it's a five-minute timeout, so it ran for five minutes; rather, I'm almost sure that it just timed out. And that metrics.go line is one of our favorites in Loki, and you can see the throughput on those queries: about 268 megabytes a second.
That's not uncommon for a single query; depending on the query, it could be a bit faster than that, honestly. For logs queries like this, a single instance can do 300, 400, 500 megabytes a second.
A
So when we start seeing queries in our clusters that are 40 gigs a second, it's because we have, you know, 40 or 50 queriers, each with six cores in them, or six provisioned cores, and so the parallelism is the key to success here. And until now, you would have had to set up a query frontend.
A
So that's why we're excited about the scalable deployment stuff: even the single-binary process will have a query frontend now, so adding more of them in parallel does allow it to parallelize the query requests across them, because they use the ring to communicate, and gRPC to communicate. So that is going to enable folks to get much higher throughput, even in that single-binary case. So that's gonna be pretty exciting.
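
Roughly, the knobs behind that, assuming Loki 2.4 defaults (names are from the 2.4-era config docs, so double-check against your version):

```yaml
# Query parallelization sketch (assumes Loki 2.4 defaults).
query_range:
  split_queries_by_interval: 30m        # split long ranges into subqueries
  parallelise_shardable_queries: true   # shard subqueries across queriers

querier:
  max_concurrent: 10                    # per-querier worker parallelism
```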
B
Yeah, exactly. We really want to bridge the gap between: oh, I've tested out Loki on a single binary, and now I want to run it for my team, or, you know, my part of my organization, and suddenly I care about things like high availability and, like, durability guarantees, so things like replication factors and whatnot. And so the idea that we can say, hey...
E
I've got a question for you about the... oh, and you had an issue talking about improving, or, yeah, I think, improving, like, the ability to get logs out of AWS CloudWatch, yeah. Because the solution at the moment is, like, run a crap-ton of Lambdas to do it. So this is here.
B
Yeah, so here, I remember this. I sort of wrote a blog post about this, actually; it was early on in the kind of out-of-order discussions, right. So what am I gonna say... old problem: cardinality versus ordering.
B
Basically, you had to spin up some Lambdas, and they would read CloudWatch logs; they would basically be evented so that, when new logs came in, it would hit this Lambda, and then you could basically write that into Promtail, which could then write it into Loki. And you had to do that basically to make sure that you were either not blowing up your cardinality, or dropping logs due to out-of-order errors, or needing to basically re-timestamp them to guarantee ordering.
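
In that older flow, the Lambda forwarded to a Promtail instance listening for pushes. A minimal sketch of that receiving side, with the port and label as placeholders (the `loki_push_api` target is documented in Promtail):

```yaml
# Promtail as a push receiver sketch: lambda-promtail POSTs log entries
# here, and this Promtail relabels/forwards them on to Loki.
scrape_configs:
  - job_name: push
    loki_push_api:
      server:
        http_listen_port: 3500   # placeholder port
      labels:
        source: lambda
```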
B
So recently we've retooled this a little bit. There's still this lambda-promtail (and we have some docs on the way for this as well), and the idea now is that you can spin up a Lambda in the same way, but you can also just use our Lambdas.
B
We now have, it's called Elastic Container Registry, ECR, in AWS, which will host this version, which we build as kind of part of the Loki process in CI. And then that will be updated; we'll be able to expose release versions with this in the same way. And then there are, like, Terraform scripts, for instance...
B
So we can basically now spin up one Lambda: you can just provision it with Terraform, you can point it to the Docker image that Loki hosts, and we'll handle this for you, and you won't need to run Promtail, for instance, at all. So the idea is that this should make it just simpler to get logs into Loki.
B
And there's a little guide somewhere in here, but that is generally the idea: this should now have a couple fewer edge cases, or a couple more edge cases removed, so it should be easier to use in general. You shouldn't need to provision as many things, and you shouldn't need to worry about packaging the Lambda yourself: now that Lambda supports Docker images, and we're vending Docker images for this tool, you can just point it to the ECR repo. And I can update when I find all the docs for that.
F
Yes, I don't know how often we publish them, like if we only do it for releases, so, yeah, I'll take a look. Okay. Thank you.
E
Yeah, I mean, being able to just run one thing, rather than... I think with the old solution, the other problem was it was one Lambda per CloudWatch log group, so you needed, like, an entire fleet of them.
E
Yeah, I remember trying it; I was like, oh, this is fine, I'll just tell it to record all the CloudWatch logs, and it immediately was telling me: no, you can't do that.
F
Yeah, so obviously there's still a question of, like, how this would scale; it's more or less the same code, but it uses subscription filters instead of, like, the rate-limited API. You need to scale it somehow, right. But the nice thing with the Terraform is you can just pass in a list of all of the CloudWatch log groups, instead of having to, like, copy-paste the deployment multiple times, and then just set the number of replicas you want to run, right. So it should be easier.
F
Unfortunately, Owen, because we haven't released 2.4, and I don't know how the "next" version of the docs on the website gets updated, the new docs aren't on the website yet; they're only in the repo.
B
All right, thank you, everyone, for coming to this week's, or this month's, what are we, every three weeks? Something like that. This edition of the Loki community call. We'd love to see some of y'all at the ObservabilityCON talk next week; we'll be there as well, at least a few of us, to do Q&A. And in the meantime, happy Loki-ing, take care.