From YouTube: Loki Community Meeting 2022-07-07
Description
Join our next Loki community call: https://docs.google.com/document/d/1MNjiHQxwFukm2J4NJRWyRgRIiK7VpokYyATzJ5ce-O8/edit
What was discussed:
* 2.6.0
* Agent vs Promtail
* Docker service discovery vs Docker logging driver; we recommend the former
* Caching adventures
* Simple Scalable Deployment
* Loki tenant options
A
There's the intro to the last call, where I was just talking for a long time because it didn't say it was recording on my screen yet, but everybody else probably saw it. Yep, it's doing it again. Nice. It still doesn't say it on mine. So every YouTube video, every Loki community call, is going to start exactly this way forever. Welcome, everybody, to July 2022. Seems like just the other day it was June. Kind of a short agenda today. We're going to talk about 2.6, which we talked about last time and said would be three weeks out, so we were off a little. It should be out; hopefully we're shooting for tomorrow.
A
The good news is that we're delayed a little because we've been increasing some of the automation around how the documentation works, so it should make this easier in the future.
A
What we're doing is removing me from the loop, which should help, because it's been hard for me to focus. We've got more people, and we'll get out of the way of releases. We're really targeting, and there's an RFC about this, a minor release every quarter, so every three months, and then we really want to get to a more regular cadence for patch fixes too. That's going to take just a bit more work, but it's coming.
A
It's improving. Yeah, go ahead.
B
Yeah, concerning 2.5.0: I was really expecting the release not for Loki itself, but for Promtail features, because I was awaiting the Docker service discovery. It was for my previous company. So is there a plan, maybe, to split the different applications so the releases can be separated?
A
Good question. Let me talk about this, because it has come up a few times recently: Promtail versus the Grafana Agent. The Grafana Agent vendors the Promtail code, so it's basically a wrapper around Promtail, and it actually has a much more frequent release cadence. Long story short: when the Grafana Agent was first created, it didn't seem like the right time to condense these into one project. Nearly a couple of years later now, a year and a half maybe, I'm not sure exactly how long the agent has been around, we've been talking a lot more about what the difference between Promtail and the agent really is at this point, and whether we should move the code bases around or change things. Most likely we would stop publishing Promtail as its own entity and publish just the agent. By and large, the agent is better than Promtail in every way: it has everything Promtail has except one feature I'll talk about, and it does one thing better than Promtail can, which is sending from one Promtail to multiple Loki servers.
A
The current implementation for that in Promtail is really naive: it does it in a synchronous fashion, so if one of the Loki servers is broken, it impacts the delivery to the others. The agent doesn't have this problem. Under the hood it effectively bootstraps Promtail multiple times, which is how it's able to solve that, because it wraps Promtail as a library.
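For context, the multi-Loki fan-out being discussed is just Promtail's `clients` list; a minimal sketch (the hostnames are placeholders):

```yaml
# promtail.yaml (fragment): one Promtail shipping to two Lokis.
# As noted above, delivery to these endpoints is coupled in Promtail,
# so one broken Loki can hold up sends to the other.
clients:
  - url: http://loki-primary:3100/loki/api/v1/push
  - url: http://loki-secondary:3100/loki/api/v1/push
```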
A
So the agent will bootstrap essentially separate processes entirely, with separate positions files, and is then able to send to multiple Lokis in a way that they don't interfere with each other. Plus, the agent is getting things in the near future, and I hope I'm not letting out any huge secrets here, like receiving OTLP, the OpenTelemetry protocol, natively. It'll act a little bit like an OTLP collector then.
A
It's kind of to the point where, I mean, we have Promtail in the Loki repo because we built it and maintained it, but now we've built out another team at Grafana that's doing lots of agent work. So the very long answer to your question is: it might just be the case that the agent, which right now vendors the Promtail code, just does more frequent releases, and we continue to push people in that direction.
B
Okay. I was wondering a lot of things about the agent too. I knew that for Tempo it was useful, because it could extract metrics, and it was really great to use the agent. But I didn't know for Loki, and I still don't know for metrics, if it's really useful to use it. So yeah, you answered part of my questions.
A
Basically, the only thing the agent doesn't have that Promtail does is the UI we put in Promtail. It looks like the Prometheus UI, because I believe we largely copied and pasted the same code to make it look the same. It shows you the targets you're scraping, what labels they have, and why they wouldn't be scraped. For some environments, like Kubernetes environments, that can be really handy if you're trying to debug relabel configs. The agent doesn't do that.
A
That's, as far as I know, the only feature that's missing. So using the agent gives you the advantage of the multi-Loki thing I described, plus, if you have metrics to scrape, or traces, or other things, it has those capabilities as well. It's a superset of Promtail, for the most part, and I recommend it 99% of the time. It's at the point where I'm not sure why I would tell someone to use Promtail. The binary might be smaller, but I don't know if anybody really cares.
A
If you have a link to that handy, share it with us or open an issue for it, because we should try to clean that up. You're right: the Docker service discovery supersedes the Docker logging driver in almost all cases. The only case I know of where people had trouble is that we have to poll the API, the Docker interface, to know what containers are running.
A
If you have very, very short-lived containers, like less than five seconds, then you can miss them with the Docker service discovery, whereas the Docker logging driver would not have that problem. But the Docker logging driver has other challenges, mostly around the shutdown mechanism, and that has caused a lot of grief for folks. This happens a lot with Docker Compose: if you do a docker-compose down and it stops Loki before it stops the logging driver, the driver will try to send what it has and doesn't want to give up very easily, and then it locks up Docker.
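For reference, the Docker service discovery being recommended lives in Promtail's scrape config; a rough sketch (the socket path and relabeling are illustrative; check the Promtail docs for the exact field names):

```yaml
# promtail.yaml (fragment): discover running containers by polling
# the Docker socket, instead of using the Docker logging driver.
scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s   # very short-lived containers can still be missed
    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'
```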
B
Because I had multiple Swarm clusters, and I had one with all of the administration tools, and it had Loki. When there was any problem on that one, all the automation on the other clusters was failing because of the logging. So yeah.
A
So, this is great stuff. We've already doubled the size of our agenda, so I appreciate that. Circling back around, I think we definitely need to address this.
A
This has come up recently too, and that's usually the indication that it's probably past due. We need to clean up the communication on agent versus Promtail and make the plan for that clear. There's nothing that will likely force it. The only thing is that you can't go one-to-one from Promtail to the agent by copying the config: their configs are slightly different, because the agent has the Promtail configs nested inside another layer of YAML, and then maybe a few other config changes.
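The extra layer of YAML mentioned here looks roughly like this in the Agent's static config: the familiar Promtail `clients` and `scrape_configs` blocks sit under `logs.configs` (the file paths and URLs are placeholders):

```yaml
# grafana-agent.yaml (fragment): a Promtail-style config nested
# under the agent's logs section.
logs:
  configs:
    - name: default
      positions:
        filename: /tmp/positions.yaml
      clients:
        - url: http://loki:3100/loki/api/v1/push
      scrape_configs:
        - job_name: varlogs
          static_configs:
            - targets: [localhost]
              labels:
                job: varlogs
                __path__: /var/log/*.log
```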
A
You want to talk about caching?

C
Yeah, sure. We've been looking at some different caching solutions for the last couple of months. Basically, we've got some folks who aren't super happy running memcached for their caching layer, so we've been exploring adding an in-process distributed cache to Loki's units of deployability, and we should have something experimental out.
C
I want to say in the next month or so, for people to play with. We're kind of at the point where we want to start integrating it with a few of our environments, kicking the tires, seeing what ways it breaks on us, and going from there. But we're really excited, and we think it should make deploying things a lot easier. We're interested in knowing whether other people are interested, and in feedback or anything else.
A
Yeah, this is just continuing on the path we started, which was apparently about a year ago now: the, I guess, somewhat unfortunately named SSD mode, the simple scalable deployment. We've learned a couple of things since then. Several people have seen the word "simple" in the name and said, nope, that can't be for me.
A
It's like mini-services versus microservices: reducing the complexity of the operation to fewer moving parts. We will very likely always maintain the ability to run in microservices, just because of the nature of how it's built, but we're adding these new targets that are read- and write-specific. One of the things now is, if you're really trying to run Loki in that terabyte-plus-per-day range, you probably want a better caching solution than what's built into SSD mode now, which is some per-process in-memory caching.
A
Fifa
caches
and
have
something
that's
more
of
a
shared
cache,
so
that
the
basically
you
can
take
advantage
of
not
having
to
download
the
same
chunks
multiple
times
or
the
results.
Cache
is
probably
the
more
interesting
one,
because
the
results
cache
will
take
a
computed
value
for
metrics
queries
and
also
we
have
a
it's
kind
of
a
clever
cache
for
the
log
queries
we'll
we'll
cache
the
non-existence
of
logs.
So
if
you
run
a
query
like
a
filter
expression-
and
you
don't
find
any
results,
we
cache
that
that
was
the
case.
A
So if you run that same query again, we don't have to re-process the same data looking for something that's not there. We're always improving the Loki caching strategy, but right now, if you wanted to run SSD mode at a reasonable scale, you'd probably still want to run Redis or memcached. We want to make it so that you don't have to do that: the goal is to just be able to run Loki and have it do all the stuff you need. So, yeah, go ahead.
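At that scale, pointing the results cache at memcached looks roughly like this (a sketch against the Loki 2.x config layout; the memcached address is a placeholder):

```yaml
# loki.yaml (fragment): cache query results in memcached instead of
# relying only on the per-process in-memory caches.
query_range:
  cache_results: true
  results_cache:
    cache:
      memcached_client:
        host: memcached.loki.svc.cluster.local
        service: memcached-client
```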
B
A question, maybe a stupid question: I'm deploying Loki from the Helm charts, and we were based on loki-distributed. What's the difference between the simple scalable deployment and distributed?
A
So what the scalable deployment is, is loki-distributed. Loki's microservices are the distributor and ingester on the write path; then we have queriers, and then we also added this index gateway, and then the query frontend and query scheduler, and there's a compactor.
A
The query scheduler, queriers, and index gateway are all in the read process. The idea is that now you just deploy two processes: one for writes, which you scale according to your write volume just by adding more write nodes, and one for reads, which you scale by adding more read nodes. In Kubernetes, with systems like Helm, it's a little easier to manage microservices, so this is also intended for people trying to run on VMs, where trying to run all these separate processes would be really frustrating. But even within Kubernetes...
A
I
think
kind
of
what
our
experience
is
showing
us
is
that
the
complexity
of
the
microservices
having
these
more
moving
parts
doesn't
necessarily
add
enough
benefit
to
justify
like
we
can
reduce
the
complexity
by
having
these
two
modes
of
kind
of
read
and
write,
and
it's
a
little
bit
easier
to
understand,
scaling
you
just
kind
of
add,
more
read,
nodes,
add
more
write,
nodes
and
then
internally
inside
loki.
Now
we
do
things
like
control
the
number
of
things
that
run
so
that
we
don't
have
too
many
compactors
running.
A
For
example,
the
compactor
runs
inside
the
read
target.
It'll
only
run
on
one
of
the
processes,
not
all
of
them
same
with
the
query,
scheduler
and
so
out
of
the
box.
You
get
this,
you
know
loki,
so
you
are
on
the
hook
for
routing
traffic.
You
have
to
send
all
of
the
traffic
except
the
push
path
to
the
read
nodes,
and
then
the
push
path
goes
to
the
right
nodes
and
then
loki.
A
Then Loki, using memberlist, will wire itself up to handle all the internal communication. The idea is to take some of that complexity and hide it in a way that 99% or even 100% of people wouldn't be affected by. There's some argument on our end about whether we will even internally convert our microservices clusters over to this SSD model. The argument for is that, if we do, we'll have tons of experience doing it, making it easier for us to help other people do it.
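The setup described above can be sketched as one shared config, with the role chosen per process via the `-target` flag and memberlist wiring the processes together (the hostname is a placeholder):

```yaml
# loki.yaml (fragment) shared by every process. The role is selected
# at startup, e.g.:
#   loki -config.file=loki.yaml -target=write
#   loki -config.file=loki.yaml -target=read
memberlist:
  join_members:
    - loki-memberlist:7946   # an address that resolves to all nodes
```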
A
The argument against is that, when you get to sufficiently large clusters, being able to add and remove components individually can benefit your TCO. For example, you don't necessarily need the number of distributors to match the ingesters one to one; we often run about half as many distributors as we have ingesters, whereas the write mode will always put one distributor on a node with an ingester. It becomes a bit complicated, because yes, you have many more of them, but they also use fewer resources each.
A
The load is distributed. Like I said, we started down this path about a year ago, and we're now in the process of setting up our internal infrastructure in this SSD mode in our non-prod environments, fleshing out things like the caching and other support, and trying to see how far we can push this mini-services approach.
A
What does that mean for Helm and the distributed chart? Not much right now; it's still going to be the way to go. Actually, I should let Trevor talk about Helm, because he's been doing a ton of work on an SSD Helm chart.
A
Ultimately, what we would need is a migration path of sorts, which with memberlist is pretty easy: you could go in, spin up a bunch of write nodes, and then turn down your queriers and schedulers. Actually, I don't think the loki-distributed chart has a scheduler in it; I think it just uses frontends.
A
A quick explanation of that: the frontend v1 includes the scheduler, and the frontend v2 separates it into its own component. The way the queuing works in frontend v1, as you add more frontends you add more queues, which then allows tenants to send more, so it doesn't really do the scheduling fairness the way we want. So frontend v2 broke the scheduler out into a separate component.
D
On what was mentioned about the Helm charts and such: there are a lot of Loki Helm charts right now, and we just don't have the bandwidth as a Loki team to support them all, so we definitely look to the community to help us out there. The one exception is that the Helm chart we are running internally is the SSD one, and so that is the one that we as a team develop the most actively.
D
For example, this week I saw an issue come in for the distributed chart to add the scheduler, like Ed was just mentioning, because when it was first written it just used the frontend. I 100% think everyone who's running loki-distributed should run it with the scheduler, but I don't personally have the bandwidth to go and add that to the distributed chart. So I think that's some of the pain points we're running into with all these different charts.
D
So we definitely want to get to a smaller surface area in terms of the different ways to deploy Loki, so that we can help the community more.
A
Yeah, we've just found, a couple of times when we've talked to people, they were like: oh, I saw that, but what we wanted to do we didn't think was simple. And I'm like, it's all right, we'll figure it out eventually. All right, so before we started the recording for this, Herbert asked a question about tenant design and system design for Loki, and I thought that was an interesting question worth discussing here.
B
In the notes from last month's meeting: if we have a customer that has software on, say, two clusters, and each cluster will then have a different organization ID, can one user access different organizations?
A
Yeah, so let me walk through the concepts here and maybe give opinions on what I think is a reasonable recommendation, or at least what the trade-offs are, and you can pick what makes sense for your own deployments. In Loki now, or actually, sorry, in 2.6, which comes out tomorrow, there is a feature called cross-tenant querying that is in open-source Loki.
A
When auth_enabled is true, Loki requires all of the HTTP requests made to it to have a header called X-Scope-OrgID. There's no real security handled by Loki, in the sense that we don't validate the header or do any sort of credential validation, so you need a proxy of some sort in front of Loki that can do the proper auth and assign that header. Everything at that point is what we call tenants inside of Loki.
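A minimal sketch of that setup (the endpoint and tenant name are illustrative):

```yaml
# loki.yaml (fragment): turn multi-tenancy on.
auth_enabled: true
# Every request must now carry the tenant header, e.g.:
#   curl -H 'X-Scope-OrgID: tenant-a' http://loki:3100/loki/api/v1/labels
# Loki does not validate credentials itself; a proxy in front is
# expected to authenticate the caller and set this header.
```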
A
That terminology is a bit overloaded between orgs and tenants, but it's this idea of a separation. What's nice about tenants in Loki is that they can then be given separate limits, which matters when you're running a multi-tenant Loki, which is what we do.
A
You
know
a
sas
service
and
have
thousands
of
tenants
in
some
cells
cell
is
a
terminology
that
we
use
to
disambiguate
from
cluster,
because
we
have
multiple
loki's
running
inside
of
one
region
inside
of
one
kubernetes
cluster
and
it
got
confusing
so
but
anyway,
the
the
multi-tenancy
allows
you
to
set
limits.
A
Limits
have
this
nice
capability
that
you
can
change
them
at
runtime,
where
there's
an
overrides
file,
so
they
you
can,
you
know,
set
the
amount
of
data
that
a
tenant
can
send
and
put
limits
on,
and
the
other
thing
I
should
say
too
is
certain:
limits
are
applied
in
pretty
harsh
ways
like
stream
limits,
which,
if
you
hit
a
stream
limit,
will
cause
traffic
to
drop
or,
if
you
hit
a
rate
limit
it
so
having
isolation
between
tenants
is
nice,
because
then
you
aren't
eliminating
everybody.
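The runtime overrides file looks roughly like this (tenant names and values are illustrative; the full set of per-tenant knobs is in the limits_config reference):

```yaml
# overrides file (fragment), re-read at runtime without a restart.
overrides:
  tenant-a:
    ingestion_rate_mb: 10
    max_global_streams_per_user: 100000
  tenant-b:
    ingestion_rate_mb: 4
```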
A
So if somebody makes a misconfiguration or a mistake, or just all of a sudden starts sending a lot of logs, you can limit one tenant separately from the others. Limits are largely designed around protecting a distributed system's overall health: you usually don't use them to control what people do; you use them to make sure they don't do something you weren't expecting and tip the cluster over. So we, for the most part, try to run our limits as transparently as possible.
A
We're always working to improve that, but we have them there because, as in a lot of big distributed systems, the failure modes can often be cascading: if you overwhelm something and it fails, then it overwhelms everything else. Limits are your best tool for that. So that's a tenant. Now, to the question you asked about whether to run one Loki per region or one centralized: typically we think of, and tend to run, Loki, and Mimir would be the same,
A
Tempo
would
be
the
same
as
sort
of
centralized
aggregation
points
for
lots
of
stuff
the
within
loki
right
now.
There
is
no
cross
cluster
querying
capability,
so
if
you
had
them
in
multiple
regions,
you
wouldn't
be
able
to
query
them
in
one
query:
in
our
enterprise
products
for
mimir
and
gel
or
sorry
tempo,
we
have
support
for
cross.
Cluster
loki
is
working
on
that
and
it
will
be
an
enterprise
feature.
A
So,
in
order
to
query
across
clusters,
you'd
have
to
pay
for
an
enterprise
license
of
loki,
so
that
does
sort
of
lend
you
know
would
generally
push
towards
the
model
of
so
now,
if
you
didn't
want
to
pay
for
an
enterprise
feature,
you
could
do
local
and
central
aggregation
run
a
small
loki
in
a
region
and
then
also
send
the
stuff
to
a
centralized
place.
A
The
like,
I
said,
the
tenant
like
the
cross
tenant
querying
is
an
oss
and
the
way
that
works
is
when
you
configure
a
grafana
data
source
you
would
or,
however,
your
proxying
method
sets
that
header.
You
can
specify
multiple
tenants
in
a
header,
so
I
think
the
typical
way
that
might
be
applied.
Is
you
configure
the
grafana
datasource,
you
specify
the
header
there's
a
syntax,
I
believe
it's
pipe
separated.
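In Grafana provisioning terms, setting that header on a Loki data source looks roughly like this (the URL and tenant IDs are placeholders):

```yaml
# grafana datasource provisioning (fragment): query two tenants at
# once by pipe-separating the IDs in X-Scope-OrgID.
apiVersion: 1
datasources:
  - name: Loki-multi-tenant
    type: loki
    url: http://loki-gateway:3100
    jsonData:
      httpHeaderName1: X-Scope-OrgID
    secureJsonData:
      httpHeaderValue1: tenant-a|tenant-b
```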
A
I
see
it
not
from
jordan,
so
I
think
that
yep,
that's
right.
Okay
and
I
think
there's
actually
an
asterisk.
That's
supported.
No,
the
asterisk
is
only
that's
a
gel
concept.
Isn't
it
yeah.
A
Yeah, because Loki itself doesn't actually know all of the tenants it has; GEL, which is the enterprise offering, has a sort of database of who all the tenants are. But anyway, you would send a pipe-separated list of tenant IDs, and that query would then be executed across all those tenant IDs. So if you had a centralized Loki with multiple tenants, you can still do operations, like audit or security, that might want to search across all of them.
A
So, in terms of what I recommend: that's a good question. Grafana built up our internal operational Loki, the one we use to run all our stuff, as a single-tenant Loki instance, honestly because there was no cross-tenant querying in Loki at the time, so it made the most sense. If I were to do it over again today, I would probably set up separate tenants across some sort of logical separation.
A
For
us
it
would
probably
be
well
it's
a
good
question,
but
it
might
be
like
the
different
products
that
we
have.
A
I
would
say
that
it
depends
a
little
on
the
size
of
your
org,
so
as
our
org
grows
or
other
orgs
grow,
and
you
have
like
more
disconnect
between
the
people
that
sort
of
build
and
use
the
services
and
like
the
controls
over
their
behavior,
like
you
need
better
protections
to
make
sure
that
they
don't
accidentally
or
you
know,
sort
of
negligently
cause
sort
of
you
know
a
lot
of,
like
you
know,
impact
on
a
central
operating
service,
so
the
separate
tenant
models
like
make
that
a
bit
easier.
A
If
you
have
you
know
either
your
org
is
small
enough
or
you
have
like
a
central
sort
of
group
that
controls
the
deployment
of
the
monitoring
tools.
Then
you
know,
maybe
you
have
your
own
ability
to
go
change
things
quickly.
If
somebody,
you
know,
does
something
that
I
think
maybe
those
become
the
just
the
considerations,
but
I
would
probably
say
that,
like
the
when
it
comes
to
tenants,
you
know
tens
or
hundreds,
but
like
not
tens
of
thousands
right.
A
So
if
you
try
to
set
up
a
loki
style
that
has
many
many
many
like
like
more
than
5
000
or
10
000
tenants,
you're
gonna
be
in
territory
that
we're
not
in,
and
I
don't
know
what
your
service
like,
what
your
experience
will
be
like.
I
can
tell
you
that
it
works
fine
with
thousands
of
tenants.
A
You
know
three
four
or
five
thousand,
but
you
know
there's
some
number
there
less
than
probably
ten
thousand
that's
probably,
and
unless
you
I
don't
know
like
so
I'm
just
don't
build
a
deployment
model
where
every
user
in
your
company
gets
a
tenant
because,
like
that's,
probably
gonna,
not
work
as
well
as
you'd
like,
but
having
some
separation
can
be
nice
for
kind
of
controlling,
and
then
you
have
the
cross
tenant
capability
to
query
across
that
or
you
can
run
them
all
in
one
big
tenant
and
the
prom
tail
now
has
the
ability
to
rate
limit
on
the
client
side
like
you
can
put
a
little
bit
more
in
place
to
protect.
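The client-side rate limiting mentioned here is Promtail's `limit` pipeline stage; a rough sketch (values and paths are illustrative):

```yaml
# promtail.yaml (fragment): cap what this client ships to Loki.
scrape_configs:
  - job_name: app
    static_configs:
      - targets: [localhost]
        labels:
          job: app
          __path__: /var/log/app/*.log
    pipeline_stages:
      - limit:
          rate: 100     # lines per second
          burst: 200
          drop: true    # drop over-limit lines rather than block
```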
E
Per-tenant isolation is useful, but also, thinking about the operator workflow, using a tenant ID as a kind of label selection, if you have similar log traffic or labels between your internal services or customers or whatever, there's definitely a lot of value there too, I think. That multi-tenancy model allows interesting possibilities, depending on how you and your organization are using Loki to make that separation, and now supporting the cross-tenant querying allows that to come full circle.
B
I
really
like
that
it
opened
something
in
my
mind
about
it's
not
about
just
privacy.
It's
also
about
securing
your
cluster.
So,
even
if
all
of
your
data
is
for
the
same
people,
it's
interesting
to
split
them
into
tenants.
A
Yeah, so within Loki you can do that label-based access control in OSS Loki as well; the wrappers we add to it in the enterprise versions are to make it easier to use. I mean, we're always trying to figure out the right way to monetize Loki effectively; we're interested in growing and staying in business and building it. Our primary avenue for monetizing Loki is the service, the SaaS offering.
A
So
anybody
that's
watching
this
that
wants
to
support
loki
then
go
use
grifon
cloud
and
then
the
interesting
part
is
like
we
want
the
you
know.
I
personally
want
the
open
source
offering
to
be
fully
capable
like
to
do
everything
need
to
so
then
it's
a
question
of,
like
certain
things
really
do
fall
into
the
world
of
like
features
that
really
bigger
enterprises
would
use
the
most
but
yeah.
A
So
the
the
I
don't
know
off
topic,
there's
not
that
many
things
that
we
really
do
separate
between,
and
I
think
at
this
point
almost
everything
that
we
do.
You
could
recreate
yourself.
If
you
were,
you
know
it's
like
an
ease
of
use
to
pay
us
for
the
code
that
does
it,
but
but
yeah
label
based
access
control
is
another
thing
you
can
do
to
sort
of.
We
talked
a
little
bit
about.
You
know
how
you
would
secure
the
content
inside
of
loki
the
label
based
access
control.
A
Do you have an editor for this that can edit this out? If the answer to that is no, we do not have an editor, then I'm not sure; I'll have to go...
A
But yeah, so, high level: as long as you're at less than, you know, thousands of tenants, Loki operationally doesn't really care a lot about multi-tenancy; it doesn't have a huge impact. It does use a little bit more memory. The current index, and I say "current" because we're building its replacement and testing it right now, has a sort of useful stream threshold: we rotate the index file every 24 hours, and when you get above, say, 200,000 to 300,000 streams in a 24-hour period...
A
Then
we
start
to
see
some
latency
on
queries
as
the
index
grows.
Some
of
our
operations
do
basically
row
scans
over
the
nosql
database
so
like
as
you
get
bigger
it,
it
slows
down
a
little
bit
so
less
than
200
000
and
it's
not
noticeable
that's
per
tenant
that
we
create
the
index.
So
if
you
did
have
like
a
lot
of
really
large
loki
deployments,
like
or
sort
of,
you
were
monitoring
like
very,
very
big
kubernetes
clusters
or
something
that
had
thousands
of
streams,
I
mean
we,
you
know
ten.
A
We
have
tens
of
thousands
of
pod
labels
that
come
rolling
through
loki
and
we
do
all
that
in
a
single
tenant
still.
But
if
you
did
find
yourself
wanting
to
have
more
than
200
000
streams
in
a
day,
that
would
be
a
reason
to
use
multiple
tenants.
That's
all
that's
a
little
bit
into
the
the
you
know
less
likely
case,
and
I
will
say
that
the
index
work
that
we're
doing
should
make
that
less
of
a
concern.
A
Having
more
streams,
wouldn't
be
a
performance
issue
as
much
as
it
would
be
like
an
index
size
issue,
but
the
index
is
going
to
get
smaller,
so
we
should
be
able
to
go
up
on
that
in
the
future.
But
that's
another
thing
I
can.
B
Okay, so that's one thing. And the other thing, if we have a central Loki, is how do we test? Like, do we have a test Loki that we can break everything on? I guess the thing is, we should not use Promtail; we should use the Grafana Agent and send to multiple Lokis, so if one breaks it's not a problem.
A
Definitely, yes, and that's the part about using the agent there: to make sure you don't interfere with the others. You can also do what we sometimes do, which is run multiple Promtail DaemonSets, or multiple agents. You can make those trade-offs, but yeah, you can just use the agent, send to multiple places, and have yourself a nice dev Loki versus prod Loki.
A
It's not on the immediate roadmap, but there's an idea around moving data from one Loki to another, or a sort of remote write between them. We talk about that stuff; there are some interesting use cases around it, beyond the cross-tenant query.
A
In
case,
you
needed
to
see
it
from
somewhere
else
like
so
I
I
suspect,
someday
that
will
be
available.
But
it's
not
in
our
immediate.
A
Yeah
yeah
definitely
well
at
the
risk
of
just
me
continuing
to
talk
forever.
I
don't
know
if
anybody
has
any
more
questions
or
topics,
but.
A
I'm
super
happy
you're
here
anybody,
that's
watching
this
on
youtube
show
up
next
month
and
you
can
ask
all
the
questions
that
you
want.
A
Next week, so I'm going to enjoy it. All right, well, I think we're going to wrap this one up then. So thanks for joining, everybody, and thanks for, you know... I know I definitely did not pronounce your name right. Her... I'm...