From YouTube: Grafana Loki Community Call 2022-06-02
A
D
A
Yup, I'm 23 years old now, all right, at least in maturity. Maybe. How would you do it, or something? Yeah, so we're just kind of filling out the agenda today. We're gonna talk about what's coming up in the next release, which I actually think we're going to do in three weeks. You've heard me say timelines before that we've missed, but we're in a better spot: we actually have a release candidate that we're starting to test now, and things are in pretty good shape. We'll also talk about some updates to the operator.
A
Perry is able to make it today, which, thanks, I know the timing is probably not ideal for you, and I want to talk about that at the end too. Maybe get some input on what another time would be to cover more time zones. I think we should alternate or run multiple calls or something, because this one is a little bit late for the folks that are not in the North America time zones. We'll also talk about GrafanaCON, and Q&A if we have any. So, Loki 2.6.
A
I don't think we're going to do a 2.5.1, mostly because I think it would be better just to do a 2.6. There are a couple of things we could backport, but I think we'll just do a 2.6. Anyway, like I said, it's actually going to be the k100 release, which is our internal numbering scheme; that is currently the release candidate for

A
2.6. If anybody here has a chance, can you go find the Docker image for that, and let's link to it here. So if anybody wants to help us sort of beta test 2.6, feel free to run that k100 image. It's made it as far as our ops environment, right? I'm not sending people after something that we've... okay, okay, good. So it should be fine. And big-feature-wise: well, Christian, I'm gonna put you on the spot. You did the work for query splitting.
A
Do you want to talk about what instant query splitting is?
F
So yeah, it basically does what it says. If you have instant queries with a long time range (the range being the square-brackets range): so if you run an instant query that goes over a long range, we can split up the aggregation into multiple sub-requests that are individually executed in parallel and merged together, then returned as a result. And yeah, it works for quite a lot of range aggregations.

F
There are a few limitations: when you have a parser stage in the query, sometimes we can't split it, due to the amount of series it would generate in the subqueries. But most of the queries can be parallelized in that way, and we hope to get some improvements there.
A
Yeah, so the important bit to make sure you can split your query is to make sure you have aggregations if you're using parsers like json or logfmt, because they can explode the cardinality of the stuff that they process. If you don't wrap that with a sum, we don't split those queries, because it causes some sort of, well, highly unbounded growth of memory. So wrap with sum or max or avg or min or some kind of aggregator.

A
When using logfmt or json, there's also the json parser with explicit labels, which doesn't have this requirement. So like, if you do json with, you know, foo equals...
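To sketch what that advice looks like in LogQL (the selector and label names here are made up for illustration, not taken from the call):

```logql
# Splittable instant query: the parser output is wrapped in an aggregation.
sum(count_over_time({app="nginx"} | json [7d]))

# Not split: an unwrapped parser stage can explode series cardinality.
count_over_time({app="nginx"} | json [7d])

# json with explicit label extraction keeps cardinality bounded:
sum by (status) (count_over_time({app="nginx"} | json status="response.status" [7d]))
```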
F
A
You need the frontend, or frontend plus scheduler. The frontend is what actually does the splitting and aggregation. Using a query scheduler is generally recommended, because it is kind of an improvement over how the frontend worked, but both will do the same thing ultimately. Go ahead, Christian.
F
Regarding enabling it: you need to set the split_queries_by_interval setting, which also controls the range queries that are split by range. It's actually the same setting.
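As a reference point, a minimal sketch of where that setting lives, assuming a 2.6-era config where it sits under limits_config; the 30m value is just an example:

```yaml
limits_config:
  # One setting controls both range-query splitting and,
  # from 2.6, instant-query splitting.
  split_queries_by_interval: 30m
```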
B
G
Anyway, I would say we can still, let's say, stay with the query-frontend-and-querier model; we don't explicitly need the scheduler for this. Of course, not having the scheduler means your query frontend is

F
H
G
not that easily horizontally scalable, but for most use cases at least two replicas are good enough. So why bother having the scheduler?
A
Yep, everything you said is absolutely correct, and yeah, so you're right,

D
A
too. Most people don't need a scheduler. If you're running the single binary or the SSD modes, they enable the frontend and scheduler for you as well, so you get that included. If you're in microservices mode, you just need the frontend, or frontend plus scheduler, but the scheduler is not a requirement.
A
It's a nice feature for when you try to horizontally scale your frontends. If you are running a big cluster and you want more frontends: the v1 frontend implementation kept the per-tenant queue in each one, and so as you horizontally scale them, you multiply the number of queues, which lets more tenants send more queries. So the scheduler broke that queue out into a separate component, so that that problem doesn't happen.
B
C
A
C
What this does is it gives you either real-time query filtering, or real-time query filtering plus actually deleting stuff out of your object store, to go back and retroactively delete logs. The key use case we were fitting is something like: we've leaked personally identifiable information or something into the logs, and you can go back and actually delete that stuff. It happens near real time, so you'll see the stuff disappear from your metric and log queries in a matter of minutes, and then on the overall compaction cycle,

C
when we compact the indices, we'll go and also rewrite chunks and just remove any logs, if you have that set. With that bug fix included, we've done some testing: we can see it filter things as we expect, and we can see it delete pretty large volumes of logs, I would say two-plus million logs over the course of a day, and do pretty well.
C
A
I think what you're describing there is: the compaction loop runs at a specific time, and a delete request gets processed in the same loop that does compaction. So if it has to go through a huge amount of data to mark stuff for deletes, it could cause that to run longer than the normal compaction cycle. Is that right?
C
Right, yeah. Well, it could cause that for the table, or only the set of tables, that are affected by the deletes. The tables are processed asynchronously, or in parallel, so really only the tables that would be affected by that delete. And then what might happen in that case is you might stack up compactions, where eventually the backlog will clear.
A
Yeah, yep. So I mean, there's a lot of future opportunity for us: these don't all have to run in the same process, it's just that we're adding to what's there. So we're gonna play around with it a little bit and see what we need to do to make sure we can hit reasonable requests for deletes.
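For anyone wanting to try it, a rough sketch of issuing a delete request, assuming the compactor is running with retention and deletes enabled; the tenant, selector, and timestamps are illustrative, so check the deletion docs for the exact parameters:

```
POST /loki/api/v1/delete?query={app="payments"}&start=1654041600&end=1654128000
X-Scope-OrgID: tenant1
```

As described above, matching lines are filtered from query results near real time, and the affected chunks are rewritten on a later compaction cycle.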
E
I can... oh, Jordan, happy to. Yeah, so Loki, if you're running in multi-tenant mode, with 2.6 will support a new boolean configuration value for queriers to enable multi-tenant query support. What that will allow you to do is specify more than one tenant ID, concatenated together with a vertical line character, in the X-Scope-OrgID HTTP header.
E
When you make requests to Loki's query API endpoints, you'll receive results back for the multiple tenants that you specify in that header. We support label filtering on tenant IDs with that enabled, such that you can see results specifically for one or multiple tenants, but we won't support that tenant ID filtering in stages yet, so that's important to keep in mind. When you enable this configuration, we append a new label value that carries the tenant ID, which supports that filtering. And yeah, the implementation is in the multi-tenant querier.
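Piecing the description together, enabling it might look like this; the exact config key and label name should be checked against the 2.6 docs:

```yaml
querier:
  # Allows pipe-separated tenant IDs in the X-Scope-OrgID header
  multi_tenant_queries_enabled: true
```

A request would then send a header like `X-Scope-OrgID: tenant-a|tenant-b`, and an individual tenant can be selected with the injected label in a matcher, e.g. `{app="nginx", __tenant_id__="tenant-a"}` (label matchers only, not filter stages).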
E
Sure
so
I
have
some
documentation.
I
can
link
I'll
put
this
here
in
the
doc.
E
A
Oh okay, so right, you can only use the double-underscore tenant label within the label matcher, right? Okay, okay! Was there a doc that you pulled this from that we can link here, for anybody that's...
D
G
G
F
G
H
E
H
Okay, so, I mean, I'm trying to understand: I can pass any number of tenants there, and I can filter via this extra label, __tenant_id__ or something, in the query itself. Is that right?
E
H
E
It would behave the same as though we were using an individual querier instance for a single tenant and the query didn't return results. But you would still receive the valid results merged together, as well as, I guess, the error for the tenant from which the logs weren't successfully queried.
B
E
It uses all of them. I don't think we... because I think we just simply look at splitting and merging them at the request/response level, from the individual querier flow that would happen for a single tenant.
B
A
Depending on how you do auth, and how you do that in front of Loki: if you, for example, have direct access between Grafana and Loki, you can set headers in a data source config and make a data source that specifies multiple org IDs.
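For illustration, a provisioned Grafana data source can inject that header; this sketch uses Grafana's standard custom-header provisioning fields, with made-up names and tenants:

```yaml
apiVersion: 1
datasources:
  - name: Loki (all business units)
    type: loki
    url: http://loki-query-frontend:3100
    jsonData:
      # Header name is configured in jsonData...
      httpHeaderName1: X-Scope-OrgID
    secureJsonData:
      # ...and its value in secureJsonData.
      httpHeaderValue1: tenant-a|tenant-b|tenant-c
```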
A
If
you
have
like,
we
run
a
custom
auth
layer,
it
would
be
responsible
for
saying
taking
whatever
credentials
it
takes
and
turning
that
into
a
request
that
has
multiple
tenant
ids
in
the
x
scope,
org
id
header,
so
there's
some
sort
of
user
or
operator
requirement
to
kind
of
make
this
work,
but
ultimately
like
whoever
sets
the
xscope
org
id
can
now
pipe
to
limit
multiple
of
them
to
get
results
from
multiple
tenants.
A
Go ahead, Perry.
G
Do we have any... I mean, since this is not implemented on the chunk level, the chunks stay basically the same as before.
G
We do all this work in the querier. Do we have any, let's say, higher impact, either on pulling things from the index, or, let's say, the more tenants you use, the more chunks you download in general? So are there any cases where you can basically blow up either your PVC on the queriers (okay, not anymore) and the index gateway, or blow your memory, because you download three times more, or ten times more, chunks for querying them, right?
A
D
A
Yeah, so it shouldn't really have a noticeable impact on... I mean, you can do the equivalent of saying "make 50 tenants run the same query at the same time," which, you know, can definitely cause a huge amount of query traffic, right? But it shouldn't cause an individual querier any kind of undue load.
G
E
It's better for operators, specifically in multi-tenancy mode, and should be just like a nice, easy quality-of-life improvement.
A
Okay, yeah. The people that I think benefit most: so, if you're an organization running Loki and you configure tenants per business unit, which is nice because then you can set different limits and sort of limit how much one business unit can impact an entire cluster, but you have audit or security use cases where you need to be able to run queries over all of those tenants, this enables that kind of functionality.
A
A
You know, basically to help give you some protections via limits. I mean, this is actually something that will probably help Grafana Labs: we tend to run largely large single tenants for our internals, but as our company grows, we see more and more that there would be some advantages to having separate tenants even internally, but still with good use cases for being able to query across all of them.
F
Yeah, I have a question, because you mentioned querying all tenants, like having this use case of multiple departments and, like, a single auditing user or whatever, who wants to query all of the tenants. Do we have, like, a wildcard, like a star, for all tenants, or do we need to specify them explicitly?
E
I might be mistaken, in that you can't pass that as the X-Scope-OrgID, but rather as the username for auth on a data source that you're configuring. So sorry, I misspoke there. Oh okay, okay.
B
H
For GEL, that's the TL;DR: that's how it works in general.
A
Yeah, no worries. I mean, the reality for Loki is: Loki doesn't know all of the tenants. I mean, you could ask every index file that it has what all of the tenants are, but basically, we rotate the index file every day, and each day could have any number of tenants in it. So a wildcard approach would either require that we go ask every index file in the time range what the tenants were, which is possible, or we'd have to track it separately, which is likely what GEL is doing.
A
B
A
If you're listening in from the community, and that's really important to you, I would say, you know, file an issue. Or write the PR, that's cool too.
I
Yeah, sure. So when you are configuring Promtail, you can specify to ingest data from a file, and the problem is that if you're pointing to a symlinked folder and you are actually looking for a file inside that symlinked folder, that would not work. So we fixed that for 2.6.
A
Yeah, specifically if the symlink was a relative symlink. So if the symlink was defined with dot-dot-slash to do directory traversal, it wouldn't work. I don't remember how it failed, but there was another function within Go. Somebody from the community fixed this also, which is nice. It's just a little bit scary to make changes like this, because you know you're changing the core way
A
We
resolve
sim
links
and
prompt
tail,
but
we've
not
had
any
trouble
so
far
in
our
testing,
but
so
now
it
just
uses
a
different
function,
which
I
believe
just
resolves.
The
relative
sim
link
to
be
absolute
and
then
uses
that
instead,
so
if
you
have
are
trying
to
tailor
file,
which
is
a
relative,
sim
link
should
work
in
2.6.
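For context, the affected setup is a Promtail scrape config whose `__path__` resolves through a symlink; the paths here are invented for illustration:

```yaml
scrape_configs:
  - job_name: app
    static_configs:
      - targets: [localhost]
        labels:
          job: app
          # /var/log/app may be a relative symlink (e.g. defined with ../);
          # tailing through it only works reliably from 2.6 onward.
          __path__: /var/log/app/*.log
```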
G
Yeah, there isn't any 0.1.0 yet. And first of all, I need to apologize: I should have been in a community meeting by December or January, when we officially merged the code into the Grafana Loki repository, but yeah.
A
I think that I should properly introduce Perry, as the very nice fellow from Red Hat that has created and spearheaded the Loki operator that was merged into the Loki repo in December, that time frame. Yes.
G
G
Yeah, thank you for the introduction, and yeah. Besides the Loki operator, which is not built by me only, but also with many other people across three companies... I'm a father of twins, so that's why I need to apologize that I haven't been here for quite so long. Anyway, I managed to get, yeah, a free evening from my wife today; she takes care of the kids, so I can be here and finally announce this, yeah.
G
G
It's a SIG inside Grafana Loki, with the same governance model, the same licensing, and the purpose of this project is to be a Kubernetes-native operator first. Although I work for Red Hat, and yes, we build OpenShift, and we sometimes have this OpenShift bias, isn't it, we try to build things for Kubernetes first. This works quite well so far. Yeah, a couple of highlights, because I should probably put a small slide deck together next time and give a proper intro.
G
But a couple of highlights here: this operator is currently capable of managing two types, or two sizes, of Loki clusters. We call them sizes because we know Loki is not easy.
G
I mean, if you are not an expert, it's not easy to put requests and limits on it without being the expert. So we say: okay, there is something we call 1x.small, which has some, let's say, defined CPU and memory requirements, and if you run it, it's a beast: you can ingest 200, 300, up to 500 gigabytes per day. And 1x.medium,
G
I think, where you can ingest two terabytes of logs per day. So it aligns well with what we have in ksonnet and upstream, but we have, let's say, a variation: what we call small and medium is probably close to what we have in the production ksonnet files. Yeah, it's a thing that has only memberlist and boltdb-shipper.
G
So it's not a general Loki operator that you can decide to use with Cassandra, Bigtable, whatever. It's a thing for most Kubernetes users that just want to run it on Kubernetes and say, "give me something like an S3 to write to", and you can pick any object storage currently; we support all the types, yeah. Another highlight here is: you don't get Loki, you get a LokiStack, that's what we call our CRD, because you also get a small reverse proxy in front of it.
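A minimal LokiStack resource might look roughly like the following; the apiVersion, field names, and secret wiring are best-effort assumptions here, so consult the operator's docs before relying on it:

```yaml
apiVersion: loki.grafana.com/v1beta1
kind: LokiStack
metadata:
  name: lokistack-sample
spec:
  # The t-shirt sizes discussed above
  size: 1x.small
  storage:
    secret:
      # Secret carrying the object-storage endpoint and credentials
      name: lokistack-s3
      type: s3
  storageClassName: standard
```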
G
That
can
do
authentication
authorization
for
you,
authentication
is
basically
you
can
configure
tenants
and
their
oidc
providers
and
authorization
can
either
be
a
little
bit
static.
G
Like
you
know
your
users,
or
it
can
be
something
like
open
policy
agent,
where
you
send
things
back
to
someone
implementing
opa
for
you,
yeah
and
basically
tennessee
is
included
in
the
crd,
so
it's
optional,
but
you
can
start
right
ahead
and
say:
tenant
a
goes
y
d
c
here
then
b
goes
y
d
c
there,
and
then
you
have
different
accesses
and
you
can
give
that
this
is
a
little
bit
of
what
actually
the
non
open
source.
G
Cortex
gateway
does,
but
I
think
by
we
are
by
far
smaller
than
what
the
cortex
gateway
does
with
grafana.
It's.
Let's
say
you
think
that,
and
it's
not
an
engine
x.
Definitely
not
it's
a
go
project
we
use
also
at
red
hat,
which
we
call
the
loki
state
gateway
yeah.
Besides
that
most
of
the
work
has
been
the
last
couple
of
months,
completing
pieces
that
make
loki
a
robust
or
the
loki's
take
a
robust.
G
Let's
say
package
like
we
have
alerting
rules
and
recording
rules
or
crds,
like
you
have
been
used
with
prometheus
rules
from
the
prometheus
operator
and
also
the
ruler
conflict
is
a
new
crd.
It's
basically
a
thing
that
gives
you
the
capabilities
to
connect
to
an
alert
manager
or
a
remote
right.
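To give a flavor of those rule CRDs, a hypothetical AlertingRule might look like this; the group/version and field names are assumptions to verify against the operator documentation:

```yaml
apiVersion: loki.grafana.com/v1beta1
kind: AlertingRule
metadata:
  name: app-errors
spec:
  tenantID: tenant-a
  groups:
    - name: app-alerts
      interval: 1m
      rules:
        - alert: HighErrorRate
          # LogQL expression evaluated by the Loki ruler
          expr: |
            sum(rate({app="payments"} |= "error" [5m])) > 10
          for: 10m
          labels:
            severity: critical
```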
G
So
it's
basically
configuring
the
ruler
for
the
actual
use
cases
like
sending
and
notifying
alert
ninja,
and
the
last
piece
is:
if
you
take
this
and
you
anything,
we
have
here
that
we
say
you
can
use
them
optionally,
it's
it's
everything
is
managed
through
feature
flags
which
you
can
when
you
install
the
operator,
you
can
say
give
me,
for
example,
tls
and
points
for
service
monitors.
Give
me
tls
for
grpc.
G
If
you
want
to
do
this,
and
we
have
let's
say
a
small,
let's
say
yaml,
that
has
all
the
feature
flags
and
we
use
it
in
openshift
because
we
know
in
openshift
you
have
a
prometheus.
You
have
tls
by
default.
G
So
it's
a
little
bit
of
automation
for
people
running
on
openshift
this
operator,
and
we
launched
this
as
red
hat
the
same
code
base.
We
just
only
take
the
golan
code,
build
it
on
the
go
compiler
on
rel
and
just
ship
it.
That's
all
that
we
do.
We
don't
have
any
downstream
repository.
G
At
least
I
do
my
best
not
to
have
ever
one
downstream
repository,
because
this
is
just
more
trouble,
so
we
develop
only
upstream
here
we're
a
very
small
zek,
four
to
five
developers
in
total.
Although
there
are
more
editors,
there
are
another
two
or
three
that
commit
to
there,
but
they
are
not
part
of
the
team.
Yet
we
haven't,
let's
say,
gave
them
permissions
yet
or
we
haven't
called
it
called
them.
Let's
say
we
have.
G
We
haven't,
take
taken
any
vote
to
include
more
members
here,
but
I
would
like
to
see
that
the
community
joins
and
tests
this
thing
from
what
I
can
tell
from
our
telemetry.
I
have
at
least
21
customers
currently
testing
this
in
tech
preview
at
redhead.
I
would
like
to
see
more
and
more
non-rated
people
testing
these
at
least
envisionness
ipm.
Isn't
this
so
you're
welcome
to
join
openpr's
issues?
G
Whatever
you
find
us
on
the
slack
with
on
loki
operator,
deaf
and
yeah,
you
are
welcome
to
join
and
bring
your
source
on
how
this
may
make
this
operator
even
better
for
to
brenda's
deployments
yeah.
That's
all
I
want
to
share.
We
are
here.
We
are
upstream
yeah,
come
and
join
us.
That's
my
key
message
for
today.
G
D
Different thing, oh good. I mean, that's still hard with twins; I mean, 18 months was hard with just one, so I can't imagine it gets too much easier. Yes, that's permanent. So yeah, my question there about limits: does the LokiStack currently impose any sort of per-tenant limits, or are the limits just sort of per cluster?
G
No,
there
are
no
pertainance
limits
you
can
defined
in
let's
say,
and
what
whatever
list
of
tenants
you
have
it's
in
the
crd
and
it
gets
replicated
into
the
limits
or
the
overheads
config.
If
you
use
any
limits
per
tenant,
the
gateway
per
cell
that
we
use,
we
currently
use
the
same
gateway
for
our
internal
still
not
operator
managed,
but
at
least
lucky
installations
and
rented,
and
we
are
running
currently
around
per
per
region
around
10
cannons.
So
this
is
a
piece
of
cake.
G
D
Got you. So when you're sizing your cluster, you sort of think about it at the holistic level; it's not necessarily on a per-tenant size.
G
Yeah, I mean, 1x.small can be... I mean, if you have 200 tenants but they still keep within the 200-to-500-gigabytes-per-day range, you are fine. I mean, it's still... one big noisy tenant can put pressure on the queriers, because in the 1x.small we have just two queriers; it's for small installations, basically. So yeah, it might blow up, yes, if you just overuse it. It's not meant to have too many tenants, but you don't need to...
G
D
Yeah, cool. And then I have another question: do you think it would be possible, and if so, maybe like a rough guess of how big of a lift it would be, to introduce some sizes that could deploy Loki in the SSD mode? Or do you think that would be too much of a lift, given the current architecture of the operator?
G
Although... we started with microservices only, because this predates SSD, and it makes it slightly easier for many Kubernetes pieces to deploy them. A little bit of background: one reason why we chose microservices was because we are hinting towards auto-scaling. For example, if you pick the size 1x.small, you cannot go and give more CPU or memory to each component.
G
We would like that: if we ever get something like KEDA finally production-ready across the board, we might go and say, okay, pick 1x.small and please let me scale to 1x.medium, something like that. Or, we think, especially around distributors, maybe introducing features like VPA, because they are a good candidate for vertical auto-scaling, and we would like to see this being, let's say, more flexible in that regard. As for SSD by default: introducing it isn't,

G
in my opinion, a big contribution; there just wasn't, let's say, a volunteer to do this yet. But it would basically be just a small code branch, having, let's say, a new package which has just two StatefulSets, one for the read path and one for the write path, and you can then say: if you pick 1x.small-dash-something, this is SSD.
G
Go ahead with it. Maybe, I'm sorry for talking too much, but may I just give one hint here: the schema here with 1x.small replicates a little bit how AWS instance types work. So you define 1x: it means you have just one Loki instance, and we can foresee that, for our tenancy customers, a 2x, not implemented yet, shouldn't be that hard. And then you can have medium; that gives you the size. And I assume there might be something like variations, like another dot: you have .ssd, and you get, let's say, 1x.medium.ssd, or if you drop it, you get it in the microservices mode, because this was there before. So it's easily extendable, in my opinion, and easy to contribute.
A
Google popped up: "are you talking?" All right. So the last thing on our agenda is just to talk about some things that are happening; a bit of shameless self-promotion.
A
In
addition
to
sending
me
birthday
cards,
you
can
all
join
a
webinar
that
I'm
doing
next
week
on
loki
configuration
settings.
I
haven't
actually
finished
all
the
material
for
this.
Yet
so,
if
there's
things
that
you
want
to
see,
I
don't
know
find
me
on
twitter
or
github
or
public
slack
or
something.
But
basically
what
I
want
to
cover
is
the
common
configurations
that
people
seem
to
get
tripped
up
on.
A
We've
done
a
lot
to
improve
defaults
over
time,
but
I
want
to
talk
about
that
and
maybe
what
you
have
as
an
existing
config,
where
you
should
remove
things
and
you
know,
rely
on
defaults,
how
things
have
changed,
configs
for
query
performance
ingestion,
maybe
a
little
bit
about
agents.
I
think
the
the
description
talks
about
agents,
but
I
don't
know
how
much
we'll
talk
about
that.
A
Yeah
come
check
that
out
and
then
that's
next
week
and
then
the
week
after
next
week
is
grafanacon
line,
which
is
also
free
and
virtual,
and
there
are
three
talks
in
there
that
talk
about
loki
so
be
sure
to
check
those
out
and
all
the
other
ones,
because
why
not?
But
there's
a
couple
links
here
for
those
talks
in
the
meeting
notes
if
you're
watching
this
on
youtube
and
I'm
realizing,
I'm
not
sharing
my
screen.
Oh
well,
I'll
link
the
meeting
notes
in
youtube
as
well.
A
A
A
Yeah, yeah, so all right. Well, this is a funny story; I'll finish off on a funny story. For like six months of this year... coming around towards, you know, Christmas or whatever, I was just having a small existential crisis about turning 40, and like, oh man, you know, like, this
A
Is
it
right
like
I'm,
halfway
turned
in
40
and
then
I
did
the
math
one
day
and
I'm
like
adding
it
up
on
my
fingers
four
times
and
I'm
like
oh
man,
I
I'm
returning
41
this
year,
I
turned
40
last
year
like
wait.
A
minute
did
I,
like?
Oh
yeah,
I
even
had
like
kind
of
because
you
know
pandemic
times
are
a
bit
weird
with
stuff,
but
actually
even
just
had
a
birthday
party.
You
know
for
me
for
my,
my
friends
so
I'll
be
41
this
year.
E
Thanks for sharing that, that's hilarious. I...
A
A
All right, thanks everybody, see you in a moment.