From YouTube: Loki Community Meeting 2021-07-01
A: Make sure we include this part in the YouTube upload.

B: How much intro banter before we kick off?

A: Yeah, yeah: what is it, July 1st, 2021? Loki community call, welcome everyone. We actually prepared slightly for this one, more so than usual, which I'm pretty excited about, and I think we're going to talk about the index gateway first (thanks, Sandeep, for that one). It's a new component in Loki that was put together pretty quickly, largely driven by finally migrating all of our internal clusters to boltdb-shipper, and it's, in part, a large cost-saving component.

A: So it's a pretty dramatic cost-saving benefit and has worked out very well. The life cycle on it has been pretty quick in terms of development, and we already have it in prod. Correct me if I'm wrong, but I think it's everywhere now.
B: Yep, and the big gain for us there, and why we were so keen, was that every querier was downloading the index. Some of our clusters, especially the very heavy multi-tenant ones, will build a reasonably large index over 30 days, so we tend to pre-download a lot of index files, and our PV costs, the disk cost to serve those queries, were getting a little untenable; every querier has to download the same data.

A: We run it as a StatefulSet, primarily to take advantage of disk volumes, but it basically pulls data that already exists in object storage and lets you do that in one place. It just pulls the index from object storage, hosts it locally, and then we're able to feed queries through this new component, and our queriers will come and use that now.
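To make that concrete, here is a rough sketch of what wiring queriers to the index gateway can look like. The target name and config keys below are assumptions based on recent Loki versions, so check the configuration docs for your release before relying on them:

```yaml
# Run one StatefulSet as the gateway (assumed flag):
#   loki -target=index-gateway -config.file=loki.yaml
#
# Then point the boltdb-shipper client on the queriers at it, so only the
# gateway pods download the index from object storage:
storage_config:
  boltdb_shipper:
    shared_store: gcs
    index_gateway_client:
      # gRPC address of the gateway StatefulSet (hypothetical service name)
      server_address: dns:///index-gateway.loki.svc.cluster.local:9095
```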
C: Sorry, yes, I heard it. I was just asking because I was curious: is it offloading the statefulness to this new component, basically?

A: Yes. There are generally two parts here around state. We run it as a StatefulSet in Kubernetes, which allows us to persist disk volumes, so you can reschedule them across nodes and things like that, which is very helpful, but you could still kill these things.

A: You can delete the disks, and all the data is actually hosted in object storage, so they would just start up anew and pull it again, and in that sense there's no data-loss problem. That's the other part we generally care about for state, and it's not really a problem.
A: Yes, there's an internal hackathon going on right now at Grafana, and Jen and I have been working together on a fun little tool. We think it'll be helpful for the OSS community, and we also want to use this as a rough dry run for what we'll have to present internally next week.

A: This is all very rough around the edges, so bear with us. But take it away, Jen.

C: Sure, yeah. Our team theme, in case you weren't familiar, is a plush shark, so there are going to be a lot of photos of that through the slides. But generally, the project that Owen and I wanted to take on here is helping take some of the Grafana Labs expert knowledge around how you would size and set up a cluster, and making that knowledge more accessible to other people. So, next slide, Owen.
C: So basically, we've already seen some issues in GitHub with people asking: hey, given a certain log ingest volume, what should my cluster look like? What kind of resources am I going to need to run this? And right now there's not great information out there.

C: So what we see is people either trial-and-erroring, where you either over- or under-provision, each of which has downsides, or maybe just giving up and saying: hey, I just don't understand this enough.

C: So yeah, I just screenshotted a couple of the issues we saw where people were asking for some of this support, and this is meant to help with that, because otherwise you'll be like the very sad shark in this photo and just feel very overwhelmed. And like I said, even within Grafana Labs...

C: ...we maintain a spreadsheet to try to help with some of this, but the spreadsheet can go stale, and even our spreadsheet is probably not as detailed as you'd need to design an actual system. So we wanted to provide that. The other goal here, another pain point that's somewhat specific to Grafana Labs, is that even though we have a bunch of people with a bunch of knowledge around how Loki should be run and sized...
C: ...we actually run many different Loki clusters, which are kind of special snowflakes. And then the on-call engineers, if they ever have to increase a cluster's size, are having to do some math to figure out: okay, how many more of this component do I add? So we just wanted to automate that for them. Therefore, we have what we are calling (definitely a fake trademark) the Smart Sizing Tool, where you, as a user, are able to say: here's my ingest volume.
A: And yeah, going down the list here you can see things like required node counts. These are largely derived from our anti-affinity rules internally; you don't want to schedule ingesters alongside other ingesters, for instance. Then you can also get things like cluster benchmarks: memory, both the floors and the ceilings, CPU, and then storage, both on disk for your components as well as how much you'll need in object storage for whatever your retention period is.

A: Ultimately, this comes out with both a floor and a ceiling for costs, based on all these constants, which are configurable. But I think, Dan, you said they're based off... yeah.

A: Right now, yeah, which is how we run a lot of stuff internally; most of our infrastructure comes out of Google. And then you can also break it down per component.
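To make the shape of that arithmetic concrete, here is a minimal sketch of the kind of math such a sizing tool does. Every constant below (per-ingester throughput, replication factor, compression ratio) is a hypothetical placeholder, not a number from the actual tool:

```python
import math

def size_cluster(mb_per_sec: float, retention_days: int) -> dict:
    """Rough cluster sizing from ingest volume (all constants hypothetical)."""
    INGESTER_MB_PER_SEC = 5.0   # assumed sustainable throughput per ingester
    REPLICATION_FACTOR = 3      # each write goes to this many ingesters
    COMPRESSION_RATIO = 0.1     # assumed stored size vs. raw log volume

    # Enough ingesters to absorb the replicated write load, never fewer
    # than the replication factor itself.
    ingesters = max(REPLICATION_FACTOR,
                    math.ceil(mb_per_sec * REPLICATION_FACTOR / INGESTER_MB_PER_SEC))
    # Object storage: raw volume over the retention window, after compression.
    object_storage_gb = mb_per_sec * 86400 * retention_days * COMPRESSION_RATIO / 1024
    return {"ingesters": ingesters, "object_storage_gb": round(object_storage_gb)}
```

A real tool layers node counts, CPU and memory floors/ceilings, and per-component cost on top of the same style of calculation.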
A: So you can see memory and CPU floors and ceilings, and how many replicas. This is probably where the tool stops for most people; it can be used to resize clusters over time. But the really fun part, for me at least, is what I want to show next. We wrote this as a library internally, and we want it to be, at least as an initial driver within Grafana Labs, a continuous planning tool that we can hook into our CI and CD systems, to make sure that our clusters are always appropriately sized for the volume we expect, and also to force us to continually keep this tool up to date.

A: That's my sales pitch internally right now. We can also do things like templating. For those that may or may not know, we vend a jsonnet library for Loki, which is the tooling we use internally to deploy all of our clusters, and you can actually take this cluster definition that we generate and overlay it into a set of jsonnet overrides, which I was frantically trying to hook up right before this call.

A: The idea is that you'd add this into a jsonnet specification and it would size all of your components, both in terms of replicas as well as memory, disk, CPU, that sort of thing, according to whatever throughput and retention period you expect. And that's the end of the demo, so we'll go back.
C: Oh yeah, I think that covers most of it, and I want to make sure we leave time for other folks. The only thing we wanted to hit on, on the next slide, is that we will eventually probably put this out there, and what we'd like to see, maybe, is folks in the community running with it with whatever they use for deployment.

C: Maybe you don't use jsonnet, but maybe you want to write some sort of Helm adapter that can take this output and give you the Helm chart you want, or you just pipe it directly into whatever your manifests are. You can see it's generally peak expected usage; in a Kubernetes world, that pretty much maps to the resource limits for a pod. So yeah, excited to see.

A: Yeah. Can you auto-scale clusters on these values?
A
It
kind
of
depends
on
what
your
internal
tooling
looks
like
like
these
aren't
like
auto-scaling
groups,
in
something
like
aws,
for
instance,
right,
like
they're,
they're,
more
they're,
more
constant,
like
generated
definitions,
depending
on
kind
of
what
your
capacity
planning
would
look
like,
and
so
we
don't.
Actually
we
don't
use
a
ton
of
auto
scaling
internally.
A
We
use
a
lot
of
kind
of
like
ancillary
tools
to
do
ci
and
cd
based
on
things
that
are
more
or
less
automated,
but
not
quite
the
same
cloud
primitives
that
you
may
be
familiar
with,
and
this
really
helps
us
do
things
like
like
scaling
on.
You
know,
maybe
metrics
that
aren't
exposed
by
cloud
providers
right.
B
Thanks
someone
and
jen
that
was
I'm
pretty
excited
because
I
know
from
an
operations
standpoint
I
would
say
we
probably
err
on
the
side
of
being
over
conservative
on
how
we
scale
things
so
it'd
be
nice
to
have
that
be
a
little
bit
more
reasonable.
B: What are the next steps? If someone wants to see the progress of this or keep up with it, where can they go look?

A: So I was deliberately sidestepping questions like: is this going to be in the main branch? Are we going to vend a tool? Because it's largely just been a side-branch work project that Jen and I have been putting together over a few days.
A: Yeah, we're erring on the side of demoing very early here. My last commit was tagged "work in progress", probably five minutes ago, so cut us a little bit of slack.
B: So I added an update here. Kavi's not on the call, but thanks to Kavi for fixing this. I don't think we have an issue for it; well, actually, we probably have numerous issues for it, to be honest. The grafana.com docs for Loki default to something we call "latest", which was pointing at the unreleased versions of the docs. Basically, whenever we commit something to main it gets auto-deployed to the docs, and that's very confusing, because the default landing page basically showed docs for stuff that wasn't released. That has been fixed now.

B: So when you go to grafana.com/docs/loki you will still land on "latest", but "latest" now returns the latest release docs. This should be consistent with how Grafana is doing their docs as well, and there's a dropdown in there where you can pick "next" if you want to see the unreleased stuff, basically the things that are continuously deployed. Hopefully that simplifies some of the problems we've seen and saves people some time.
B
I
don't
think
we've
talked
about
the
pattern
parser
that
cyril
put
together
fairly
recently
yet
because
it
all
happened,
probably
within
the
span
of
the
last
month,
or
at
least
us
deploying
it.
So
let
me
find
the
docs
for
that,
but
the
pattern
parser
is
exciting
because
it
is
the
fastest
loki
parser
to
date.
So
it
is
now
faster
than
the
log
format
and
json
parsers,
which
are
faster
than
the
regex
parser.
Parser
is
the
way
you
turn
your
log
content
into
labels
at
query
time,
and
I
definitely
recommend
you
check
this
out.
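For reference, a pattern expression captures fields between literal text, with `<_>` as a throwaway. A sketch for NGINX-style access logs follows; the label names are illustrative, not from the call:

```logql
{job="nginx"} | pattern `<ip> - - <_> "<method> <uri> <_>" <status> <size>`

# Roughly equivalent regex version: slower, and far more tedious to write.
{job="nginx"} | regexp `^(?P<ip>\S+) - - \[[^\]]*\] "(?P<method>\S+) (?P<uri>\S+) [^"]*" (?P<status>\d+) (?P<size>\d+)`
```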
A: It's good for things that have consistent structure, where historically you've had to write super complex regex parsers, which are slower, as I mentioned, but also just incredibly tedious to write.

B: Right. So instead of having to write a regular expression for something like the common log format, which I've done (it's terrible; I'm sure you've done it before too), and the common log format isn't necessarily common across different distros. So yes, this works very well for anything space-delimited, or really anything-delimited; CSV would probably work just fine too, as a matter of fact. So that's another use case. This is unreleased.
B
If
you
want
to
play
around
with
this
well,
you
gonna
sign
up
for
grafana
cloud
because
it
is
actually
running
on
a.
B
B
We'll
sort
of
caveat
this,
it
has
been
a
couple
months
since
our
last
release-
and
you
know,
don't
just
run
rates
of
prod
with
this
image
that
I'm
about
to
paste.
B
B
But
we
sort
of
have
to
figure
out
a
good
way
to
communicate
because,
like
once
in
a
while,
we'll
pull
them
if
we
have
trouble
with
them,
but
typically
okay,
release
makes
it
through
to
prod.
There
might
be
multiple
iterations
of
them
too,
if
we
sort
of
re-release
them
in
lower
environments,
if
we
run
into
troubles,
but
that
image
there
is.
B
That
so
run
this
in
dev,
first,
just
likely
config,
flag
changes
or
other
changes
things
you
just
want
to
be
aware
of,
but
the
pattern
parser
check
it
out,
like
I
said
you
can
go
sign
up
for
a
free
account
on
grifano.com
or
could
find
a
cloud
and
play
around
with
it
there
or,
if
you're,
already,
using
that
or
check
out
that
that
docker
image.
B
Alongside
with
that,
we
have
so
just
briefly
talk
about
2.3.
B
I
think
that
our
tentative
plan
it's
this
is
the
time
of
year
that
well
between
our
hackathon
internally
and
griffon.com,
or
graphonic
online
last
couple
weeks
and
pto
that
things
do
slow
down
just
a
bit,
but
I
think
the
end
of
this
month,
maybe
early
next
month
from
a
feature
set
2.3,
is
actually
going
to
be
really
exciting.
B
I
just
want
to
give
us
a
little
bit
more
time
internally
on
things
like
custom
retention
and
deletes,
make
sure
we're
happy
with
those
the
pattern
parser
and
that
we
should
be
able
to
release
those
either
in
a
you
know,
beta
fashion,
which
basically
means
that
you
know
it
should
be
in
good
shape
or
just
un
unencumbered
with
a
a
beta
flag.
So
that's
kind
of
what
we're
doing
is
getting
our
ducks
in
a
row.
B
Also,
karen
we
met
last
month
is
helping
us
actually
generate
some
release,
notes
and
better
documentation
around
the
release
and
make
sure
that
the
features
that
we're
releasing
have
documentation.
I
actually
just
went
looking
for
like
the
index
gateway
docs,
and
I
don't
think
we
have
anything
for
that.
Yet
so
there's
some
gaps
that
we
want
to
close
there
to
do
a
little
bit
better
job
of
giving
people
documentation
for
releases.
B
All
right,
so
that's
the
sort
of
story
for
2.3,
I'm
not
sure
who
added
the
8.1
thing
about
loki
so.
D: Yeah, it's me; I should have added my name. I was thinking about sharing some work that we've been doing that is very likely going to be part of the Grafana 8.1 release, which is scheduled for mid-July, so it should be available in two to three weeks. I'll share my screen and show you what we've been working on, because I feel like for this release...

D: ...we tackled a lot of things that were also mentioned here on community calls, and there were a bunch of issues that were upvoted quite a lot. So I hope I'll make everyone happy with the features we've done. Here is the list of the things from our backlog; I'll walk you through the issues and feature requests that are either already merged into master or are at the PR stage.

D: We're getting feedback on those, and they should be merged quite soon.
D
So
the
first
thing
is
and
like,
as
you
can
see,
some
members
of
our
community
has
already
has
already
seen
it,
because
it
was
like
very
upvoted
feature,
and
that
is
support
for
this
kind
of
pattern,
where
you
are
basically
when
you
are
requiring
for
temp
template
when
you
are
using
template
variables.
D
You
are
using
the
series
endpoint
in
loki
and
it
allows
you
to
basically
create
queries
like
this,
and
you
are
also
able
to
use
nested
templating,
which
basically
add
support
for
this
kind
of
syntax,
where
you
are
able
to
use
the
log
stream
selector
and
also
specify
label.
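In Grafana's query-variable syntax for the Loki data source, the shapes being described look roughly like the following. The label and variable names are illustrative; the second form is the one backed by Loki's series endpoint, and nesting `$namespace` inside it is the nested templating mentioned above:

```
label_values(namespace)

label_values({namespace="$namespace"}, pod)
```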
D
Another
thing
that
I
feel
like
it
was
very
confusing
for
our
new
users,
but
also
for
a
lot
of
users
was
the
fact
that,
if
you
use
in
your
lock
query
parser,
we
suddenly
had
a
bunch
of
labels
and
the
filtering
didn't
work
for
the
parsed
labels
and
it
threw
error
because
we
were
expecting
it
to
be
the
real
label
that
it
was
part
label
and
we
have
tackled
this.
D
Actually,
cereal
came
up
with
idea
how
to
how
to
kind
of
fix
this
without
having
to
change
anything
in
rocky
and
basically
not
much
in
grafana.
D
D
Let
me
see
this
should
be:
oh,
no
because
I'm
not
using
the
I'm
not
using
the
parser.
So
let
me
use
the
parser
run
query,
and
now
we
have
more
labels
and
we
don't
know
which
are
actual
labels.
We
hate
our
parse
labels,
but
we
can
basically
parse,
for
we
are
basically
able
to
filter
for
parsed
and
also
for
actual
labels
by
this
syntax.
D
So
that's
another
thing
fix
highlighting
for
logs
when
using
backticks
we
haven't
updated
the
logic
that
was
handling
the
highlighting
in
quite
a
while
and
locally
changed
a
lot.
So
we
weren't
handling
a
case
when
there
were
backticks.
D
So
basically
that's
another
change
where
we
are
handling
this
case
when
you
have
backpacks
or
more
complex
queries,
and
you
are
able
to
show
context
in
this
case
because
highlighting
is
connected
to
the
showing
of
context
in
logs
panel,
we
have
added
option
to
show
common
labels
so,
as
we
have
bunch
of
toggles
where
you
can
decide
like,
if
you
would
like
to
see
log
details,
unique
labels
and
so
on,
we
have
added
option
to
see
common
labels
as
well.
D
We
have
added-
or
it's
still
open
pr,
but
we
are
soon
to
merge
this,
where
we
are
adding
a
range
variable
that
is
currently
supported
in
prometheus,
but
it
was
requested
also
for
loki.
So
it
adds
support
for
all
of
these
variables.
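Assuming these are Grafana's built-in range variables (`$__range`, plus the `$__range_s` and `$__range_ms` forms), a Loki dashboard query over the currently selected time range would look something like:

```logql
sum(count_over_time({job="myapp"} |= "error" [$__range]))
```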
D
This
is
also
open
pr.
We
are
still
figuring
out
the
like
the
final
ux
ui
staff,
but
basically
we
are
adding
an
option
to
prettify
json
and
possibly
also
lock
format,
log
lines,
so
it's
more
readable
for
the
user.
So,
instead
of
just
seeing
like
this
long
string,
you
are
able
to
see
nicely
formatted,
jason
vlog
and
the
last
thing
I
know
there
there's
quite
a
lot
of
nice
updates.
D
So
I
think
81
is
definitely
worth
to
try
as
soon
as
possible,
but
the
last
thing
is
adding
ad-hoc
filtering
in
dashboards
and
that's
another
thing
that
was
possible
in
primitives,
but
not
in
logi
loki.
So
we
have
added
this
ad
hoc
filter
where
you
can
basically
and
hopefully
specify
label
and
value
and
that
label
and
value
will
be
added
to
all
of
your
loki
queries
so,
for
example,
to
just
demo
this
we
can
use
the
debug.
A: That's amazing. Could you specify that for things that aren't labels, like parsed labels, too?

B: So exciting. A couple of things: the backticks, that's a Loki pro tip if you've never come across it before. If you use backticks, you don't have to escape double quotes. That's really handy, especially inside line_format; I use that one a lot, where the line_format itself will be wrapped in double quotes. If you use templating functions inside your line_format, or regexes, backticks mean you don't have to escape. Very, very handy.
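A small example of the tip (stream and field names are illustrative): the two queries below are equivalent, but the backtick string needs no escaping:

```logql
{job="app"} | logfmt | line_format "{{.level}}: \"{{.msg}}\""
{job="app"} | logfmt | line_format `{{.level}}: "{{.msg}}"`
```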
B: I've got a window open; yeah, just throw each of those PRs in there, because that's fantastic. I'm excited; I didn't realize all that stuff was getting fixed. The range one in particular I'm really excited about, because I still use Loki as a Prometheus data source to get access to range, and Ivana is tired of me doing that.

D: I'm not sure if it will get into 8.1. I think it might, but if not, then 8.2. It's definitely something we plan to work on very soon.

B: Awesome, yeah. I run into that one. There are a couple of use cases around Prometheus where I force the step: if you want to do hour-bucket or day-bucket aggregations, you can force the step and the range in the query to both be one day, and that's really handy. So it'll be really nice to be able to do that in Loki, in Explore and/or in dashboards.
B
Look
at
all
that
lists
the
json,
pretty
fine
log
ones.
It's
nice
to
have.
I've
actually
found
myself
when
you
use
the
json
parser
and
you
expand
the
log
line.
It's
almost
easier
in
some
ways
to
read,
because
the
labels
will
basically
list
the
contents,
but
that's
not
perfect,
because
our
json
parser
doesn't
handle
arrays
currently.
So
it
ends
up
just
sort
of
dropping
that
data,
and
that's
not
the
best
experience
so
having
a
pretty
fire
will
be,
would
be
nice.
D
C
D
Like
a
toggle
for
all
of
them
could
be
nice.
B
D
B
Really
clever
way
to
we,
we
have
a
long-standing
issue
around
extending
the
loki
api
to
be
able
to
disambiguate
those
parsed
versus
non-parse
labels,
but
we
just
haven't
really.
I
don't
know
if
there's
a
real
reason
other
than
revving
apis
is
kind
of
annoying,
so
this
is
kind
of
nice
to
not
have
to
just
kick
that
can
down
the
road
a
little
farther.
B: We should clean that up; it's not experimental, it's extremely stable. It is by far the most stable part of Loki.

B: Okay, I can commiserate with whoever that was.

B: Yeah, and actually, for what this is worth, we have a bug somewhere with Grafana 8 or our infra, and one of our environments...
B
Right
now
is
timing
out
queries
with
a
502
after
like
10
or
20
seconds,
and
I
can't
figure
out
where
it
is,
but
there
are
a
whole
series
of
timeouts
that
exist
between
rafana
and
loki
and
within
loki,
and
it
goes
something
like
grafana
has
a
data
source
timeout
the
founder
has
a
request:
timeout
there's
the
data,
the
data
source
proxy
timeout
is
the
first
one
that
I
just
said
and
then
a
request
time
out
on
grafana
as
a
whole.
I
don't
remember
exactly
where
that
one
plays
it's
in
the
server
config.
B
I'm
pretty
sure
there
is,
if
you
say
like
then
from
there.
What
we
run
into
like
I
at
home
have
nginx
between
as
a
reverse
proxy.
You
have
reverse
proxy.
Reverse
proxy
is
going
to
have
timeouts
if
you're
using
nginx,
there's
a
read,
reverse
proxy
and
right
reverse
proxy
that
you
both
need
to
adjust.
B
If
you
have,
let's
see,
is
there
another
one
in
there
I
mean
you
could
have
as
many
as
you
have
components
between
so
like
we
run
in
google,
and
we
have
google
load,
balancer
sit
in
front
and
those
have
timeouts
and
ultimately
you
get
to
loki
and
loki
has
a
few
timeouts
in
here
too,
there
are
http
request,
timeouts
in
the
server
section.
There's
query
timeouts
on
the
query
engine,
and
there
are,
I
believe
there
are
grpc
timeouts,
although
you,
I
think
you'd
be
hard-pressed
to
hit
them,
but
anything
can
happen.
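As a rough map of that chain, the knobs usually involved look like the following. The names are from memory and may differ across versions, so verify each against its component's docs before copying:

```
# Reverse proxy (nginx) in front of Grafana:
proxy_read_timeout 300s;
proxy_send_timeout 300s;

# grafana.ini, data source proxy timeout (seconds):
[dataproxy]
timeout = 300

# Loki config, HTTP server timeouts:
server:
  http_server_read_timeout: 5m
  http_server_write_timeout: 5m

# Loki config, query engine timeout:
querier:
  query_timeout: 5m
```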
B
So
that
is
a
poor
documentation,
but
I've
actually
sort
of
recently
ran
into
this
myself.
I
wanted
to
write
a
blog
post
about
it
because
I
spent
like
a
whole
day
chasing
doubt.
So,
if
you
use
nginx,
for
example
as
and
you
set
a
pool
of
upstreams,
it
has
a
default
timeout
in
there.
That's
for
the
upstreams
is
different
than
the
reverse
proxy
timeouts,
and
it
has
a
really
neat
behavior
of
when
it
times
out.
It
tries
the
next
upstream
silently.
B
So
that's
nothing
to
do
with
grafana
or
loki.
It's
just
an
nginx
behavior,
but
you
see
context
canceled
anywhere
in
your
logs.
That's
likely
because
something
closed,
the
tcp
connection
between
you
and
grafana
and
that's
very
likely,
your
reverse
proxy
timeouts
grafana
itself
should
log
at
a
debug
level
when
the
reverse
proxy
time
or
the
data
source
proxy
times
out
and
loki
will
log
one
of
the
other
things
you'll
see
a
lot
is
loki
will
actually
return.
A
query
result
after
grafana
has
timed
out.
B: Gateway-fronted querier, etcetera. Yeah, within Loki, if you're running it in microservices mode, I don't think you're likely to run into much in the way of timeouts; most commonly you'll run into gRPC resource-exhausted errors, where the defaults in some of the cases for gRPC are like 16 megs, and big queries can exceed that, yeah.

A: We need to document this, both for the public and for some of our on-call folks, yeah.
B
Like
play
boxing
stuff
internally,
I
know
I
know
that
that
the
defaults
in
loki
we
talked
about
this
couple
weeks
ago,
a
little
bit
and
I
think
we
sort
of
tentatively
agreed
that
we
should
try
to
favor
the
defaults
to
probably
be
closer
to
how
our
operating
experiences
are.
I
know
that
there's
a
number
of
cases
where
some
of
these
have
diverged
and
we've
not
gone
back
to
sort
of
close
that
gap
like
we
added
override
in
our
config,
but
it's
not
in
the
default
case
on
it
or
it's
not
the
default
for
loki.
B
So,
yes,
sorry,
timeouts
are
annoying
and,
like
I
said,
we
have
one
now
that
I
can't
quite
figure
out.
It
did
start
with
grafana
eight.
That
could
be
a
coincidence
or
it
could
be
that
there's
a
bug
somewhere,
not
necessarily
in
grafana
a
but
interestingly,
it's
not
really
that
either,
because
I
have
two
grafana
eights
connected
to
the
same
data
source
and
one
does
this
and
the
other
one
doesn't
so
not
sure
yet
get
back
to
you.
B
Oh,
I
don't
have
much
more
to
say
about
timeouts
other
than
there's
a
lot
of
them
and
we
could
certainly
use
to
have
more
tools
to
help.
I
would
say,
like
I
said,
watch
out
for
context
cancelled
on
the
grafana
logs
or
in
loki's
logs,
because
that
often
indicates
something
has
closed.
The
connection.
A: Yeah, pretty happy we're finally starting implementation on removing the ordering constraint for Loki. This goes way back to some of the early issues, like two years ago, and then probably a year and a half ago Ed and I sat down one night, ran through the motions to see whether this was feasible, and realized: oh hey, it could actually work pretty well. It's been a very slow process to close that loop.

B: Really trying to stretch this out, maybe to Loki 3 in the fall. I'm just kidding.
A
It
should
make
everything
a
lot
easier,
for
I
mean
people
running
not
who
aren't
running
prom
till
or
are
running
on
ephemer
like
hyper
ephemeral
infrastructure
like
it
like
lambda
functions
or
who
just
spin
up
infrastructure
for
jobs
very
quickly.
You
know,
with
a
high
degree
of
high
frequency,
it'll
help
people
using
other
agents
like
fluent
that
don't
really
account
for
loki's
previous
order
and
constraint
very
well
or
people
who
want
multiple
aggregation
layers
for
their
logging
pipelines.
It
should
just
make
a
whole
a
whole
lot
of
use
cases
dead,
simple
and
yeah.
B
Worth
pointing
out
that
you
know
the
biggest
reason
that
it
doesn't
exist
now
is
because
it's
two
things:
it's
a
very
nice
simplification
for
loki's
code
base
and
it's
the
most
performant
way
to
ingest
logs.
So
by
removing
the
ordering
constraint,
it
will
come
at
a
cost
right,
so
inserts
are
going
to
be
slower
and
memory.
B
Consumption
on
the
injectors
is
going
to
increase,
so
that's
kind
of
what
owen's
been
trading
off
is
trying
to
figure
out,
like
you
know
what
the
exact
story
for
out
of
order
that
we
want
to
support
is,
like
you
know,
in
infinite
ordering
type
things
loki's
actually
pretty
good
at
this
already
like
chunks
themselves,
can
be
sort
of
whenever
it's
just
inserting
within
one
chunk
needs
to
be
in
order,
so
I
mean
so
far.
I
guess
you've
seen
some
benchmarking
here
owen
to
see
what
you
know.
A
Yeah
it
honestly
things
are
looking
very,
very
good
at
the
moment
in
terms
of
like
associated
costs,
resource
costs
that
are
looking
to
be
pretty
negligible,
but
you
know
we'll
have
to
get
a
little
bit
farther
along
before.
I
say
that
with
a
high
degree
of
confidence,
but
without
a
shark
in
your
lap
yeah
exactly
but
a
initial
initial
kind
of
feedback
looks
like
no
one's
gonna
have
to
alter
any
of
their
deployments.
A
So
hopefully
we
can
finish
that
way.
A: Yeah, I mean, one of the design goals here is to make this as close to zero-cost as possible for things that do come in in order. Okay, there's going to be a little bit of cost, but that use case is actually looking pretty strong, and in the parts of the code base that would change, especially for in-order writes, we're making certain things slower, but it's such a small part of the pie anyway. This all happens on the ingesters, and in terms of the ingesters' CPU cycles, for instance, that part looks pretty negligible at the moment.
A: We'll probably talk about this tool's progress, or whatever happened to it, at the next call.

B: Yeah, all right, let's call it there. Thanks, everybody. I'm still excited to see more folks join in, considering we're still not doing a great job of advertising this. It feels like it sneaks up on you; months sneak up on you, where you look at the calendar and it's like: didn't we just do this yesterday, last week? That's good. We've got a lot going on, a lot coming. 2.3 is going to be exciting too.

B: I'm very excited for Loki's future. So thanks, everybody; we will see you in a month. Take care. We need outro music. See ya.