From YouTube: Grafana Mimir Community Call 2022-07-28
Description
Join our next Grafana Mimir community call: https://docs.google.com/document/d/1E4jJcGicvLTyMEY6cUFFZUg_I8ytrBuW8r5yt1LyMv4/edit
Learn more at https://grafana.com/oss/mimir/ and if all of this looks like fun, feel invited to see if there’s a role that fits you at https://grafana.com/about/careers/
A
I guess I'm going to start going through the agenda. First up we have the 2.2 release. One of the big features addresses probably the thing I hear about most frequently with Prometheus: people being tripped up by sending samples out of order. This release adds experimental support for out-of-order ingestion of samples, which has been a whole bunch of work by a bunch of people who I don't think are on this call. But it's pretty exciting. It's been a long time in the making.
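As a rough illustration of how the experimental feature might be enabled, here is a sketch of a per-tenant limit; the option name is my assumption, so verify it against the Mimir 2.2 configuration reference before using it:

```yaml
# Hypothetical sketch: accept samples up to 30 minutes older than the
# newest sample in a series. Option name is an assumption; check the
# Mimir 2.2 docs for the exact key.
limits:
  out_of_order_time_window: 30m
```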
A
One of the other big things, which has been sort of a focus for us these last few months, is ease of use. Marco was actually doing a lot of work on this, coming up with a standardized format for errors and giving each error an ID, so you can easily look it up in the documentation and figure out: oh, this is what's wrong, and these are the actions I can take to remedy it.
A
The bucket index prefix: I'm not sure what that feature is. Dimitar, do I remember correctly that you were working on that, or was that something else? The bucket index... yeah, bucket index prefix. It's on the agenda, but I'm not sure what it actually means.
D
Yeah, so that allows you to use the same bucket for different components: block storage, the ruler, and the Alertmanager. It basically prefixes all files that these components upload to the bucket with the prefix you've configured, so they no longer interfere. I think there was a problem with the compactor discovering rules and the ruler trying to...
C
The block storage on one side and the ruler and Alertmanager storage on the other basically have some conflicting prefixes if you run with the default configuration. So you basically have to store the blocks in a dedicated bucket, and then the ruler and Alertmanager can live either in dedicated buckets or both in the same bucket. By introducing a prefix for the block storage, you can avoid that by setting the prefix to some value like, for example, /blocks.
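As a hedged sketch of what sharing one bucket could look like, with each component under its own prefix; the `storage_prefix` key name is my assumption, so consult the Mimir docs for the exact option:

```yaml
# Hypothetical sketch: three components sharing one S3 bucket under
# different prefixes. Verify option names against the Mimir docs.
blocks_storage:
  storage_prefix: blocks
  s3:
    bucket_name: mimir
ruler_storage:
  storage_prefix: ruler
  s3:
    bucket_name: mimir
alertmanager_storage:
  storage_prefix: alertmanager
  s3:
    bucket_name: mimir
```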
D
Also,
some
club
providers
have
pricing
per
bucket,
so
for
some
people
it's
actually
cheaper
to
just
use
one
bucket-
or
at
least
that's
some-
that's
what
some
community
member
claimed.
A
Then next up is faster ingester rollouts due to WAL replay improvements. I think Jesus and maybe Brian worked on that.
B
There is actually another pull request in Prometheus right now that's revamping and rewriting that entire code. It's actually removing the loop that waits and replacing it with a different implementation, making it, I think, a bit more efficient. But that's still being worked out, so maybe that's another improvement on this coming in the future.
A
Very nice. Next up, store-gateway performance optimizations. I can go over this. There are a few improvements to how we use memcached in the store-gateway that should reduce the number of connections and make things a bit faster, and Steve has a change to the way the store-gateway uses mmap that should reduce the frequency of those hangs people have been experiencing, where the store-gateway becomes entirely unresponsive.
E
Yeah, I'll go over the highlights. I talked about most of these in the previous community meeting as upcoming. So, Helm is a tool to deploy software on Kubernetes, and we are packaging Mimir for the Helm tooling. In that package we implemented a couple of features, such as built-in meta-monitoring support, which means that you can set up your Mimir to monitor itself and send its metrics to, for example, Grafana Cloud. That means that even with a free-tier Grafana Cloud account you can start having your own meta-monitoring for your Mimir, which is useful for, you know, performance management and fault management, meaning troubleshooting.
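As an illustration, meta-monitoring might be pointed at a remote Prometheus-compatible endpoint roughly like this; this is a sketch against the mimir-distributed Helm chart, and the key names and URL are assumptions to verify against the chart's values reference:

```yaml
# values.yaml sketch: ship Mimir's own metrics to a remote endpoint
# such as a Grafana Cloud Prometheus instance. Key names assumed;
# credentials would be supplied via a Kubernetes Secret.
metaMonitoring:
  serviceMonitor:
    enabled: true
  grafanaAgent:
    enabled: true
    metrics:
      remote:
        url: https://prometheus-us-central1.grafana.net/api/prom/push
```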
E
So that's a pretty cool feature. The other bigger change is that we made improvements to the chart so that you can install it on OpenShift. That was mainly requested by enterprise customers, because OpenShift is quite prevalent, especially among financial institutions and such. In connection with that, we've changed how we use memcached. We used to integrate it via a subchart, but that subchart had some issues lately, and it also prevented us from having enough control to do this for OpenShift, so we are now managing the memcached processes directly from the Mimir Helm chart. Then there are improvements to how we manage the configuration; we were asked many times to make that easier.
In the previous releases of the Mimir Helm chart you had to basically copy the whole configuration from the preset and make modifications, and it was a bit complicated. Plus, that configuration is basically a big template, to support a lot of things. Now you can actually just modify parts of it in a structured way, through normal YAML, not a text field. You can also now keep that configuration in a ConfigMap instead of a Secret, which is useful when you upgrade, because you can use helm diff to actually see what's going to be changed. But it also means that, you know, a ConfigMap is not really meant to store secrets and credentials.
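The structured override described above might look roughly like this in the chart's values; this is a sketch, and `structuredConfig` and `configStorageType` are my best guesses at the key names, so verify them against the chart documentation:

```yaml
# values.yaml sketch: override a single setting instead of copying the
# whole config template, and render the config into a ConfigMap so
# `helm diff` can show changes on upgrade. Key names assumed.
mimir:
  structuredConfig:
    limits:
      max_global_series_per_user: 300000
configStorageType: ConfigMap
```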
E
Another thing that we turned on is that we can now expand environment variables in the configuration, so you can store your credentials in an actual Secret, or Vault, or wherever is safe, and inject them into the configuration, which is a nice touch.
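Injecting a credential through an environment variable could look like this sketch; the key names, variable name, and Secret name are illustrative assumptions:

```yaml
# values.yaml sketch: reference an env var in the Mimir config and
# populate it from a Kubernetes Secret. Names are hypothetical.
mimir:
  structuredConfig:
    common:
      storage:
        s3:
          secret_access_key: ${AWS_SECRET_ACCESS_KEY}
global:
  extraEnvFrom:
    - secretRef:
        name: mimir-s3-credentials
```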
Let me see... yeah, another notable change, maybe, is that we turned on multi-tenancy by default for OSS, but we did it in a backward-compatible way.
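With multi-tenancy on, writes and reads identify the tenant via the `X-Scope-OrgID` header. A minimal Prometheus remote_write sketch, where the URL and tenant name are placeholders:

```yaml
# prometheus.yml fragment: send samples to Mimir as tenant "team-a".
remote_write:
  - url: http://mimir-nginx.mimir.svc/api/v1/push
    headers:
      X-Scope-OrgID: team-a
```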
A
And there was a blog post a little while back; you may have noticed that we have now open sourced the Graphite and Datadog ingestion proxies. This is sort of a follow-up to something I think Marco has mentioned: that we want Mimir to be a time-series database for all your metrics, no matter what format, and this is sort of the first step.
A
The proxies were developed by different teams, so they work a little bit differently, and this is really just an initial get-the-code-out-there release. But it's kind of a huge deal that we can now support Graphite and Datadog metrics with Mimir.
F
Yeah, I can add a bit about that as well, because I was part of the squad that was basically doing this effort to open source these things. Basically, we've implemented the Datadog and Graphite write APIs, though in the background everything's translated into Prometheus and it interacts with Mimir via remote write, basically making it really low effort to switch: you just update your Datadog agents and such to point at the proxies.
F
As
a
note,
as
you
mentioned,
it
is
experimental
these
even
like
these
open
source
proxies,
but
we
do
actually
also
run
them
in
grafana
cloud
and
they're
like
production,
basically
they're
running
production
and
production,
ready
that
way.
It's
just
that
the
code
and
config
needs
a
bit
more
for
consistency
before
we
we
want
to
before
we
say
the
open
source
version
is
is
is
not
experimental
anymore,
just
because
yeah,
because
they
were
developed
by
different
teams.
F
There's
like
different
ways
of
configuring,
stuff
and
we'd
like
them
to
be
a
lot
more
consistent,
so
easier
to
use,
also
mentioned
in
the
blog
post,
is
the
influx
proxy,
which
has
been
open
source
for
a
while,
but
we've
been
doing
some
work
on
it.
It's
it's
a
similar
idea
to
the
graphite
and
datadog
stuff,
but
yeah.
Eventually,
we
want
to
just
measure
them
all
together
and
make
it
more
consistent
and
easier
to
use.
Yeah.
A
Nice
and
then
there's
the
blog
post
about
the
about
the
the
new
write
proxies
and
a
blog
post.
I
think
that
details,
query
sharding
how
that
was
used
to
speed
up
quote
performance
up
to
10x.
A
Now,
if
marco,
you
want
to
provide
some
context
on
that
or
should
go
on
to
what's
next.
C
C
Team
so
yeah,
you
know
I
invite
everyone
to
check
it
out
later.
A
And
I
so
under
what's
next,
we
have
otlp
ingestion
support
that
gotham
has
been
working
on
for
a
while.
I
think
that
was
merged
recently,
I'm
not
mistaken,
which
is
yeah
exciting.
A
So
I
think
oleg
was
mentioning
that
he
was
trying
to
do
remote
right
from
like
a
a
small
embedded
device
on
his
home
lab,
and
it
was
a
bit
difficult
to
work
with.
So
it's
a
very
nice
format
to
to
work
with
tenant
federation
on
metadata
endpoints,
so
just
supporting
querying
multiple
tenants,
something
changed
in
grafana
9,
where,
like
a
multi-tenant
request
to
the
metadata
endpoint,
the
display
is
an
error
now.
So
this
just
fixes
that
susannah
you
want
to
go
over
the
split
by
interval,
parallelization.
G
And
yes,
I
created
a
bit
of
context.
Yes,
so
this
is
basically
a
part
two
from
loki
in
lucky.
We
did
this
with
instant
queries,
because
our
range
queries
only
have
only
had
one
type
of
parallelization
and
this
is
to
add
a
new
level
of
polarization,
the
same
to
the
instant
query.
So
basically
we
grab
the
the
range
interval
of
instant
queries
and
split
it
currently
in
a
static
interval,
but
that
may
change
in
the
future.
This
is
still
the
experimental
mark.
G
Wendell
like
are
helping
me
a
lot
with
with
this
and
grinding
me
on
the
on
the
best
practice
from
emir
and
yeah,
the
poc,
the
main
pc
was
already
merged,
but
we
are
still
currently
working
on
it.
I
don't
know
if
marco,
you
want
to
say
to
add
something
else.
C
Yeah
susanna
did
a
great
job
and
and
there's
actually
one
reason
why
I'm
very
excited
about
this
work
now
you
know
we
already
support
query,
shard
or
instant
queries
as
well,
so
instant
queries
are
already
accelerated
by
the
query
shot
now
splitting
by
by
time
as
well.
It's
just
yet
another
dimension
to
shard
the
query,
but
since
we
already
support
query
sharing,
you
know
I
don't
expect
a
huge
performance
benefit
having
the
split
by
time
as
well.
C
If
you,
if
you
take
an
instant
query,
processing
samples
over
the
last
one
day,
for
example-
and
you
split
it
into
one
hour
blocks-
we
could
cache
all
the
blocks
except
the
first
one
and
the
latest
one
so
by
the
basically
every
block
in
between
starting
from
hour
one
until
hour
eleven.
If
the
split
interval,
sorry,
if
the
the
query
range
is
12
hours
to
make
it
happen.
First
of
all,
we
need
to
split
the
query
by
time
and
then
we
can
add
declare
result.
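Marco's 12-hour example can be sketched as follows. This is an illustrative model of interval-aligned splitting and of which sub-queries are cache-friendly, not Mimir's actual implementation:

```python
from datetime import datetime, timedelta, timezone

def split_instant_query(eval_time: datetime, lookback: timedelta,
                        interval: timedelta):
    """Split an instant query's lookback range into sub-ranges.

    Returns (start, end, cacheable) tuples. Only sub-ranges fully
    aligned to the split interval are cacheable: the first and last
    partial blocks shift with eval_time, so their results change
    between evaluations.
    """
    step = interval.total_seconds()
    splits = []
    cur = eval_time - lookback
    while cur < eval_time:
        # Next boundary: the smallest interval multiple strictly after cur.
        nxt = datetime.fromtimestamp(
            (cur.timestamp() // step + 1) * step, tz=timezone.utc)
        end = min(nxt, eval_time)
        aligned = cur.timestamp() % step == 0 and end == nxt
        splits.append((cur, end, aligned))
        cur = end
    return splits

# A 12-hour instant query evaluated at 12:30 splits into 13 blocks;
# the 11 fully aligned one-hour blocks in the middle are cacheable.
parts = split_instant_query(
    datetime(2022, 7, 28, 12, 30, tzinfo=timezone.utc),
    timedelta(hours=12), timedelta(hours=1))
```

The partial blocks at the edges still have to be computed fresh on every evaluation, which is why splitting by time must happen before the cache can be applied.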
A
And then there are some smaller improvements and fixes to the Helm chart. I don't know if there's anything you want to add, Krajo, or we can just move on to Q&A.
A
All
right
and
we
have
reached
the
open
mic
or
qa
section.
So
if
anyone
wants
to
share
anything
or
you
have
questions
feel
free
to
speak
up,
please.
A
Well,
it
looks
like
there
are
no
questions.
I
am,
I'm
super
excited
about
2.2
and
everything
that
is
coming
up.
I
like
that
there
are
performance
fixes
in
every
new
release.
That's
a
refreshing
change.
Some
software
seems
to
get
slower
every
new
release.
A
If
nobody
has
anything
else,
I
guess
we
can
end
it
early
all
right,
take
care
everyone.