
From YouTube: Grafana Mimir Community Call 2022-07-28
Description
Join our next Grafana Mimir community call: https://docs.google.com/document/d/1E4jJcGicvLTyMEY6cUFFZUg_I8ytrBuW8r5yt1LyMv4/edit
Learn more at https://grafana.com/oss/mimir/ and if all of this looks like fun, feel invited to see if there’s a role that fits you at https://grafana.com/about/careers/
A
I guess I'm gonna start going through the agenda. First up we have the 2.2 release. One of the big features, and probably one of the things I most frequently hear about Prometheus, is people being tripped up by sending samples out of order. This release adds experimental support for out-of-order ingestion of samples, which has been a whole bunch of work by a bunch of people who I don't think are on this call. But it's pretty exciting; it's been a long time in the making.
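For readers who want to see what this looks like in practice, below is a minimal sketch of a remote-write push containing a sample dated in the past, which is the case that used to be rejected when newer samples for the same series had already been ingested. It assumes a Mimir instance reachable at localhost:8080 with the experimental out-of-order window enabled for the tenant; the endpoint path and headers follow the standard remote-write conventions, and the tenant ID is a placeholder.

```go
// Hypothetical sketch: push a sample whose timestamp is ten minutes in the
// past. If the series already has newer samples from an earlier push, this
// used to be rejected as out of order; with Mimir 2.2's experimental
// out-of-order ingestion enabled for the tenant (and the timestamp inside the
// configured window), it can now be accepted. Endpoint, headers, and tenant ID
// are placeholders following standard remote-write conventions.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"

	"github.com/gogo/protobuf/proto"
	"github.com/golang/snappy"
	"github.com/prometheus/prometheus/prompb"
)

func main() {
	// A single sample dated 10 minutes ago for an existing series.
	ts := time.Now().Add(-10 * time.Minute).UnixMilli()
	writeReq := &prompb.WriteRequest{
		Timeseries: []prompb.TimeSeries{{
			Labels: []prompb.Label{
				{Name: "__name__", Value: "demo_temperature_celsius"},
				{Name: "device", Value: "sensor-1"},
			},
			Samples: []prompb.Sample{{Timestamp: ts, Value: 20.9}},
		}},
	}

	raw, err := proto.Marshal(writeReq)
	if err != nil {
		panic(err)
	}
	compressed := snappy.Encode(nil, raw)

	req, _ := http.NewRequest(http.MethodPost,
		"http://localhost:8080/api/v1/push", bytes.NewReader(compressed))
	req.Header.Set("Content-Type", "application/x-protobuf")
	req.Header.Set("Content-Encoding", "snappy")
	req.Header.Set("X-Prometheus-Remote-Write-Version", "0.1.0")
	req.Header.Set("X-Scope-OrgID", "demo") // placeholder tenant ID

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```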
A
One of the other big things, which has been sort of a focus for us these last few months, is ease of use. Marco was actually doing a lot of work on this, coming up with a standardized format for errors and giving each error an ID, so you can easily look it up in the documentation and figure out: oh, this is what's wrong, and these are the actions I can take to remedy it.
D
Yeah, so that allows you to use the same bucket for different components, so for block storage, the ruler, and the alertmanager. It basically prefixes all files that these components upload to the bucket with the prefix you've configured, so they no longer interfere. I think there was a problem with the compactor discovering rules and the ruler trying to...
C
The block storage on one side, and the ruler and alertmanager storage on the other side, basically have some conflicting prefixes if you run with the default configuration. So you basically have to store the blocks in a dedicated bucket, and then the ruler and alertmanager can go either in dedicated buckets or both of them in the same bucket. By introducing a prefix for the block storage, if you set this prefix to some value like, for example, /blocks, we are able to keep all the components in the same bucket without them interfering.
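To make the mechanics concrete, the effect of the prefix is simply that every object key a component reads or writes gets the configured prefix prepended, so components sharing one bucket live under disjoint key spaces. Below is a conceptual sketch of that idea only; it is not Mimir's actual implementation, and the Store interface and helper names are invented for illustration.

```go
// Conceptual sketch of key prefixing for a shared object-storage bucket.
// This is NOT Mimir's real code; the Store interface and prefixedStore
// wrapper are invented purely to illustrate the idea: each component
// (blocks storage, ruler, alertmanager) wraps the same underlying bucket
// with its own prefix, so their objects can no longer collide.
package main

import "fmt"

// Store is a minimal object-store abstraction for the example.
type Store interface {
	Upload(key string, data []byte) error
	Get(key string) ([]byte, error)
}

// prefixedStore prepends a fixed prefix to every key before delegating.
type prefixedStore struct {
	inner  Store
	prefix string
}

func withPrefix(s Store, prefix string) Store {
	return &prefixedStore{inner: s, prefix: prefix}
}

func (p *prefixedStore) Upload(key string, data []byte) error {
	return p.inner.Upload(p.prefix+"/"+key, data)
}

func (p *prefixedStore) Get(key string) ([]byte, error) {
	return p.inner.Get(p.prefix + "/" + key)
}

// memStore is an in-memory Store so the example runs standalone.
type memStore map[string][]byte

func (m memStore) Upload(key string, data []byte) error { m[key] = data; return nil }
func (m memStore) Get(key string) ([]byte, error)       { return m[key], nil }

func main() {
	shared := memStore{}

	blocks := withPrefix(shared, "blocks")
	ruler := withPrefix(shared, "ruler")

	_ = blocks.Upload("tenant-1/01ABC/meta.json", []byte("{}"))
	_ = ruler.Upload("tenant-1/rules.yaml", []byte("groups: []"))

	// Both components wrote to the same bucket, under disjoint prefixes.
	for key := range shared {
		fmt.Println(key)
	}
}
```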
B
There is actually another pull request in Prometheus right now that's revamping and rewriting the entire code, so it's actually removing the loop that waits and replacing it with a different implementation, making it, I think, a bit more efficient. But that's still being worked on, so maybe that's another improvement on this for the future.
A
Very nice. Next up, store-gateway performance optimization; I can go over this. There are a few improvements to how we use memcached in the store-gateway that should reduce the number of connections and make things a bit faster, and Steve has a change to the way the store-gateway uses mmap that should reduce the frequency of those hangs that people have been experiencing, where the store-gateway becomes entirely unresponsive.
E
Yeah, I'll go over the highlights; I talked about most of these in the previous community meeting as upcoming. So Helm is a tool to deploy software on Kubernetes, and we are packaging Mimir for Helm tooling. In that package we implemented a couple of features, such as meta-monitoring support built in, which means that you can set up your Mimir to monitor itself, collect its own metrics, and send them to, for example, Grafana Cloud, and that works even with a free-tier Grafana Cloud account.
E
We used to kind of integrate memcached via a subchart, but that subchart had some issues lately, and it also prevented us from having enough control to do this for OpenShift. So we are now directly managing the memcached processes from the Mimir Helm chart. Then there are improvements to how we manage configuration, which was something that was requested a lot of times.
A
And there was a blog post a little while back, you may have noticed, that we have now open sourced the Graphite and Datadog ingestion proxies. So this is sort of a follow-up to, I think, something Marco has mentioned: that we want Mimir to be the time series database for all your metrics, no matter what format, and this is sort of the first step.
F
Yeah, so I can add a bit about that as well, because I was part of the squad that was basically doing this effort to open source these things. So yeah, basically we've implemented the Datadog and Graphite write APIs, though in the background everything is translated into Prometheus, and it interacts with Mimir via remote write. That basically makes it really low effort to switch, just by updating your agents, for example Datadog agents and such, to point at the proxies.
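As a rough illustration of what pointing an agent at the proxy boils down to, here is a hedged sketch of submitting a metric in Datadog's v1 series format over HTTP. Only the JSON payload shape follows the Datadog series API; the proxy address, path, and auth header below are assumptions made for the example, not documented values.

```go
// Hypothetical sketch: submit a metric in Datadog's v1 series format to a
// Datadog write proxy that translates it into Prometheus remote write for
// Mimir. The proxy address, path, and auth header are assumptions for
// illustration only; check the proxy's documentation for the real values.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type ddSeries struct {
	Metric string       `json:"metric"`
	Points [][2]float64 `json:"points"` // [unix_seconds, value]
	Type   string       `json:"type"`
	Tags   []string     `json:"tags,omitempty"`
	Host   string       `json:"host,omitempty"`
}

type ddPayload struct {
	Series []ddSeries `json:"series"`
}

func main() {
	payload := ddPayload{Series: []ddSeries{{
		Metric: "demo.requests.count",
		Points: [][2]float64{{float64(time.Now().Unix()), 42}},
		Type:   "gauge",
		Tags:   []string{"env:dev"},
		Host:   "laptop",
	}}}

	body, _ := json.Marshal(payload)

	// Assumed proxy endpoint mirroring Datadog's /api/v1/series path.
	req, _ := http.NewRequest(http.MethodPost,
		"http://localhost:8000/datadog/api/v1/series", bytes.NewReader(body))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-Scope-OrgID", "demo") // placeholder tenant; auth scheme assumed

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```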
F
As a note, as you mentioned, these open source proxies are experimental, but we do actually also run them in Grafana Cloud, where they're basically running in production and are production-ready in that sense. It's just that the code and config need a bit more work for consistency before we want to say the open source version is no longer experimental, just because they were developed by different teams.
F
There are different ways of configuring stuff, and we'd like them to be a lot more consistent and easier to use. Also mentioned in the blog post is the Influx proxy, which has been open source for a while, but we've been doing some work on it. It's a similar idea to the Graphite and Datadog stuff. Eventually we want to just merge them all together and make it all more consistent and easier to use.
A
So I think Oleg was mentioning that he was trying to do remote write from, like, a small embedded device in his home lab, and it was a bit difficult to work with, so this is a very nice format to work with. Next, tenant federation on metadata endpoints, so just supporting querying multiple tenants: something changed in Grafana 9 where a multi-tenant request to the metadata endpoint now displays an error, so this just fixes that.
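For reference, tenant federation in Mimir is driven by the X-Scope-OrgID header: listing several tenant IDs separated by | queries them all at once, and with this fix that should also work against the metadata endpoint. A small sketch follows; the host, port, and tenant names are placeholders.

```go
// Hypothetical sketch: query Mimir's Prometheus-compatible metadata endpoint
// across two tenants at once using tenant federation. The host, port, and
// tenant IDs are placeholders; the "tenant-a|tenant-b" X-Scope-OrgID syntax
// is the standard Mimir tenant-federation convention.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, _ := http.NewRequest(http.MethodGet,
		"http://localhost:8080/prometheus/api/v1/metadata", nil)
	// Query both tenants in a single federated request.
	req.Header.Set("X-Scope-OrgID", "tenant-a|tenant-b")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```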
Susanna, do you want to go over the split-by-interval parallelization?
G
Yes, let me give a bit of context. This is basically a part two from Loki. In Loki we did this with instant queries, because our range queries only had one type of parallelization, and this adds a new level of parallelization, the same thing, to instant queries. So basically we grab the range interval of an instant query and split it, currently at a static interval, but that may change in the future. This still has the experimental mark.
C
Yeah, Susanna did a great job, and there's actually one reason why I'm very excited about this work. Now, you know, we already support query sharding for instant queries as well, so instant queries are already accelerated by query sharding. Splitting by time as well is just yet another dimension to shard the query, but since we already support query sharding, I don't expect a huge performance benefit just from having the split by time.
C
If you take an instant query processing samples over the last day, for example, and you split it into one-hour blocks, we could cache all the blocks except the first one and the latest one, so basically every block in between, starting from hour one until hour eleven, if the split interval, sorry, if the query range is 12 hours. To make that happen, first of all we need to split the query by time, and then we can cache the results.
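To make the splitting-plus-caching idea concrete, here is a rough sketch of breaking a 12-hour range into hour-aligned pieces, where only the pieces that exactly cover a full, aligned hour are candidates for a results cache. This is an illustration of the idea only, not Mimir's query-frontend code; all names are invented, and a real implementation would also exclude intervals overlapping very recent data.

```go
// Illustration only: split a query's time range into fixed-size intervals and
// mark which pieces exactly cover a whole aligned interval (and are therefore
// cacheable). This is not Mimir's query-frontend implementation; names and
// logic are a sketch of the idea discussed above.
package main

import (
	"fmt"
	"time"
)

type piece struct {
	start, end time.Time
	cacheable  bool // true when the piece exactly covers a whole aligned interval
}

// splitByInterval cuts [start, end) into interval-aligned pieces. The first
// and last pieces may be partial; everything strictly in between covers a
// whole interval and can be served from a results cache.
func splitByInterval(start, end time.Time, interval time.Duration) []piece {
	var pieces []piece
	cur := start
	for cur.Before(end) {
		boundary := cur.Truncate(interval).Add(interval)
		next := boundary
		if next.After(end) {
			next = end
		}
		aligned := cur.Equal(cur.Truncate(interval)) && next.Equal(boundary)
		pieces = append(pieces, piece{start: cur, end: next, cacheable: aligned})
		cur = next
	}
	return pieces
}

func main() {
	end := time.Now()
	start := end.Add(-12 * time.Hour) // e.g. an instant query over the last 12 hours

	for _, p := range splitByInterval(start, end, time.Hour) {
		fmt.Printf("%s -> %s cacheable=%v\n",
			p.start.Format("15:04"), p.end.Format("15:04"), p.cacheable)
	}
}
```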