From YouTube: Grafana Mimir Community Call 2022-10-27
Description
Grafana Mimir Community Call from the 27th of October 2022. We discussed the upcoming Grafana Mimir 2.4 release and then had an open mic discussion.
Agenda:
- 00:00 - Introduction
- 00:46 - Mimir 2.4 release
- 05:55 - Discussion
- 30:34 - Thanks for joining!
Join us on the last Thursday of every month. Please see the linked document for meeting notes, agenda and details on how to join: https://docs.google.com/document/d/1E4jJcGicvLTyMEY6cUFFZUg_I8ytrBuW8r5yt1LyMv4/view
A: ...and I'll hand it back to you, Marco, very quickly. So welcome, everyone, to this community call. Thank you for joining us today. We have only two items on the agenda. One is to speak briefly about the Mimir 2.4 release, and then we have question time if there is something you would like to discuss. Feel free to add more items to the agenda if you want; the link is in the chat, and let's also send it once again for the new people.
B: Sure. So we published the release candidate a couple of weeks ago. We have been running this version in dev and staging for about a couple of weeks and we haven't noticed any issues, so the plan is to publish the stable release tomorrow. This release does not include any major feature compared to the previous ones; it's mostly, you know, a set of enhancements and a few bug fixes. There are actually a couple of new features, just not major ones.
B: One is that we introduced another way to discover the query-scheduler instances. Basically, the query-scheduler is one of the components of a Mimir cluster, one of the microservices, and previously the only way to connect to the query-scheduler was, you know, configuring its address.
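For context, the alternative discovery mechanism added in 2.4 is ring-based. Below is a minimal sketch of what enabling it could look like; the exact field names and values are my assumption rather than something stated in the call, so verify them against the Mimir 2.4 configuration reference.

```yaml
# Hypothetical sketch: switch query-scheduler discovery from DNS to a ring
# (field names assumed; verify against the Mimir 2.4 docs).
query_scheduler:
  service_discovery_mode: ring   # previous behaviour: "dns"
  ring:
    kvstore:
      store: memberlist          # share the ring over memberlist, like other Mimir rings
```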
B: We introduced some new limits, including a new limit on the maximum range query length. By length we mean the maximum time range, in terms of end timestamp minus start timestamp, and like any other limit in Mimir it can be configured on a per-tenant basis.
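As a rough illustration of how per-tenant limits are usually expressed, here is the general shape of a runtime overrides file. The exact field name for this new limit is an assumption on my part, so take it from the 2.4 configuration reference rather than from this sketch.

```yaml
# Hypothetical sketch of per-tenant runtime overrides in Mimir
# (the limit field name below is assumed, not confirmed in the call).
overrides:
  tenant-a:
    max_total_query_length: 30d   # reject range queries spanning more than 30 days
  tenant-b:
    max_total_query_length: 90d   # a more permissive tenant
```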
B: Like in the previous releases, we did a bunch of improvements to the Helm chart. The Helm chart nowadays is the way the majority of the community installs Prometheus... sorry, installs Mimir; it's probably also how they install Prometheus.
B: We introduced support for an OpenShift route for, basically, the nginx ingress, which we use to balance the requests across multiple replicas of the same microservice, and there are other improvements as well, like improvements to the documentation, improvements to some anti-affinity rules, and so on.

C: Thanks, Peter.
A: There is one more important change, and that is that the anonymous usage statistics tracking is enabled by default. This has been coming for several releases; it was added, I think, two releases ago, but now it will be enabled by default. There is a link in the agenda to learn more about the feature and to see exactly what it is tracking. There is basically no private information, just some statistics so that we know how the community uses Mimir, but there is also a flag to disable the feature if you are not comfortable running with the tracking enabled.
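For reference, opting out is a single setting. The exact name below comes from my reading of the usage-statistics documentation linked in the agenda rather than from the call itself, so treat it as an assumption to verify.

```yaml
# Hypothetical sketch: disable anonymous usage statistics in the Mimir config file
# (the equivalent CLI flag would be something like -usage-stats.enabled=false).
usage_stats:
  enabled: false
```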
A: I think that concludes our Mimir 2.4 release agenda item. If there are no questions, we can move on to open mic, which is also question time. So yeah, feel free to raise your hand and ask questions about either Mimir 2.4 or anything else, maybe related.
C: My name is Russian, and yeah, actually I'm just a contributor, not a Mimir user, because right now I'm working for a company called PingCAP. We build a distributed system called TiDB, and I'm very interested in engineering and I wanted to learn how to build a time series database. So I tried to contribute. So yeah, I'm actually not a user, just a contributor, and I learned how Mimir works. Yeah.
C: Yeah, actually, recently I tried to learn how Mimir organizes its services, because you use a very common pattern for all your components and all the code, like the services and the services manager. I tried to understand how it works, and yeah.
B: Before we started the recording of this call we were chatting, and you mentioned you have, you know, a complex setup. Is it something you could share with us? Like, you know, what you are struggling with, if there's anything you are struggling with in Prometheus today, so that we could get a better idea of whether Mimir could help or not.
D
Not
yes,
yes,
okay,
the!
What
would
be
great
would
be.
We
have
a
blog
post
for
explaining
that
if
I
could
find
it
about
Prometheus
and
our
shading,
but
yeah
basically
to
to
to
to
to.
In
a
few
words,
we
are
sharing
our
promise
use
by
cluster,
because
we
generate
a
lot
of
clusters,
and
so
we
have
one
Prometheus
per
cluster
and
and
then
we
we
have,
we
have
a
small
tool.
D
That's
called
prox
proxy
that
enables
to
query
a
whole
set
of
clusters
with
with
one
query,
but
but
I'm
not
really
happy
with
it,
because
sometimes
we,
if
we
don't
have
if
we
don't
give
the
specific
the
specific
cluster
label,
we
may
have
some,
let's
say
wrong
data
like
if
we
want
to
to
send
the
list
of
resources
from
different
clusters-
and
we
don't
say
something
by
cluster
it
may
some
data
may
be
overridden
from
one
we
just
we
may
just
have
one
from
the
data
from
one
cluster
and
not
from
all
of
the
all
of
them.
D: So that is one problem, and the other problem is that we may have some quite different clusters in one group, what we call an installation, which will basically be for one customer most of the time. Meaning we may have one Kubernetes cluster with five nodes and another one with 100 nodes, and so the sizes of the Prometheus instances are quite different: the one with 100 nodes will require 100 gigabytes of memory, and it means we need to plan for that on the cluster that hosts our Prometheus servers.
D: We have a kind of partnership; we even have a shared Slack channel, but I don't use it most of the time. I prefer to go through the community one, but...
B: Right, so yeah, I was mentioning, you know, you typically run Mimir as a centralized cluster, and then you configure Prometheus, or the agent (and I want to talk a little bit about the agent), running in each of your Kubernetes clusters to scrape the metrics. They scrape the metrics from their own local targets and then remote write the metrics to Mimir. You just point your Grafana, or the Prometheus client, at Mimir, which exposes Prometheus-API-compatible endpoints, and you run the queries against them. So, since Mimir, the centralized cluster, collects the metrics from all your Prometheus servers, when you run a query you basically get a global view, right? You run one query.
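As a rough sketch of that setup (the cluster name and URL below are placeholders, not something from the call), each per-cluster Prometheus or agent would attach a cluster label and remote-write to the central Mimir endpoint, so series from different clusters stay distinguishable in the global view:

```yaml
# Hypothetical per-cluster Prometheus (agent mode) configuration.
# "cluster-a" and the Mimir URL are placeholders.
global:
  external_labels:
    cluster: cluster-a                          # keeps each cluster's series distinct in Mimir
remote_write:
  - url: https://mimir.example.com/api/v1/push  # Mimir's Prometheus remote-write endpoint
```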
B: The agent is basically, you know, a sort of stripped-down version of Prometheus, and the agent supports sharding as well. So what you could do, when you run the agent, is configure the agent in each Kubernetes cluster and, like Prometheus, it scrapes metrics from the configured targets and remote writes the metrics to the centralized Mimir. But you could also configure the agent in a clustered mode itself, so the...
B: No, I think you still need the exporters.
E: For metrics, at the end of the day you end up with a very unbalanced load anyway, because you've got something like kube-state-metrics, which exports, you know, a vast set of metrics, and that has to go to one place. So the DaemonSet is not a huge win for metrics. The sharding thing works pretty well, and then there's a thing where it tries to do a coordination ring: don't use that; it's the very simple sharding that works.
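For readers following along, the "very simple sharding" is, as I understand it, hashmod-based scrape sharding, where each agent replica keeps only the targets whose address hashes to its shard. A hedged sketch for one of three replicas (the shard count and label names are illustrative only):

```yaml
# Hypothetical sketch: static hashmod sharding across 3 agent/Prometheus replicas.
# This replica is shard 0; the other replicas keep regex "1" and "2" respectively.
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__address__]
        modulus: 3                  # total number of shards
        target_label: __tmp_shard
        action: hashmod
      - source_labels: [__tmp_shard]
        regex: "0"                  # this replica keeps only its own shard
        action: keep
```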
D
Okay,
okay,
to
to
be
to
be
honest,
we
used
to
have
a
lot
of
our
Prometheus
is
where
each
one
was
creating
data
from
from
one
cluster
through
the.
So
it
was
on
the
separate
cluster
and
we
were
going
through
through
the
the
the
kubernetes
proxy
API
proxy
and
we
are
switching
to
to
a
premise
use
agent
mode
so
that
our
customers
can
can
install
so
some
some
like
service
monitors
and
put
monitors
by
themselves.
So
that's
what
we
are!
D
We
are
moving
to
that
and
we
know
that
it
will
enable
us
in
the
future
to
more
easily
change
our
premises
to
to
to
maybe
Mimi
or
something
else.
But
we
are
just
in
the
beginning
of
our
journey
with
the
agent
and
I
had
another
question
about
the
agent.
As
you
mentioned.
If
I
want
to
retrieve
Logs
with
the
agent
I
will,
I
will
need
to
have
a
demon
set,
but
I've
read
that
I
should
have
a
different
set
of
agent
for
logging
and
for
metrics.
E: For those kinds of things, like I said earlier, you almost certainly want a DaemonSet for the logs, but the DaemonSet is kind of unhelpful for metrics. So, yeah, I mean, it's kind of possible to get away with one agent doing everything, but that's more of a marketing thing. I think you're better off configuring things that do specific jobs, and then you can kind of track down more easily if something goes wrong.
D: Yeah, and I've had problems with debugging the agent. That's why, for the moment, I prefer to go with Promtail on one side and the agent for the metrics, because we've had some issues with, like, service discovery that was failing, and we had no clear error messages from the agent. So yeah.
D: So I guess, then, when we know better how to use it, we can change easily from Promtail to the Grafana Agent, or from the Prometheus agent to the Grafana Agent. But I don't know: between the Prometheus agent and the Grafana Agent, how to say... does it work like with Promtail, in that the Grafana Agent just encapsulates the features of the Prometheus agent, or is it at the same level?
E: I'll carry on answering, because I've been talking. Yeah, so by and large the Grafana Agent is bundling things that already exist, like the Prometheus agent and the OpenTelemetry Collector, and a few more things like the node exporter and so on. So it's primarily pitched as a kind of ease-of-use feature for people who, like Grafana Labs... it's what we use internally, but I don't think there's anything kind of extremely special internally.
D: Yeah, but I mean, my problem is that with the Prometheus agent I have the Prometheus UI: I can see the status of the service discovery, I can see a lot of stuff, and I can easily debug what is wrong, and I'm not sure I can have that with the Grafana Agent.
B: The Grafana Agent doesn't have a UI, but it's something in development. I don't know if it's done yet.
B: I know... yeah, I was mentioning a UI since you brought it up. You know, when you run the Grafana Agent, the agent also exposes a UI... sorry: when you run the Prometheus agent, the agent exposes a UI, but when you run the Grafana Agent, the agent doesn't have any UI. But I know the Grafana Agent UI is in development.
D: Yeah, but I've heard about a Grafana Agent UI that would help show kind of an interface for the interaction between the different scrapers and the way the data flows from scraping.
D: But okay, anyway, as you said, that's not the point of this meetup. I'm trying to remember what we said before. So: the sharding thing, and the way we could send data with remote write to a central Mimir, yeah, and the fact that it's multi-tenant. So I guess the multi-tenancy is done exactly the same way as with Loki?
B: Yeah, I think so. Mimir and Loki share a similar architecture, so if you are familiar with Loki and you want to adopt Mimir, you will find that the architecture of Mimir is very similar to Loki, both on the write path and the read path. Obviously there are some differences, especially at the storage level, but again, Mimir uses object storage to store the long-term data, like Loki. Even the header you just mentioned, the X-Scope-OrgID header, which is the header used to inject the tenant ID, is the same in Mimir.
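To make that shared convention concrete, here is a hedged sketch of where the header shows up on the write path; the tenant ID and URL are placeholders, and on the read path the same header can be set, for example, as a custom HTTP header on a Grafana data source pointing at Mimir.

```yaml
# Hypothetical sketch: sending a tenant ID to Mimir via the X-Scope-OrgID header
# (the same header Loki uses); "customer-1" and the URL are placeholders.
remote_write:
  - url: https://mimir.example.com/api/v1/push
    headers:
      X-Scope-OrgID: customer-1   # only needed when no auth gateway injects it for you
```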
B: A bit of the code as well, like the hash ring, which in Loki is used for sharding, for replication, and for service discovery as well; it works the same way in Mimir, and we actually share the same code, the same library. So yeah, there are many things in common, and Tempo as well: once you learn one of these, you know the other ones share pretty much the same architecture. And at Grafana Labs one of our goals, you know, is to keep moving forward, to continuously work to provide better consistency between the three different databases, so that when you start adopting one and you want to adopt a second one, you will find a smooth path in front of you.
D: And that's really great, because when you talk about multi-tenancy, it's not easy to use multi-tenancy, and it's a little bit on purpose, I guess, because if we really want to have multi-tenancy it comes with Grafana Enterprise or with the cloud. So right now we are building our multi-user gateway, and when it's ready, I guess we can use it with Tempo and with Loki and with Mimir, yeah.
B: And it's typically very coupled with your own infrastructure, yeah, sure. So one of the reasons why there's no open source authentication gateway is also because, in the use cases we see, people typically need to bundle it with their own infrastructure. At Grafana Labs we have a closed-source one, but it's very coupled with, you know, the way we manage our customers and stuff like this, and...
B: Yeah, I think the cool thing is that once you build this proxy, if you put this proxy in front of Mimir, it will basically just work. A sort of HTTP proxy doing the authentication: since the header is the same, you just put it in front of Mimir and, you know, it will work.
B: Yeah, I think the only thing, obviously, and I understand it's not a priority right now, but, you know, since you already run Kubernetes and the agent, building a small-scale proof of concept shouldn't be that difficult. I mean, you...
B: Maybe from, you know, a dev cluster or a staging cluster: you test from there, and once you're happy, you progressively roll it out to the production one.
D: I definitely... it's quite clear in my mind. I'm not sure if everybody on my team is so optimistic, but we have a lot of stuff to do and, like everybody, I guess, we are racing against time. That's one of the main blocking points, but yeah, I definitely want to try it as soon as possible.
B: Nice. So thank you very much for joining this call. See you in Slack.