From YouTube: Grafana Mimir Community Call 2023-08-31
A
We are just a few right now, but usually people join a bit later, so I guess let's get started. We have a pretty short agenda this time, because last time we went through a lot of the features coming into the 2.10 release. But now the 2.10 release is upon us, and I'm going to ask OLED to give an update on the status of the release, because he's been the release shepherd.
B
We are going to test this in production next week, and if everything goes fine, then on September 10th, which is next Monday (not this Monday, but the Monday after), we will publish the final release. We have the draft release notes, which were approved by someone, but I'm waiting for the product manager to review them before I merge them. Everything is quite fine so far. Also, a quick reminder that we have switched to a quarterly release cycle, because we've seen recently that we are shipping bigger features and mostly bug fixes.
A
All right, thank you. I have a couple of updates on ongoing things regarding the Helm chart, and actually the first thing is related to the release. We will create a release candidate from the Helm chart once we have a release candidate from the Enterprise version of the product as well, because we keep those in sync. So that will come a bit later, I guess next week. That would be my guess, because we had some issues with AWS and multi-architecture images; there was some compatibility problem that we had to solve in CI so that we can actually push those images. So anyway, about ongoing work in the Helm chart as well.
A
Last time I told you about the out-of-order support for native histograms that we started to work on. That has gone through a couple of iterations regarding the people who work on it, but we now have a team working on it. The status is that the basic functionality is already there in Prometheus; there's a pull request that has this. But native histograms have this feature called automated counter reset handling, which is a way to optimize the detection of counter resets in histograms, because counter reset detection in histograms is costly to do, and it's also very non-trivial how you handle this information. So anyway, the out-of-order work is going on.

The other thing we talked about was autoscaling: introducing some experimental autoscaling into the Helm chart. That didn't progress too much because of vacations; either the contributor was on vacation, or the reviewer was on vacation, so it really slowed down. The latest information is that the contributor is going to take a look at a bunch of comments that we added to the PR.
A
So hopefully we will progress with that. The last one regarding the Helm chart is that currently the mimir-distributed Helm chart includes the Grafana Agent Operator Helm chart, and that operator chart isn't going to be supported; it's getting deprecated. Also, we are not huge fans of operators, even though we do have our own operator, because the operator uses custom resource definitions (CRDs), and we always got questions and problems out of it in Helm as well, because Helm is really bad at handling custom resource definitions. You have to do some manual tasks to get this to work; even in the getting-started guide that we have, you have to do a manual step to load them, which is really, well, not very nice.
A
So
we
are
going
to
use
this
better
supported,
chart
internet
as
a
sub
chart,
and
also
a
very
nice
feature
of
the
new
flow
setup
is
that
it
doesn't
need
crds.
You
can
just
annotate
your
services
or
mods
in
kubernetes
to
get
them
discovered
by
the
girlfriend
agent
and
get
script
for
metrics
and
logs,
so
we
will
so.
Hopefully
we
can
in
about
a
quarter's
time.
I
would
estimate
do
away
with
this
manual
thing
and
just
you
know,
make
it
more
much
simpler
to
use
this.
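As a rough illustration of the annotation-based discovery described above: the exact annotation keys depend on how the agent's discovery and relabeling are configured, so the `prometheus.io/*` keys below are an assumption (they are a widely used convention, not necessarily what the chart's flow setup will use).

```yaml
# Sketch only: annotate a pod so a scrape pipeline that honors the
# common prometheus.io/* convention will discover and scrape it.
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # hypothetical pod name for illustration
  annotations:
    prometheus.io/scrape: "true"   # opt this pod in to metrics scraping
    prometheus.io/port: "8080"     # port exposing the metrics endpoint
    prometheus.io/path: "/metrics" # metrics endpoint path
spec:
  containers:
    - name: my-app
      image: my-app:latest         # hypothetical image
      ports:
        - containerPort: 8080
```

The point of the flow setup is exactly that this is plain Kubernetes metadata; no CRDs need to be installed first.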
A
All
right
any
questions
or
comments.
A
C
A
A
May
I
ask
like
how
you
are
using
it?
Is
it
replacing
some
old
metrics
or
through
hotel
or
like
what?
What's
your
use
case?
Basically.
C
Currently we extract all of the quantiles, like the targeted quantiles, and just store those, which means after-the-fact aggregation, you know, obviously doesn't work.
C
We're on Graphite still, so we're not even on Prometheus; we're currently evaluating Mimir, but we're pretty far along in that evaluation. The idea of storing the histograms that we've been generating for years has always been there, with people wanting to sum across multiple hosts or whatever, and then get the targeted quantiles. Our histogram implementation always supported that, but we had to require them to do this aggregation before we extract the quantiles, just because Graphite obviously didn't support storing the histograms. So as we move to Mimir, we're really looking forward to being able to, you know, make our histograms work with the native histograms.
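The after-the-fact aggregation described here is what native histograms enable in PromQL: aggregate first, then extract the quantile. A sketch (the metric name is invented for illustration):

```promql
# Sum the per-host native histograms first, then compute the quantile.
# With native histograms, histogram_quantile() takes the
# histogram-valued expression directly; no per-bucket `le` label
# handling is needed as with classic histograms.
histogram_quantile(
  0.95,
  sum by (job) (rate(http_request_duration_seconds[5m]))
)
```

With classic histograms the same query would need `sum by (job, le)` over the bucket series; with native histograms the buckets travel inside a single sample.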
A
Cool, wow, that's good to hear. Actually, I'm surprised that every time I hear about native histogram usage, it's a different use case. I thought I would be seeing some repetition by now, but every time it's something new, which just shows how versatile it is. Cool. Actually, now that you mention it, I should say that my personal OKR, my goal for this next release, is to add documentation around native histograms. We know that there might be some hidden things in the implementation, but actually our biggest problem is not the implementation. It's more that the Prometheus documentation itself has information on native histograms, but it's all over the place, and there's no one place that you can go and see: how do I get started, what are the things I need to be aware of, and stuff like that.
A
So after some discussion with the Prometheus maintainers, we decided that we put this information on grafana.com, in the Mimir documentation and the Cloud Metrics documentation, and then Mimir and Prometheus can copy it and, you know, reuse it as they want.
C
I had a question that's completely unrelated to, I would say, ongoing work. It's just a generic question that I threw into the Slack chat, but I never got a response. I think it's a quick one. I just noticed that the compactor ranges are two hours, 12 hours, 24 hours.
A
Which one would you skip? Because the two hours is coming from the ingesters.
B
I would say that if you make it four hours, it should take the two-hour blocks and make four hours in one go. Actually, I wonder, I was going to check whether we are compacting two hours into one two-hour block, because that might not make sense, right? Your question makes sense: maybe we can compact all the two-hour blocks into one single 12-hour block, because we don't query the two-hour blocks at any point.
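For reference, the compaction ranges being discussed are a Mimir compactor setting; a configuration sketch (the exact key spelling and defaults are from memory, so treat them as an assumption and check the configuration reference):

```yaml
# Sketch of the Mimir compactor setting discussed here.
compactor:
  # Successive compaction ranges: 2h blocks shipped by ingesters are
  # merged into 12h blocks, which are then merged into 24h blocks.
  block_ranges: [2h, 12h, 24h]
```

The question above amounts to whether an intermediate range (or the redundant 2h-into-2h pass) can be dropped to cut the read/write amplification of compaction.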
C
We use an internal S3 endpoint.
C
They're really worried about the compaction, the amount of read/write amplification from compaction, and they're asking if there's any way to reduce it.
C
We
had
to
like
reduce
that,
and
that
was
the
first
one
that
came
to
my
mind,
was
cut
out.
One
middleman
thing:
there
yeah
that
I,
maybe
I'll
check
that
out
see
how.
C
Interesting, because we have fairly strict bandwidth limits from our internal S3 provider, and we see that a lot of the bandwidth comes from the compaction. While we're writing a ton of data, a lot of it is not really queried very often; I would say some tenants are pretty highly queried, but a lot of it comes from just that initial 12 hours. So that helps a lot as well. Maybe that's something that we should try to get down to more of a science rather than a guess, and try to see if we can reduce it.
C
Yeah, yeah, it's fast and cheap. Interesting.
A
Yeah, the one thing I was wondering about was whether we would end up with blocks that are too large, but that doesn't happen, because your largest block is the same 24 hours. So if you don't get too-large blocks now, then you won't get them later either. So yeah, I agree with you; it should be safe to do. Test it.
A
All
right
another
question
or
something
we
can't
help
it.
A
If
not,
then,
let's
close
the
meeting
and
the
con
longer
agenda
next
time
all
right.
Thank
you.
Bye,
bye,.