From YouTube: Mimir Community Call 2022-06-30
B
All right, so first up: the Mimir release candidate for 2.2 is available. Peter, do you want to go over some of the highlights from that?
A
One note before we go too far: we are still just going to test this in production, so make sure not to use it on anything that's critical for you! Marco, do you want to take the next one, the ease-of-use improvements?
C
Yeah, sure. Hello, everyone. We are working to make Mimir easier to configure, operate, and run. We are working on several things, and in this release there's just one single improvement, which is simplified error messages. The idea is that, starting from this release, in addition to the descriptive error message, you will also have a unique error ID when you receive an error, for example through the HTTP API response or through the logs. Then, for each of these errors, there's a runbook in our documentation. The runbook explains what the error means, what the common causes are, and how to fix it. Again, this is just a starting point, not an ending point.
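(For reference, the pattern looks roughly like this. The descriptive part below is schematic, but err-mimir-sample-out-of-order is an example of the actual ID scheme, and each such ID has a matching section in the Mimir runbooks.)

```
<descriptive error message> (err-mimir-sample-out-of-order)
```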
C
We are working on other things, like further simplifying the configuration, and we are currently discussing an alternative deployment model to the monolithic and microservices deployments; that's something we will cover in the next community calls and releases.
B
And after that, I think, is the bucket prefix support that I think Dimitar was working on. It allows you to give each component a prefix to use within a single object storage bucket, so that you can use a single object storage bucket for all of Mimir: the ruler, the TSDB blocks, and, I think, the alertmanager.
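(For reference, a minimal sketch of the idea in Mimir's configuration; storage_prefix is the option name from Mimir's configuration reference, the bucket name is a placeholder, and the exact options should be verified for your version.)

```yaml
# Sketch only: share one bucket between components via per-component prefixes.
blocks_storage:
  storage_prefix: blocks
  s3:
    bucket_name: mimir          # one shared bucket (placeholder name)
ruler_storage:
  storage_prefix: ruler
  s3:
    bucket_name: mimir
alertmanager_storage:
  storage_prefix: alertmanager
  s3:
    bucket_name: mimir
```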
A
So we actually have two bug fixes in the ingester rollouts. One is the WAL replay: Bryan has fixed a bottleneck there, after just shortening the sleep, and that actually makes it quite a bit faster, by about 50%. So that's very nice. The other bug fix we did is for when you're not unregistering your ingesters during rollouts and at the same time use a very long heartbeat period: now ingesters will update their state in the ring much faster. So these two improvements will surely help; we already see how they hold up in our rollouts.
D
Right, yeah. So that sleep gets called for every record in the WAL, and a record is, broadly speaking, the same size as a scrape. So in Prometheus you would not call it as many times as you do in Mimir.
B
All right, Krajo, do you want to talk about the upcoming version of the Helm chart?
F
Yes, yes, I do. Hi, my name is Krajo, and we've been working on a new version of the mimir-distributed Helm chart. It contains changes to the Helm chart itself that caused some breaking changes, but it also has many new features.
F
This does mean that the Helm chart will be version 3.0, and it will contain a reference to the Mimir 2.2 release. But since the Helm specification says that we should use semantic versioning, it was important to follow that.
F
So these version numbers will be slightly different, but that reflects the fact that Mimir 2.2 is much more compatible with Mimir 2.1 than Helm chart version 3.0 is with Helm chart version 2.2, just to clarify that. So, on to the features that we added here. The first one is a one-liner: we now support installing on OpenShift. But actually, I think this contains the most work we have done, because...
F
It
we
started
out
from
a
relatively
small
change,
but
then
run
into
issues
with
how
we
have
how
we
install
and
use
memcached
beside
mimir
for
caching,
you
know
results
indexes
and
and
so
on,
and
also
at
the
same
time
this.
So
this
was
a
bit
complicated
and
also
at
the
same
time,
bitnami,
which
is
the
maintainer
of
the
memcached.
F
You
can
read
up
on
the
issue
in
the
bitnami
repo,
it's
a
fun
read
but
anyway,
that
pushed
us
to
just
make
the
decision,
which
has
already
been
like
requested
by
the
committee,
to
just
use
the
memcached
directly
and
and
manage
memcached
stateful
sets
and-
and
you
know,
kubernetes
resources
directly.
So
the
upshot
is
that
that
the
openshift
openshift
support
contains
two
changes.
One
is
actual
support
for
openshift
and
the
other
is
that
we
are
removed.
This
sub
chart
that
contain
memcache.
F
Then
we
are
not
directly
managing
memcached,
which
gives
us
much
more
consistent
view
of
our
resources
inside
mimir
and
also
it's
it's
more
consistent
with,
for
example,
loki
the
logs
solution
from
grafana,
because
that
is
directly
managing
memcached
as
well,
and
this
is
part
of
the
this
work
has
introduced
two
kind
of
breaking
changes.
One
is
some
differences
to
how
we
configure
memcached
in
the
hem
chart
and
also
a
bit
change
to
how
we
set
up
the
role-based
authentication
and
the
service
account.
F
So
that's
one
thing
and
the
next
is:
let
me
run
oh
yeah,
so
we
are
introducing
meta
monitoring
into
the
hem
chart.
This
is
something
that
you
could
already
do
by
hand,
but
this
makes
it
much
easier
and
also
integrated
with
grafana
cloud.
F
So,
basically,
you
would
be
able
to
have
dashboards
in
graphenocloud
that
shows
the
health
and
performance
of
of
your
mimir
or,
alternatively,
you
can
also
use
metamonitoring
to
scrape
matrix
into
local
parameters
and
look
at
meta
monitoring
data
there
and
so
on
or
combined
both
I,
for
example,
tested
it
while
having
parameters,
operator
and
also
the
cloud
integration,
and
I
could
see,
I
could
see
the
my
half
metrics
and
dashboards
in
both
places,
which
is
super
nice.
This
is
probably
aimed
at
our
enterprise
customers
so
that
we
can
provide
better
support
for
them.
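(For reference, a minimal sketch of what enabling this looks like in the chart's values, assuming the metaMonitoring section of the mimir-distributed chart; the remote-write URL is a placeholder, and the exact keys should be checked against your chart version.)

```yaml
# Sketch only: enable meta-monitoring in mimir-distributed values.
metaMonitoring:
  serviceMonitor:
    enabled: true        # expose Mimir's own metrics via ServiceMonitor
  grafanaAgent:
    enabled: true        # deploy an agent that ships the metrics
    metrics:
      remote:
        url: https://prometheus-prod-01.grafana.net/api/prom/push  # placeholder
```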
F
But
it's
since
the
chart
is
the
same
for
enterprise,
customers
and-
and
you
know,
oss
users.
You
can
use
this
feature
as
well
right
and
then
oh
yeah,
regarding
the
configuration,
how
we
handle
confusion
if
you
have
been
using
the
mimi
chart
so
far
the
how
we
handle
the
application
configuration,
meaning
the
configuration
for
mimir
itself.
F
It had its pros and cons. Having the whole configuration right there meant that you could see very clearly what was going to happen, and also you could use Helm templates in the configuration to make it variable and customizable between clusters and stuff like that. On the other hand, for the more simple use cases, it's kind of overkill.
F
So
we
made
some
changes
to
support
structured
config,
which
is
what
loki
has
as
well
in
their
hem
chart,
which
means
that
you
don't
need
to
copy
the
configuration
anymore.
You
can
change
parts
of
it
by
overwrite
or
adding
values
by
the
structured
config
and,
for
example,
you
can
use
this
for
setting
up
the
storage
backus
on
the
object
store
in
a
more
simple
way.
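(For reference, a minimal sketch of the idea, assuming the mimir.structuredConfig values key; bucket name and endpoint are placeholders.)

```yaml
# Sketch only: override just the parts of the Mimir config you need,
# instead of copying and templating the whole config file.
mimir:
  structuredConfig:
    blocks_storage:
      backend: s3
      s3:
        endpoint: s3.us-east-1.amazonaws.com   # placeholder
        bucket_name: my-mimir-blocks           # placeholder
```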
F
We'll
have
documentation
on
how
to
do
this
as
well,
possibly
a
bit
later
than
actual
the
release
date.
But
that's
you
know
it
depends
on
scratching
a
little
bit
and
also
what
we
added
as
a
new
feature
here
is
that
now
you
can
store
the
conversion
in
a
config
map
instead
of
secret.
That
was
all
again
requested
by
community,
and
it
makes
sense
as
well
that
if
you
store
it
in
config
map,
that
means
that
hand
template
command
will
show
it
as
clear
text
and
then
hand.
F
But since this is then a ConfigMap and not a Secret, we also added support for injecting secrets and credentials from the environment, so that you don't need to write your passwords into the configuration anymore; you can inject them from the environment. And we have some additional features around environment handling as well, so that now you don't have to set it on every Mimir pod separately.
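(For reference, a minimal sketch of the combination, assuming the chart's global.extraEnvFrom key and Mimir's environment-variable expansion in the config; the Secret name and keys are placeholders.)

```yaml
# Sketch only: inject credentials from a Kubernetes Secret as environment
# variables, and reference them from the config instead of hardcoding.
global:
  extraEnvFrom:
    - secretRef:
        name: mimir-bucket-secret                     # placeholder Secret
mimir:
  structuredConfig:
    common:
      storage:
        backend: s3
        s3:
          access_key_id: ${AWS_ACCESS_KEY_ID}         # expanded from env
          secret_access_key: ${AWS_SECRET_ACCESS_KEY} # expanded from env
```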
F
And then, yeah, the Helm chart actually didn't have explicit support for one of the caches that the system can use, which is the results cache used by the query frontend. Now that is a feature of the chart as well: you can turn it on and use caching on the query results.
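(For reference, a rough sketch; the exact values key for this cache has varied across chart versions, for example results-cache in later mimir-distributed releases, so check the values file shipped with your chart.)

```yaml
# Illustrative only: enable the query results memcached.
results-cache:
  enabled: true
  replicas: 2   # placeholder sizing
```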
F
And there are some, let's say, more minor changes. I would just talk about two things; or, actually, I'm going to ask Patrick to talk about one of them, the testing one.
F
But
first
I
want
to
mention
that
a
lot
of
people
ask
like
what
are
the
first
things
to
do
and
how
to
get
started
with
with
the
hand
chart.
So
the
getting
started
guide
for
the
hand
chart
is
now
in
review.
It's
going
to
be
emerged
soon,
and
this
will
be
part
of
this
release
and
it
will
give
you
a
step-by-step
guide
on
how
to
get
from.
F
You
know
not
having
mimi
running
in
your
kubernetes
cluster
to
having
having
it
in
a
cluster
and
and
having
some
metrics
in
it
already
and
then
the
last
but
not
least,
we
added
some
testing
support
for
the
home
chart
and
I
will
ask
patrick
to
say
a
few
words
about
that.
G
Yeah, so, in addition to all this actual, you know, user-facing stuff for the Helm chart, a lot of our work for this release has been on improving the experience of contributing to the Helm chart. This is a part of that. It also happens to be a user-facing thing, and that's why it's going to end up in the release notes; but, basically, we've added loads of testing along the entire lifecycle of the Helm chart.
G
So
we
now
for
every
pr
can
tell
what
exactly
has
changed,
because
we
commit
a
golden
record
of
the
actual
rendered
home
chart
to
the
repo,
and
so
the
git
diff
is
really
easy
to
read
and
see
like
kind
of
what's
actually
changing,
even
if
the
templates
are
kind
of
complicated.
G
We
are
installing
and
testing
the
helm
chart
in
several
different
configurations,
both
open
source
and
enterprise,
on
every
pr,
every
commit
we,
and
so
as
supported
that
and
in
support
of
that,
we
added
a
smoke
test
based
on
the
existing
vermeer,
continuous
test
project
where,
basically,
after
you
install
the
chart
with
home,
you
can
run
helm
test
and
it
will
spin
up
a
job
kubernetes
job
that
just
runs
the
mimir
continuous
test
binary
once
and
so.
G
It'll do a read and a write to the cluster and validate that everything looks okay. That same thing is available both for you to use after installing or upgrading Mimir, and it's also used in CI when we are testing all of our changes. So over time, as mimir-continuous-test changes (we've talked, for example, about extending it to test other parts of the Mimir API besides reading and writing), those things will become part of the Helm chart testing process and of that smoke test you can run after install.

G
And for all the stuff that Krajo just mentioned about the migration and everything: I'm working on a migration guide right now, to walk you through how to go from 2.1 to 3.0. It's not going to be fully exhaustive yet; as Krajo mentioned, there's a lot to cover, and so probably some of the documentation will come, you know, a little bit after.
G
But at the release of 3.0 we will at least have a document describing how to do the migration, with some examples, like, you know: if your config looks like this in the first place, here's what it should look like after. Hopefully that can help walk you through it. If you run into anything, obviously, the community Slack, or this meeting, or GitHub discussions: feel free to ask, and we'll be in there and help you.
F
Yeah, thank you. And the last thing I want to say, looking into the future a little bit: we have mentioned documentation a number of times, and we actually have, you know, a long list of features lined up for the Helm chart; but we also recognize that documentation is very important. So we will make an effort to balance the number of features that we introduce next time with the number of documents we will write. So look forward to more documentation around Helm.
B
Nice, thank you very much. That's all we have on the agenda. If anyone from the community has any feedback about anything we covered, or just in general, or questions, feel free to speak up now.
E
I see the written comments, but also, we don't bite. And also feel free to put stuff onto the agenda for next time, if there's anything you want to talk about on the community side, or something which you would like to see discussed, or to propose something, or maybe even work with one of us to create something. Really, we don't bite.
H
Can you hear me? Yes? Great. I'm working with Diego; I think we've been talking a bit in the Slack.
H
We are a really small company; we build a product similar to Heroku, and we're just starting out. So we don't have a big production yet, but we are trying to use Mimir to store metrics.
I
So we would want to, you know, set up the ruler and the Alertmanager in a way where we don't have to copy the configuration directly from the repo, the alerts.yaml and the rules.yaml that are generated by the mixin config. So we've raised an issue about this, I think two weeks ago, which is about, like, working on some sort of a Mimir operator to automatically sync, you know, Prometheus operator CRDs.
I can link it in the meeting if you want to. And so, yeah, I think the main focus for us has been more integration with the Prometheus Kubernetes-native stack, which has been worked on quite a lot in 2.2, I think, like with the meta-monitoring in the Helm chart. So that's a big plus for us, and I hope we will see more of that; and thanks for the work, again.
G
Yeah, I guess we can comment on that part specifically. So, yeah, we saw your issue; thank you very much for opening that. We've been talking about it, though not super actively yet, because we've been focused on this Helm chart release. But we have talked a little bit about, you know, what we would want to do: whether it's an operator, or if there's some way we can reuse the Prometheus operator, or something like that.
G
So I guess it sounds like there are two separate things here that you brought up. One is having Mimir's own alerts, the ones that are in the repo, available easily in the Helm chart; and then there's a separate, potentially, thing, which is being able to configure arbitrary rules for arbitrary metrics and tenants via the PrometheusRule CRDs.
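(For context, a minimal example of the kind of resource being discussed: the standard prometheus-operator PrometheusRule CRD. The alert itself is made up; the point under discussion is syncing such resources into Mimir's ruler, which does not consume them natively today.)

```yaml
# Standard prometheus-operator PrometheusRule resource (example rule).
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alerts
spec:
  groups:
    - name: example
      rules:
        - alert: HighErrorRate                            # hypothetical
          expr: rate(http_requests_failed_total[5m]) > 0.1
          for: 10m
```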
I
Yeah, I think, like, the first point is Mimir shipping its own PrometheusRule CRD, with, like, its own alerts and rules, and then providing the ability to configure the ruler automatically when running in Kubernetes. Because we have a cluster with, like, a lot of different technology stacks deployed, we have tons and tons of alerts and tons and tons of rules to configure, and if we were to type these by hand, and copy them, and upload them through mimirtool...
I
...we would kind of get lost and not be able to follow updates with the Helm charts. So what we've been doing before was using the Prometheus operator, because we had a Prometheus setup, and we would just rely on it to automatically discover the CRDs and configure the Alertmanager and the ruler; I mean, the Prometheus server itself. And so with Mimir we're running into the issue that we have to upload them through mimirtool manually. But we're working on it, and we'll be happy to share, like, our solution to this.
G
Yeah, we'd love for you to share. I think the first thing, about getting Mimir's own alerts into the Helm chart, is probably a relatively straightforward thing that I'm sure we'll do in the next couple of releases.
G
The CRD part will need... yeah, it's just, by its nature, a little bit more complicated; but it's definitely something we've talked about. So whatever you end up coming up with, we'd love to see.
I
The agents on our cluster are, like, retrying requests a lot, and so sometimes they're going to be retrying requests which are very old; and so this is where Mimir is going to, kind of, get a replay of what it lost, like the data points it didn't see because it was rebooting, and this is where we're having a lot of out-of-order sample errors. So we have a use case for that, which is when Mimir goes down, I guess.
C
So, unless the outage lasted more than an hour, it's very likely that the out-of-order errors you get are for samples that were already ingested before the outage, because Prometheus or the agent just replays everything.
C
Yeah
and
then
you,
but
but
in
practice
no
one
of
your
samples
have
been
skipped
as
far
as
the
hostage
doesn't
last
more
more
than
an
hour,
because
the
other
problem
we
have
in
mimir,
which
is
also
solved
by
the
let's
say
out
of
order
feature,
is
that
right
now
we
can't
ingest
any
sample
which
is
older
than
one
hour
compared
to
the
most
recent
sample
received
for
that
specific
tenant.
H
I have another question, which is unrelated. We're not experts, but we really like your whole stack. So if you have any advice on our setup, I will briefly explain it to you, and our problem. Basically, we run a cluster, and we have a few nodes; those nodes are collecting metrics and all pushing them to Mimir.
H
And so this works fine. What we would like to consider is to have a lot of tenants, actually. So what we would like to have is, like, I don't know, thousands or tens of thousands of clients as tenants. We don't think it's a problem on Mimir's side; we think it will be an issue on the side of the nodes that are pushing the data, because we run one agent per node, and, if we understood correctly, one batch equals one tenant.
H
Okay, and then, we didn't try it, so we don't know if it's really a problem; we imagine it will be a problem. Do you have any ideas on how to do better, or maybe do something else? I don't know.
F
What is the reason for having so many tenants? Is it some kind of security thing, like you don't want the reads to...
C
Are
you
going
to
run?
What
I
did
understand
is:
are
you
going
to
run
one
agent
per
customer.
H
No,
no
okay,
we
have
like
nodes,
let's
say
around
a
dozen
of
nodes
for
now
and
on
each
node.
There
are
many.
C
And
yeah,
I'm
not
an
agent
expert.
But
how
can
you
inject
the
http
header
with
the
tenant
id
for
for
different
tenants
from
the
same
agent.
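(For context, a minimal sketch of why this is awkward: in a Prometheus-style remote_write config, the tenant header is fixed per remote_write entry, so one entry serves exactly one tenant. The URL and tenant below are placeholders.)

```yaml
# Sketch only: each remote_write entry is pinned to a single tenant via
# the X-Scope-OrgID header that Mimir reads.
remote_write:
  - url: http://mimir-nginx.mimir.svc/api/v1/push   # placeholder
    headers:
      X-Scope-OrgID: customer-1234                  # placeholder tenant
```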
I
Yeah, that's actually, like, a very real problem we have. We have to distinguish which tenant each metric belongs to, and this is also a problem we're having on the agent side, which is: how do we take, I don't know, a thousand metrics and cut them into, like, small chunks of 'okay, I know this one belongs to this tenant, or this tenant'? I think we would fix this either through a proxy, or through another solution: by actually, like, modifying the code of the exporter which exports the metrics we're trying to scrape.
So this is a bit of a technical thing that we haven't really decided upon. But even if it were fixed, we still don't know if we would be able to push those metrics to Mimir without causing some overhead, because of how many requests we would be doing.
C
You
will
have
anova
both
in
the
distributor,
but
also
in
the
injustice,
because
from
the
distributor
to
injustice,
the
number
of
samples
that
we
are
going
to
push
in
every
single
request
will
be
very
small
like
one
or
two
samples
a
per
request
between
each
distributor
and
injustice,
because,
right
now
the
distributors
don't
do
any
sort
of
buffering.
C
I
think,
just
today,
brian
or
yesterday,
brian
were
mentioning
this.
This,
like
brian
I'm
a
bronco,
or
you
recently
mentioned
an
idea
about
buffering.
They
received
the
samples-
I
guess
in
the
distributor
or
so.
D
Multiple
requests
for
the
same
tenant,
so
I
don't
think
it
would
be
relevant
to
if
you're
talking
about
the
where
the
problem
is.
You've
got
tens
of
thousands
of
tenants.
D
Currently we try to sync that to disk, so we could conceivably relax that requirement.
D
I don't know. Or you could kind of batch things up at the agent, because bigger sends are better from that point of view, no matter what you're doing; well, you know, up to a point. So the default in Prometheus is 500 samples per send, and I personally think that's a bit on the low side; like 1000 or 2000, I think, would be more appropriate.
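(For reference, a minimal sketch of that knob in a Prometheus-style remote_write config; queue_config.max_samples_per_send defaults to 500 in Prometheus, and the URL is a placeholder.)

```yaml
# Sketch only: send bigger batches per remote_write request.
remote_write:
  - url: http://mimir-nginx.mimir.svc/api/v1/push   # placeholder
    queue_config:
      max_samples_per_send: 2000   # default is 500
```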
B
I'm
trying
to
think
through
how
you
could
inject
some
label
automatically
to
all
the
metrics
you
scrape
and
then
turn
that
into
a
tenant
id.
I
think
that
sounds
like
it.
It
would
need
you'd
need
to
write
proxy
to
do
that.
Yeah.
Sorry.
I
Yeah, it's an issue we're, like, trying to solve at the moment, and we're going to think about it a lot, and we can come back to you guys and tell you what solution we've decided upon. I think we're going to default to injecting a label, which is still some heavy work on, like, a proxy or some other mechanism to inject this label, especially because we have to do, like, a DB lookup, which we can't afford.
I
So we have to, like, stream configuration, or something else. So we're going to maybe try something like just injecting a label. But maybe, I'm thinking, maybe Mimir could provide a way to, like, say: okay, this label represents your org ID; receive the request as, I don't know, a single tenant, or like the anonymous tenant, and then say: okay, I'm now going to look at every metric and say, okay, this one belongs to this tenant, because it has this specific label.
B
Label
so
we
sort
of
have
a
feature
like
that
in
the
enterprise
version
label
based
access
control,
where
you
can
sort
of
limit
a
tenant
to
a
subset
of
metrics
based
on
some
particular
label.
So
you
could
have
like
your
customer
id
label
and
then
you
could
hand
out
access
based
on
that,
but
I
don't
think
there's
anything
in
my
mirror.
That
is
comparable
to
that.
A
That is on the query side, so to say. Diego is talking about the write path, and Marco's answer in the chat is basically a proxy which can do this: it can take a look at the label of incoming series and then distribute them to different tenants. But if they are in different tenants, then you can't use the solution mentioned by Nick, because that assumes that everything is in a single tenant and you basically get a subset based on some label, which would be some kind of tenant label. Okay, so...
B
If that's all we've got on the agenda, then see you later, everyone! Thanks, see you, thank you.