From YouTube: Grafana Mimir Community Call 2022-03-31
Description
First Grafana Mimir Community Call, in which we celebrated the launch of Grafana Mimir, and talked about future plans for Mimir.
Join us on the last Thursday of every month. Please see the linked document for meeting notes, agenda and details on how to join: https://docs.google.com/document/d/1E4jJcGicvLTyMEY6cUFFZUg_I8ytrBuW8r5yt1LyMv4/view
A
So welcome, everyone. My name is Marco. I'm one of the engineers working at Grafana Labs on this new project called Mimir, which we just announced yesterday. If you joined this call and you're not part of Grafana Labs and Mimir sounds new to you, let me give you a super quick introduction.
A
But yeah, I think the launch went pretty well. We've got positive feedback from the community, we've got several questions, and as far as I know the three videos we published on YouTube got quite a few views, so I'm personally quite happy about the launch. In particular I'm also happy about open sourcing some of the, you know, features, like the scalable compactor or query sharding, that were previously closed source in GEM, which is Grafana Enterprise Metrics, and are now open sourced in Mimir.
A
And yeah, Brian is mentioning we reached almost eight hundred GitHub stars in a day. So I would like to ask you: what are the features, or the reasons, why you are excited about Mimir?
B
I'm excited that it's finally public, after such a long time, and that we can actually have users who know that they are using Mimir, because we used Mimir, obviously, in hosted metrics on our backends for a long time, but yeah, it was a secret project for a long time. So I'm happy that it's public: we can talk to people, we can tell them what we have done, what we worked on, and what features are coming. That's amazing.
A
Yeah, Joanna is mentioning the split-and-merge compactor. By the way, "split-and-merge compactor" is the, let's say, internal name, or the codename, we gave to the scalable compactor, and it's called split-and-merge because it's a two-stage compactor: compaction happens over two stages, where the first one is called split and the second merge. So yeah, we didn't have much creativity coming up with the name, so we just called it split-and-merge, Brian.
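To make the two stages concrete, here is a minimal sketch of the idea, under my own assumptions rather than Mimir's actual code: the split stage routes each series to one of N shard blocks by hashing its labels, and the merge stage later compacts only blocks that belong to the same shard.

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    // Split stage: every series from a source block is assigned to one of
    // shardCount output blocks by hashing its labels, so each output block
    // holds a disjoint subset of the series.
    func shardFor(seriesLabels string, shardCount uint32) uint32 {
        h := fnv.New32a()
        h.Write([]byte(seriesLabels))
        return h.Sum32() % shardCount
    }

    func main() {
        const shards = 4
        blocks := make([][]string, shards)
        for _, s := range []string{`{job="api"}`, `{job="db"}`, `{job="cache"}`} {
            i := shardFor(s, shards)
            blocks[i] = append(blocks[i], s)
        }
        // Merge stage, conceptually: blocks with the same shard index, from
        // adjacent time ranges, are compacted together later, so no single
        // compaction job ever has to process the full series set.
        fmt.Println(blocks)
    }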
D
Yeah, I don't know that I had anything else. I think I would echo what Peter was saying, that having it open source is something of a relief.
D
You know, I joined Grafana Labs to work on open source, and then we were kind of preparing this launch behind closed doors. So that's great for me.

A
So you, you gave a talk at the last GrafanaCON about query sharding.
D
I think that's a great feature. It actually goes with the split part of the compactor: we can end up with specific sets of time series on different nodes, so we can split, for instance, 16 ways, and then, when we get a query in that's something like "give me the average of these two metrics", we can do all the sums.
D
We
need
to
split
the
sum
16
ways
for
the
top
of
the
fraction
split,
the
sum
16
ways
for
the
bottom:
diffraction
add
them
up,
and
and
now
we
finished
the
query,
so
that
maybe
wasn't
the
best
explanation,
but
it
goes
fast
and.
D
It is kind of cool that it comes together, because we did the splitting of the compactor mostly for scalability reasons, but we also get a performance benefit, because we can carefully lay out those shards, the subsets of series, across the nodes.
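Roughly, and only as a sketch of the arithmetic being described (assuming a 16-way split; none of this is Mimir's actual code), an average becomes a sharded sum divided by a sharded count, with each partial computed on its own subset of series:

    package main

    import "fmt"

    // The frontend rewrites the query; here we just fake per-shard partial
    // results to show how they recombine into the same answer as the
    // unsharded query.
    func main() {
        const shards = 16
        partialSums := make([]float64, shards)
        partialCounts := make([]float64, shards)
        for i := range partialSums {
            partialSums[i] = float64(i + 1) // stand-in for sum(metric) on shard i
            partialCounts[i] = 2            // stand-in for count(metric) on shard i
        }
        var sum, count float64
        for i := 0; i < shards; i++ {
            sum += partialSums[i]
            count += partialCounts[i]
        }
        fmt.Printf("avg = %.2f\n", sum/count) // top of the fraction over bottom of the fraction
    }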
C
I was going to say, one of the things I'm excited about for the launch is that all the documentation is now public. There's been a ton of work that went into the documentation, content and organization, and being able to link people to that, or to just browse it, is really nice now that it is public.
D
I'll give another example: something Oleg worked on, who's on the call. When we get a label values query... so a typical use for this is if you have a dashboard that's trying to show you a dropdown, maybe for switching between different namespaces or different clusters, or something like that in your environment.

D
It's going to send in a request saying "give me all the possible values of this label", like namespace, and if you zoom out your dashboard, so you're looking at, say, 30 days on the dashboard, then the frontend will actually ask the backend for all of the possible values over 30 days of data. This used to crash the process that looks up the data, basically, because it would do an incredibly intensive index lookup to try and figure this out, and so Mimir has a much more efficient way of doing that. We actually, I think, put that into Prometheus as well, though you're perhaps less likely to have a lot of data back in time in Prometheus.
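A minimal sketch of why this lookup can be cheap (my own illustration, assuming a TSDB-style inverted index, not the actual implementation): the index already keeps, per label name, every value it has seen, so the query never has to touch series data at all.

    package main

    import "fmt"

    type index struct {
        // label name -> label value -> IDs of series containing that pair
        postings map[string]map[string][]int
    }

    // labelValues answers the query straight from the index keys,
    // instead of scanning every series in the block.
    func (ix *index) labelValues(name string) []string {
        vals := make([]string, 0, len(ix.postings[name]))
        for v := range ix.postings[name] {
            vals = append(vals, v)
        }
        return vals
    }

    func main() {
        ix := &index{postings: map[string]map[string][]int{
            "namespace": {"default": {1, 2}, "kube-system": {3}},
        }}
        fmt.Println(ix.labelValues("namespace")) // e.g. [default kube-system]
    }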
F
Yeah, we had to add some caching because of the query sharding feature: since we are now querying the same blocks for different shards of series, we should cache the results of the intermediate steps, so we don't recalculate the same query again and again. And so we started with that.
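As a toy sketch of such an intermediate results cache (the key shape and types here are my assumptions, not Mimir's): partial results are memoized per query, shard, and time range, so re-evaluating the same leg becomes a lookup instead of a recomputation.

    package main

    import "fmt"

    // cacheKey identifies one sharded leg of a query over one time range.
    type cacheKey struct {
        query      string
        shard      int
        start, end int64
    }

    type resultsCache struct {
        entries map[cacheKey][]float64
    }

    func (c *resultsCache) getOrCompute(k cacheKey, compute func() []float64) []float64 {
        if r, ok := c.entries[k]; ok {
            return r // served from cache: no recomputation
        }
        r := compute()
        c.entries[k] = r
        return r
    }

    func main() {
        c := &resultsCache{entries: map[cacheKey][]float64{}}
        k := cacheKey{`sum(rate(http_requests_total[5m]))`, 3, 0, 3600}
        c.getOrCompute(k, func() []float64 { return []float64{42} }) // computed once
        fmt.Println(c.getOrCompute(k, func() []float64 {             // now cached
            panic("not recomputed")
        }))
    }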
A
We also introduced some optimizations in the label regular expression matcher. When you query metrics and you use a regex matcher for your label name-value pairs, the regular expression needs to be evaluated for every single value we find for that label name in the database, and running the same regular expression against thousands or even millions of entries could be pretty slow and could slow down your query.
A
Oleg and Cyro worked on some cool optimizations to avoid running the regular expression at all for most of the use cases we see in production. And yeah, that was yet another cool performance optimization we did in Mimir.
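One common case gives the flavor of this kind of optimization; the sketch below is only my illustration, not the code Oleg and Cyro wrote: a matcher like env=~"prod|staging" is just an alternation of plain literals, so it can be answered with a set lookup instead of a regexp engine run per label value.

    package main

    import "fmt"

    // literalAlternatives returns the set of literal alternatives if the
    // pattern is a plain alternation like "prod|staging", and false if it
    // contains any regex metacharacter, in which case the slow path (a real
    // regexp) would be needed.
    func literalAlternatives(pattern string) (map[string]struct{}, bool) {
        set := map[string]struct{}{}
        start := 0
        for i := 0; i <= len(pattern); i++ {
            if i == len(pattern) || pattern[i] == '|' {
                set[pattern[start:i]] = struct{}{}
                start = i + 1
                continue
            }
            switch pattern[i] {
            case '.', '*', '+', '?', '(', ')', '[', ']', '{', '}', '^', '$', '\\':
                return nil, false // can't take the fast path
            }
        }
        return set, true
    }

    func main() {
        if set, ok := literalAlternatives("prod|staging"); ok {
            _, match := set["prod"] // O(1) per value instead of a regexp run
            fmt.Println(match)      // true
        }
    }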
D
Which wasn't urgent, because of the thing you were talking about before, that we actually pre-optimize most of the common cases. But yeah, so Prometheus has a hook so that we can link it in, but we actually need to change the go.mod file to link it.
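For reference, the kind of go.mod change being described is a replace directive; the pseudo-version below is a placeholder of mine, not the real one:

    // go.mod (illustrative)
    replace github.com/prometheus/prometheus => github.com/grafana/mimir-prometheus v0.0.0-00010101000000-000000000000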
A
Okay, we can move to the next item on the agenda, which is actually from me.

A
Among, you know, the big stream of news from yesterday's launch, I think something we haven't put enough focus on yet is that our plan, in terms of what's coming next, is to make Mimir a general-purpose time series database. At the beginning of this call I mentioned it's a distributed TSDB for Prometheus, but we actually want to support other protocols as well. We want to be able to ingest metrics in multiple formats, like OpenTelemetry, Influx, Graphite, and Datadog.
A
And the main reason is to give people a smoother migration path in case they want to migrate from Influx, Graphite, and so on to Prometheus, to a Prometheus-like data storage.
A
There's already a PR open from Goutham. Goutham is not on this call today, no. It adds the OpenTelemetry support, and yeah, the other protocols will come next.
B
There was a question on the Grafana Mimir Slack channel about the Flux QL language and support, and the question was basically: do we plan to also support the query path, and different languages on the query path, including Flux QL, which is something I'd never heard about, to be honest. So, what's the plan there?
A
Another very interesting ongoing piece of work, which is happening right now, is about having the ability to ingest out-of-order samples in Mimir, and it's something Ganesh, Jesus, and Dieter are working on. So here we have a couple of people from the team working on it; I would be glad if you could share with us the latest news on it.
G
Yeah, so thanks, Marco, for bringing up the topic. We've been working on this for the past couple of months. As you know, Prometheus only supports samples that are in order, and we've been seeing scenarios where, for some reason, the clients had out-of-order samples, and Prometheus actually discards them. So we're trying to add support for them, so that we avoid losing data.
G
The work is public today with the release of Mimir: there is the mimir-prometheus repository, and there is a branch there where you can see what we have. It's a bit messy still, like, it's not ready to be merged into main, but it's being actively worked on right now. At this moment we're working on the ingestion path and the query path of the samples into the TSDB, and in the coming weeks we plan to work on the write-ahead log and the memory-mapped files lifecycle. The idea is to have, like, an MVP that we can test by the end of April.
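A toy model of the constraint being lifted, under my own assumptions rather than the mimir-prometheus branch itself: a Prometheus-style head appender rejects any sample older than the newest one already ingested, and out-of-order support relaxes exactly that check.

    package main

    import (
        "errors"
        "fmt"
    )

    type series struct {
        maxTime int64
        samples map[int64]float64
    }

    var errOutOfOrder = errors.New("out of order sample")

    func (s *series) append(t int64, v float64, allowOOO bool) error {
        if t < s.maxTime && !allowOOO {
            return errOutOfOrder // in-order-only behavior: discard
        }
        s.samples[t] = v
        if t > s.maxTime {
            s.maxTime = t
        }
        return nil
    }

    func main() {
        s := &series{samples: map[int64]float64{}}
        _ = s.append(100, 1, false)
        fmt.Println(s.append(90, 2, false)) // out of order sample
        fmt.Println(s.append(90, 2, true))  // <nil>: accepted with OOO support
    }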
G
We're slightly behind on this, but that's the entire idea. Yeah, I cannot actually show you more. Dieter, I'm not sure if I'm missing anything, in case you want to speak up.
H
Yeah, so we have a very hacky design doc that we use to communicate between ourselves, but now that mimir-prometheus is public, we plan to write a public-facing doc, which can explain things, in, you know, plain terms, to people who don't know the TSDB internals, and share it in upstream channels, like the upstream Prometheus channels, so it can be communicated with the community, because we want to donate it upstream eventually. So we want the upstream maintainers to also agree with the design that we choose.
A
So if you have any question, whatever it is, or if there's any topic you want to bring up to this call, yeah, this is your time, so please do it.
I
What's the next secret project we're gonna open source?
B
I would just mention, Jesus mentioned mimir-prometheus: it is a fork of Prometheus, we have a separate repository, but we don't plan to fork Prometheus for real. We only use that to keep our changes that are not yet upstreamed into Prometheus, and we are trying to push them into the upstream. So we don't have any plans to fork Prometheus, just to mention that.
G
Right, we're only using it because it speeds things up for us and we can test it without, you know, having our own branches and everything, but the idea is to upstream all the changes back to Prometheus eventually.
A
Yeah, we also opened an issue with the reference implementation, just to give you an example about upstreaming the TSDB changes we did for query sharding; there's still an ongoing conversation about it. But yeah, basically it's our playground, where we can quickly iterate on TSDB and PromQL engine changes before trying to upstream them.
A
So Ireland is asking: "Congratulations on the launch. Out of curiosity, how did you come up with the name Mimir?" There was an internal, you know, poll to suggest and vote on names. The only requirement was that it had to start with M, M as in metrics, in order to fit into our LGTM stack.
A
L is Loki for logs, G is Grafana for visualization, T is Tempo for tracing, M is Mimir for metrics. I don't remember who proposed Mimir. Does anyone remember?
E
Yeah, I think Tom pitched it originally because he liked the Nordic side of it, and in the Q&A with Raj he was like, it fits with our roots. They were really excited about that.
A
I think we can sum it up as: once we heard the Mimir proposal, we all loved it, so it was an immediate choice for us, probably.
A
And if you haven't seen it, we can give you the link to the 10-minute tutorial to run Mimir on your local machine using Docker Compose. I'm going to grab the link and I will share the doc.
A
Yeah, I have a question for Peter about this, because the reason why memberlist is not supported by the HA tracker is the propagation time: there's some latency between when you make a change to a distributed data structure and when this change is propagated across all the replicas. Peter, I remember that this decision came from, you know, the very early days of the memberlist implementation, but then, while testing it, we actually fixed or improved the propagation delay, and now the memberlist propagation delay is way lower than the initial one.
B
So I would ask you: what is the propagation delay now? Because any propagation delay longer than, I don't know, a couple of hundred milliseconds is really bad in the case of the HA tracker, right? We don't want different distributors to accept samples from different replicas at the same time. Yeah, Brian, feel free to...
D
I have something to say about that. So, this was another change that is in Mimir. Previously, the distributors would make the decision in band: a sample would come in, and the distributor would go, "my last decision was a long time ago, I'm going to decide this one is the primary", and it would do a CAS, a compare-and-swap, on the KV store, and wait for that to finish before passing on the sample.
D
So I think the problem may have been, as well as the propagation delay, the fact that it was coded as a CAS, which is something you can't effectively do in memberlist, or it doesn't really mean anything in memberlist, because you're gossiping. So this has changed in Mimir: the decision is made out of band. Basically, if you allow the decision to be 30 seconds out of date, then every 15 seconds it will remake the decision and store that back to the KV store.
D
So I think it will actually work a lot better now, in that you'll be able to have a propagation delay in seconds and it will still work. We might need to tweak it a little bit, because if you have 100 distributors, then 100 of them will consider making this decision, and we try to randomize which one... we try to get one of them to make the decision and not all 100 of them.
D
So you might need to tweak that bit, but yeah, I think the fact that we've moved the HA primary/secondary decision out of band is going to make it work better with memberlist.
D
When I talk about in band and out of band, I mean the samples are coming in, and the decision, the decision and the CAS, used to hold up the samples coming in. I made that change because of tail latency: as the system got bigger, the CAS might fail, because, like, two of them might have gone in at once, and it started to get more and more expensive as you make the system bigger.
D
So
we
move
the
thing
completely
to
the
background,
so
that
the
as
long
as
the
decision
keeps
being
made
the
same
way,
which
is
the
normal
case,
the
happy
path
the
samples
will
carry,
will
go
straight
through
they
don't
get
held
up
on
the
decision
and
then,
in
a
background,
go
routine.
We
make
the
decision,
we
do
a
car,
so
the
cars
itself
is
still
synchronous,
but
the
it
doesn't
hold
up.
The
samples
coming
in.
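A rough sketch of that in-band versus out-of-band pattern, under my own assumptions (the names and shapes here are invented for illustration, not Mimir's code): the hot path only reads a cached decision, while a background goroutine periodically remakes it against the KV store.

    package main

    import (
        "sync/atomic"
        "time"
    )

    type haTracker struct {
        primary atomic.Value // currently elected replica name
    }

    // acceptSample is the hot path: no KV-store round trip, just a read of
    // the cached decision, so ingestion is never blocked on KV latency.
    func (t *haTracker) acceptSample(replica string) bool {
        return t.primary.Load() == replica
    }

    // electLoop runs out of band: e.g. every 15 seconds it remakes the
    // decision and CASes it to the KV store, while readers tolerate a
    // decision that is up to 30 seconds out of date.
    func (t *haTracker) electLoop(interval time.Duration, casToKV func(string) string) {
        for {
            t.primary.Store(casToKV("prometheus-replica-1"))
            time.Sleep(interval)
        }
    }

    func main() {
        t := &haTracker{}
        t.primary.Store("prometheus-replica-1")
        go t.electLoop(15*time.Second, func(want string) string { return want })
        _ = t.acceptSample("prometheus-replica-1") // true on the happy path
    }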
A
But we have documentation on the HA tracker, at a high level.
A
Yeah. Previously you mentioned that if propagation times are, you know, high, or propagation is slow, you may end up with ingesters ingesting samples from multiple Prometheus replicas at the same time, but the failover triggers when we are no longer receiving data from one replica. So is the issue really real?
B
I was talking to you because I recognized your name from the Helm chart of Cortex, sorry, from the maintainers of Cortex's own charts. I was wondering if you plan to take a look at Mimir and maybe do some Helm charts there, and perhaps use the ones that we are working on; those should hopefully be public soon. I don't know what the plan there is, but we have some Helm charts in the works.
J
I work on the Helm chart professionally, and so before I could work on this project, I'd have to have our legal team sign off, because we have to get approval for everything that's AGPL. So, not yet.
I
Going from the Cortex Helm chart to the Mimir Helm chart, the only thing that's been actively worked on right now is that we didn't support service monitors as of the day-zero launch, or, you know, the hour-zero launch, but we now have support for service monitors in there, so you won't lose them if you migrate.
C
I think we're keen to be more active in maintaining this chart. I've been working on the enterprise metrics chart, which was previously this Helm chart and then completely diverged, and I think it was difficult for us not having more involvement and a more active maintainership of the sub-chart. So we're hopefully going to sub-chart it, and we'll be available for reviews for that, for both the enterprise chart and this chart.
I
But we do have some community maintainers there as well; we're not the sole maintainers in this repo. I believe we've got people outside of Grafana who are actively reviewing and merging pull requests, but it's something we need to focus more time on, I think.
A
Yeah, Jack, you mentioned the migration guide from Cortex to Mimir using Helm, and there was quite a lot of work to offer people who want to migrate from Cortex to Mimir a smooth migration process, including a tool to automatically convert the Cortex configuration to the Mimir configuration. Do you want to spend a few words on it?
I
Instead of having to explain all of the changes we've made in words, and explain why you might want to change them, or what you need to, you know, change this value from and to, this tool was perfect for automating that. It supports input in CLI flag form or config file form, and then just gives you the equivalent Mimir config. So you give it a Cortex config and it gives you the Mimir config back.
I
So, we've renamed some of the configuration flags, which I think were quite confusing in the Cortex config, and we've removed some that don't require tuning, so those also get removed automatically for you. The one that comes off the top of my head is auth.enabled, being the Cortex term for whether or not to enable multi-tenancy; that's renamed in Mimir.
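A minimal sketch of the mechanical rewrite such a converter performs; only the auth.enabled rename comes from the discussion above, and the exact Mimir-side name and the removed option are my assumptions for illustration.

    package main

    import "fmt"

    // renames maps old Cortex option names to their Mimir equivalents.
    // Treat the Mimir-side name as an assumption of this sketch.
    var renames = map[string]string{
        "auth.enabled": "multitenancy_enabled",
    }

    // removed lists options the converter drops because they no longer
    // require tuning (hypothetical entry).
    var removed = map[string]bool{
        "some.defaulted-option": true,
    }

    func convert(cfg map[string]string) map[string]string {
        out := map[string]string{}
        for k, v := range cfg {
            if removed[k] {
                continue // dropped automatically for you
            }
            if nk, ok := renames[k]; ok {
                k = nk
            }
            out[k] = v
        }
        return out
    }

    func main() {
        fmt.Println(convert(map[string]string{"auth.enabled": "true"}))
        // map[multitenancy_enabled:true]
    }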
I
But yeah, a brilliant tool. I don't think Dimitar's on the call, but he put a lot of work into that and has made it very effective, and I'm hoping that we can use it for all future releases as well, so we can give users a really convenient transition between all updates to the configuration file and codify those changes for them.
I
Yeah, I think we fixed some of the last of those quirks on the day of the launch. So hopefully, for people who want to try that, we can get close. I mean, if anyone wants to record a video...
K
Yeah, so, by the way, I think it's definitely kind of an interesting situation, but I think we already need to thank you, because we were literally going through, reading that code, just today, because we are implementing that for Thanos, for, like, a KubeCon talk. So we could totally, you know, dive into that and other features, and it's kind of a validation of our design. So yeah, we can still collaborate in some form and learn from each other. So yeah, that's great.
B
We have mentioned that we have this mimir-prometheus repository, where we have some, or a lot of, code that we wrote to change Prometheus to support the new features, and it's absolutely not our plan to fork Prometheus, but to upstream these changes as much as possible. And this is, of course, Apache 2 licensed code, so if you want to take a look at some of the details of how we did something, that's a good place to check out.
K
This is actually very good, right, because we share, you know, some kind of AST parsing and whatever, that we might want to extend, and stuff like that. And yeah, there are lots of things that take time to get upstream, like any of that sharding, to be honest; it takes time to get upstream. So yeah, if we can collaborate here... we have so many things like that, and we're wondering: should we just copy the code for now, so you can just deliver it, or focus on, like, properly upstreaming it?
A
We have people who are working, or at least experimenting, to add out-of-order samples support to TSDB, yeah, and that's one of the experimental, or let's say in-progress, pieces of work we are doing in the fork repo, but we plan to offer it upstream if the Prometheus community is interested.
K
Yeah, that definitely sounds cool. I just saw this item on the agenda, so we'll take a look. Yeah, pretty good, amazing.
K
Oh, maybe one question, since there's a silence here. I can see the point about using different ingestion APIs. Were there any troubles, or interesting, like, gaps, when you were implementing the OTLP support? I think you already did it; there is a PR, right? Any interesting worries or incompatibilities that you were kind of tackling?
A
Yeah, the people working on it are not on this call, so I'm not the best person to answer this question; Goutham is one of the people who worked on it. I'm just reviewing the PR, and the PR is actually pretty straightforward, but I don't know everything, you know.
D
It then translates the data into OpenTelemetry data structures; then they might go through a pipeline; then they get sent over the wire in the OTLP metrics format; then they come into Mimir, maybe, where they get translated back. So, basically, the two translations at each end is not great, you know.
K
Yeah, that makes sense, yeah, thank you. So I'm just, you know, trying to get the picture of whether this is something that people need, and, you know, whether Thanos or other systems have to support those other APIs, and I think we don't have those requests for now, especially given the collector, you know, amazingly, supports the Prometheus exporter with remote write, right? So yeah, it's interesting to see what marrying those two standards actually looks like inside and what problems you have. So I would take a look.
A
Yeah, thank you. The next call is on the last Thursday of April, and obviously, if you want to follow the development more closely, you are more than welcome to check out the Mimir GitHub repository.