From YouTube: For billion-series scale or home IoT projects, get started in minutes with Grafana Mimir
A: Thank you, Christopher and Mikhail, for the presentation. Welcome to the talk, Get Started in Minutes with Grafana Mimir. My name is Marco, I'm a software engineer at Grafana Labs, and I'm also a Mimir maintainer. Today, along with me, there's my colleague and friend Peter. Peter, would you like to introduce yourself?
B: [introduction not captured in the transcript]
A: In this talk, I'm going to give you an introduction to Grafana Mimir, our new open source, scalable time series database, and then Peter will show you how to get it started in a few minutes. Now, the three pillars of observability are metrics, logs, and traces.
With metrics, you can find that something went wrong, and then you can find more details exposed through the logs. Traces go a step further and allow you to have a detailed view of a single request, so you can see where in the code, or in which service of your distributed system, it went wrong. Metrics are the foundation of any observability stack.
A: One of the most common use cases is having a global view across your infrastructure. If you are familiar with Prometheus, the recommended way to run it is to keep it close to the monitored targets. The typical scenario is that you run a Prometheus server in each Kubernetes cluster, cloud region, or data center, and then you want an effective way to run any query across all the Prometheus servers' metrics.
A: For example, how can you query the 99th percentile latency for your application across two different regions? And then, what happens when a single Prometheus server is restarted or goes out of memory? If you don't have a highly available setup, you will experience gaps in your metrics, which effectively means you have a blind spot in your observability.
A: So we recently announced Grafana Mimir, our new open source distributed time series database. Mimir provides high availability, horizontal scalability, multi-tenancy, durable storage, and fast query performance. Mimir has been designed to be easy to configure and run at any scale, from small hobby projects to large companies with hundreds of millions or even billions of active time series.
A: Now, it's very important to understand that Mimir does not replace Prometheus; it aims to extend Prometheus. When using Mimir, you keep running Prometheus, configured to scrape metrics from your applications and infrastructure, and you configure Prometheus to remote write these metrics to Grafana Mimir. Then you configure Grafana to query the metrics from Mimir, leveraging Mimir's scalability and performance optimizations.
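For reference, the Prometheus side of that setup is a single remote_write block; the hostname below is a placeholder for your own Mimir endpoint, and the push path is Mimir's `/api/v1/push`:

```yaml
# prometheus.yml -- keep your existing scrape_configs, then add:
remote_write:
  - url: http://mimir.example.com/api/v1/push   # placeholder hostname
    headers:
      X-Scope-OrgID: demo   # tenant ID; needed when multi-tenancy is enabled
```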
A: Now, the advantage of using the Grafana Agent relative to Prometheus is that the agent is far more lightweight in terms of CPU, memory, and disk utilization. The agent doesn't store metrics locally, but just pushes them to Mimir. The agent is also suitable for being deployed at the edge; for example, you can have an agent running on every single machine in your infrastructure.
A: The typical setup is to run a pair of Prometheus servers or Grafana Agents per Kubernetes cluster, cloud region, or data center. These Prometheus servers run with the same configuration: they scrape the same metrics from the same targets, and they both remote write these metrics to a centralized Mimir cluster. Mimir then deduplicates the received metrics and exposes a Prometheus-compatible API to query them.
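As a minimal sketch, each replica of such an HA pair can identify itself through external labels; `cluster` and `__replica__` are the label names Mimir's HA deduplication expects by default (the tracker must be enabled server-side), and the values here are placeholders:

```yaml
# prometheus.yml on replica 1 of the pair (replica 2 sets __replica__: replica-2)
global:
  external_labels:
    cluster: eu-west-1      # identifies the HA pair
    __replica__: replica-1  # dropped by Mimir after deduplication
```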
A: So, related to the challenges I was showing you before, Mimir offers a global view across all your metrics, and it's deployed as a centralized cluster storing and querying all of them.
A: The long-term metrics data is stored in object storage, which means the cost of long-term retention is low and the capacity is virtually unlimited. Mimir employs multiple caching layers and some query acceleration techniques, like query sharding, to offer best-in-class PromQL query performance. Multi-tenancy is natively built into Mimir from day zero, and each tenant's data is isolated in the storage.
B: Thank you for the introduction, Marco. Mimir uses a microservices architecture; however, you are not forced to deploy every single Mimir component as a separate deployment. You can deploy Grafana Mimir in two different modes: microservices mode and monolithic mode. In microservices mode, the deployment follows the Mimir architecture, and components are deployed as distinct processes.
The monolithic mode runs all required components in a single process, and then you scale the cluster by running multiple processes across different machines. Monolithic mode is the simplest way to deploy and operate Grafana Mimir. Both modes are production-ready; for example, we have customers running monolithic mode at scale.
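As a sketch, starting Mimir in monolithic mode is a single command; `-target=all` runs all the required components in one process, and the config file path is a placeholder:

```shell
# Run every required Mimir component in a single process
mimir -target=all -config.file=./mimir.yaml
```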
B: Our Jsonnet library is currently more flexible than our Helm charts in terms of supported customizations. Both Helm and Jsonnet install Mimir in microservices mode. Binaries give you the freedom to run Mimir on any platform and infrastructure, using your preferred deployment mode; binaries are typically used by people who want to deploy outside of Kubernetes, or in monolithic mode.
B: During the demo, we will assume that Helm is configured with the Grafana repository of Helm charts, and we expect to have the Grafana Agent operator running in the Kubernetes cluster. We also assume that our Kubernetes cluster supports ingress, and that we have an existing Mimir hostname DNS entry pointing to the IP address of the ingress.
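Assuming that setup, the Helm side looks roughly like this; the release name `mimir` is arbitrary, and `mimir-distributed` is the chart Grafana publishes:

```shell
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install mimir grafana/mimir-distributed
```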
B: This will be used in the next step, when we configure the Grafana Agent to scrape metrics from Mimir. We also set the scrape interval for Mimir metrics to 10 seconds: by default, the Grafana Agent scrapes metrics every 60 seconds, but the operational rules provided by the Mimir team assume a lower scrape interval. Mimir requires an object storage to store data; in our demo, the Helm chart comes preconfigured to use MinIO, which is an S3-compatible object store.
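With the Grafana Agent operator, that 10-second scrape interval can be expressed through a standard ServiceMonitor; the selector labels and port name below are assumptions for illustration:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: mimir-metrics
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: mimir   # assumed label on the Mimir services
  endpoints:
    - port: http-metrics              # assumed metrics port name
      interval: 10s                   # override the 60s agent default
```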
B: Our configuration for the Grafana Agent needs to specify the URL for remote write. Since the Grafana Agent runs inside the Kubernetes cluster, it cannot use the ingress hostname, but we can send metrics directly to the Mimir distributor service. Before continuing our demo with the Grafana setup and the Mimir dashboards, let's import the rules provided by the Mimir team.
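The remote write target can be sketched as below; the in-cluster service name and namespace are placeholders for whatever the chart created in your cluster:

```yaml
# Grafana Agent operator MetricsInstance fragment (monitoring.grafana.com/v1alpha1)
remoteWrite:
  - url: http://mimir-distributor.mimir.svc:8080/api/v1/push  # placeholder service DNS
```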
B: This query checks the number of Mimir build info metrics, which are exported by each running Mimir pod; we can see 11 of them. Now we are ready to import some Mimir dashboards to monitor the state of our Mimir deployment. We start with the Mimir / Writes dashboard, which shows health metrics for the write path.
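The check described above can be run in Grafana Explore as a simple count; `cortex_build_info` is the build-info metric Mimir exposes (a name inherited from its Cortex lineage, so treat it as an assumption for your version):

```promql
count(cortex_build_info)
```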
B: You can also manage your Mimir rules and alerts via the alerting support in Grafana. In this view, you can see all installed rule groups and their state. You can also create new rules and alerts, or explore and modify the expressions. Mimir includes a rich set of production-grade Grafana dashboards and Prometheus alerts which you can use to monitor Mimir itself. The dashboards both give you an overview of the health and traffic of the system and allow you to drill down into performance, and into insights about potential issues for each alert.
A: This will happen in the following releases. Also, similarly to Prometheus, we currently have two main limitations on the write path: Mimir cannot ingest data points older than one hour compared to the most recent data points received, and Mimir cannot ingest out-of-order data points for a given metric.
A: This means any data point we receive must have a timestamp newer than the previous one for the same metric. We are currently working on ingesting samples with any timestamp, in any order. We plan to have experimental support in a few months, and a stable version by the end of the year.
C: Thanks, Marco and Peter. My name is Ben, I'm a product manager on the Mimir team, and I'm going to be helping moderate the Q&A session. So we've got a lot of great questions here, so I'm just going to jump right into it. The first one here is going to be for Marco: can you talk about how Mimir is different from Thanos? And then, do you have any performance comparisons between Mimir and Thanos?
A: Right, sure. Well, Thanos is a great piece of software, and the Thanos team has done a great job building, you know, a vibrant community. I really love and respect Thanos a lot. The Mimir storage engine has been inspired by Thanos, and we also reuse some parts of its code. Now, Mimir makes it even more scalable and performant.
A: We added features to scale it further, like, you know, the new scalable compactor or the query sharding engine. Mimir is also multi-tenant since day zero, and different tenants' data is isolated in the storage. We also support features like shuffle sharding, for better isolation between different tenants.
A: You also asked if we provide any performance comparison. We typically recommend users, and also our customers, to run comparisons by themselves. We don't want, you know, to provide a comparison based on synthetic data; we prefer that you run your own queries on your own data.
A: If you want to run benchmarks, in the Mimir repo you can also find a k6 script; k6 is a Grafana Labs load testing tool. To, you know, load test both systems, you set up Thanos, you set up Mimir, you load test both of them writing the same data and querying back the same data, and then you can compare the performance based on your use case, on your installation running on your infrastructure, with your own object storage, and so on.
C: All right, thanks, Marco. The next question here is for Peter. So, Peter, can you help folks understand: they want to know what format Mimir stores data in, in object storage.
A: Yeah, so we use the Prometheus TSDB format. So basically, we use the same data format on disk as Prometheus, and as Thanos as well.
A: You know, following up on the previous question, this makes it very easy to migrate from Prometheus to Mimir, or eventually from Thanos to Mimir, and eventually even back if you want, given that the storage is compatible between these projects.
B: And in the TSDB format, one sample, which is one timestamp plus one value in float64 format, would normally take 16 bytes without any compression. However, TSDB compresses that, and each sample takes about 1.3 bytes. So that's the compression that we provide.
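Those two figures make it easy to estimate storage needs; a small sketch (the series count and scrape interval are made-up example inputs, the per-sample sizes are the ones just mentioned):

```python
# Back-of-envelope TSDB storage estimate, using the figures from the talk.
UNCOMPRESSED_BYTES = 16.0   # 8-byte timestamp + 8-byte float64 value
COMPRESSED_BYTES = 1.3      # typical size per sample after TSDB compression

def daily_bytes(active_series, scrape_interval_s, bytes_per_sample):
    """Bytes written per day for a given number of active series."""
    samples = active_series * 86_400 / scrape_interval_s
    return samples * bytes_per_sample

# Example: 1 million active series scraped every 15 seconds.
raw = daily_bytes(1_000_000, 15, UNCOMPRESSED_BYTES)
compressed = daily_bytes(1_000_000, 15, COMPRESSED_BYTES)
print(round(raw / 2**30, 1), "GiB/day raw vs", round(compressed / 2**30, 1), "GiB/day compressed")
```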
C: All right, another kind of related storage question that came in here: is there any sort of concept of hot or warm storage in Mimir? Marco, do you want to take this one?
A: Yeah, sure. So, the most recent data is stored in and queried from the Mimir ingesters. If you run Mimir with the default configuration, the last 12 hours of metrics data are queried just from the ingesters. So, from this perspective, the ingesters are our hot storage.
A: All older data is stored in object storage, like AWS S3, Google Cloud Storage, or Azure Blob Storage, as mentioned during the presentation as well, and it is queried through a Mimir component called the store-gateway. The data, you know, the TSDB blocks, are never fully downloaded to disk.
C: Awesome. A question for you here, Peter, and maybe this is something we can clarify for the person that asked it. The original question was: is it a recommended best practice to separate Loki and Prometheus data as different tenants in Mimir? So maybe you can clarify there, with Loki being kind of a separate system, but maybe just go over kind of how you think about how you create tenants in Mimir, and what that tends to map to in the user's organization.
B: So, if you store the Loki data in the same bucket, Mimir will try to actively scan what's inside and will try to maintain its contents. However, we have recently introduced a new feature where you can configure Mimir with a specific prefix to be used in the bucket. So, using this feature, you might be able to mix the Loki and Mimir data; however, I would still recommend different buckets for this setup.
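The prefix feature mentioned can be sketched like this; `storage_prefix` is the option name in recent Mimir releases (check your version's configuration reference), and the value is a placeholder:

```yaml
blocks_storage:
  storage_prefix: mimir-blocks   # placeholder; keeps Mimir's objects under one key prefix
```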
B: So we have a Mimir tool; sorry, we have the metaconvert tool, which is able to modify the metadata of blocks that you manually put into your bucket. However, we are currently working on the backfill API, which will also be supported by mimirtool, and which will allow you to upload existing blocks from your Prometheus or Thanos into Mimir.
B: To migrate to Mimir at the moment, you need to manually copy your data and run metaconvert. In the future, we will have this backfill API, and mimirtool will support the API natively.
C: And then maybe just one thing I wanted to add there: you know, what Peter and Marco touched on was about kind of historic backfill of data. But if you're running Prometheus today, and you don't necessarily care about importing that historic data, there's nothing stopping you from just setting up a Mimir and starting to point your remote write at it, and there's no actual migration that's required.
C: I think we are at time here, so thanks, everyone. Sorry we didn't get to all the questions, but please come find us on the community channel or on our GitHub page. We're excited to keep growing the project. So, thanks!