From YouTube: Prometheus Deep Dive - Goutham Veeramachaneni, Grafana Labs & Bartłomiej Płotka, Red Hat
Description
The Prometheus deep-dive will present advanced use cases, in particular how to run and scale up a vanilla Prometheus setup for large organizations. A number of Prometheus maintainers will be around for the Q&A.
https://sched.co/Zey2
A: Hello everyone, welcome to the Prometheus deep dive. We are super excited to show you how to use Prometheus and how to take your Prometheus usage to the next level. Before we start, a quick intro. Hi, I'm Goutham, I'm a software engineer at Grafana Labs and I'm a Prometheus and Cortex maintainer. I actually started contributing to Prometheus about three and a half, four years ago, and later I started working on Cortex to provide a hosted Prometheus service, so I split my time between Prometheus and Cortex.
B: Amazing. Okay, my name is Bartłomiej Płotka and I'm an engineer working in the monitoring team at Red Hat. I love open source and solving problems. I am part of the Prometheus team and I'm also a co-author of Thanos, and on top of that you might know me from the newly created CNCF Special Interest Group on Observability, where we focus on cloud native observability topics and projects like Cortex, Thanos, Prometheus, OpenTelemetry and so on. So if you find this interesting, please visit us on the SIG's GitHub repository.
A: So let's start with a typical team, in this case the team that Kate and Tom are part of. They run an API service: they've successfully built and launched that API service and it works really well. But a couple of days later they notice that users are hitting 500 errors, and they're trying hard to debug what is throwing these 500s. Is it the load balancer? Is it their own application? Is the database locking up? Is the database being slow? What was the cause of it?
A: To debug this, Kate actually chose Prometheus and Grafana, the golden stack. She added exporters in front of the load balancer and the database, exposing all the metrics of the load balancer and the database, and she also instrumented the application itself to expose Prometheus metrics. Prometheus now collects all this data, and Kate can alert on 500 errors or other issues. She also added dashboards on top of it, so whenever there's an alert or an issue, they can directly look at a dashboard with all the information and quickly figure out:
A: Okay, the database is having issues, which is why we are throwing 500 errors. Their outages didn't stop, but whenever there was an outage they could very quickly figure out what was happening, what was going wrong and what to debug, and they became very quick at fixing all these issues. Looking at their success, the other teams also started using Prometheus, and, as is the Prometheus best practice, each team was maintaining their own Prometheus and their own Grafana, and all of this was working really well.
A: Now all the teams in the organization were using Prometheus, and as the organization and the application grew, they started deploying to several data centers. Again, as is the Prometheus best practice, you deploy one Prometheus per data center. Next slide, yeah. This is because Prometheus needs to be close to the application it's monitoring.
A: So in this case they were deploying to us-central1 and us-east2, and whenever there's an alert, the alert actually contains which Prometheus generated it. So if they're looking at an alert from us-central1, they go directly to the Grafana of us-central1, look at the system and see what was going wrong, and it was all working really well. Until...
B: Until at some point Kate and Tom wanted to figure out how to aggregate the data at a global level from the multiple Prometheus servers. So let's imagine that Tom wanted to answer a maybe simple question: what is the error rate of the HTTP requests received by his service? He wants to aggregate and know that rate, the sum of those rates, across all the clusters, and as you remember, in each of those there are separate Prometheus servers.
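For illustration, the question Tom asks of each individual Prometheus might look like the query below. This is a minimal sketch assuming the conventional http_requests_total counter with a code label for the HTTP status, which is not spelled out in the talk:

```promql
# Fraction of requests returning a 5xx status, per cluster:
sum(rate(http_requests_total{code=~"5.."}[5m]))
/
sum(rate(http_requests_total[5m]))
```
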
B
So
we
have
essentially,
we
have
to
aggregate
from
more
than
one
promote
use
servers
how
to
do
that
just
in
from
just
using
promote
use
alone
now.
Well,
you
cannot
use
query
api,
which
is
which
is
kind
of
the
thing.
The
first
thing
you
would
you
would
try
to
use,
and
this
is
because
chrome
ql
elevation
is
made
on
the
leaf
node.
So
once
you
have
the
data
from
two
sources,
there
is
already
chrome,
ql
evaluation
made.
B
So
you
would
need
to
have
another
layer
of
query
evaluation
to
be
made
to
adjust
and
to
add
additional
aggregate
like,
for
example,
sum
to
summarize
those
results
together
to
tell
you
the
overall
error
rate,
for
example.
Now
this
is
not
trivial
work,
because
there
are
lots
of
catchy
things
and
cabinets
like
additional
load
on
doing
prom,
ql
and
and
resolutions
and
steps.
So
all
of
this
is
not
easy
to
solve
with
using
using
query
api.
B: The second option is federation. You deploy another Prometheus server on top of those clusters, maybe in another cluster, maybe in one of those clusters, and you configure this Prometheus to scrape the federate endpoint that those leaf Prometheus servers expose. It scrapes like a normal scrape, very similar, and you even create a scrape configuration for that. This kind of works great until you have a huge, or, you know, bigger amount of data in those leaf Prometheus servers, because, as you can imagine, if you are scraping and replicating all of the data into the global Prometheus server, it grows at the same kind of pace as the leaf Prometheus servers.
B: You replicate all of this data, and suddenly you are doing that on a single instance that is running on a single machine. This causes lots of problems, so what we suggest as Prometheus maintainers and as the community is this:
B: You should only use federation for a subset of your data, and the easiest way to do that is to essentially create recording rules for the things that you want to have at the global level. That also, you know, reduces the cardinality of the recorded data, because you can sum-aggregate across those series, and then you configure federation to only federate, to only scrape, the recorded rules, via the match[] parameter.
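As a minimal sketch of that setup (the rule and metric names here are illustrative, not from the talk): a recording rule on each leaf Prometheus pre-aggregates the data, and the global Prometheus federates only those recorded series.

```yaml
# On each leaf Prometheus: a rules file that pre-aggregates per job.
groups:
  - name: federation
    rules:
      - record: job:http_requests:rate5m
        expr: sum by (job) (rate(http_requests_total[5m]))
```

```yaml
# On the global Prometheus: scrape only the recorded series
# from the /federate endpoint of each leaf.
scrape_configs:
  - job_name: federate
    honor_labels: true
    metrics_path: /federate
    params:
      'match[]':
        - '{__name__=~"job:.*"}'
    static_configs:
      - targets:
          - prometheus-us-central1:9090   # hypothetical leaf addresses
          - prometheus-us-east2:9090
```
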
B: Because of that, you also need to adjust the query that Tom makes, to query essentially the recording rule, not the raw data itself. Anyway, a valid and precise answer is available for Tom, so Tom is most likely happy. However, there are other options as well. The third option is really similar to the query API, where you just provide a query that has to be evaluated for your answer.
B: Instead of that, though, you go deeper and access the actual samples stored in the database of Prometheus, and the API that allows that is called remote read. With that API you are using a slightly different payload: it's not JSON, it's protobuf. And one system that should actually be familiar to you uses it:
B
That
is
tunnels
and
essentially
thanos
allows
you
to
add
a
side
card
to
each
promote,
use
that
using
that's
just
using
this
reit
protocol
and
then
expose
it
into
the
grpc
that
prompted
thanos
is
using,
and
then
you
essentially
allow
yourself
to
create
and
to
deploy
and
global
query
component,
which
does
the
pronquial
evaluation
on
the
global
level
having
the
data
from
each
promote
users
separately.
So,
and
this
is
how
you
can
essentially
transparently
have
the
global
view
without
recording
rules
and
without
replicating
all
of
your
data
multiple
times.
B: There are more challenges when we are thinking about, you know, bigger adoption and more users. So at some point Kate was kind of annoyed that she has a very short metric retention, only, you know, a couple of weeks. So she thought: hey, what about maybe a longer one, maybe years of data, so I can analyze my data? Maybe also some teams have a policy to store the metrics for, whatever, the whole life period of their service. So it is kind of a crucial requirement for some companies.
B: There is a misconception that Prometheus is not suitable for long-term storage and that you have to deploy some external system and have some integration. That's not always the case, because, especially from Prometheus 2 and the recent versions of Prometheus, we made sure that the older data that you store adds only marginal resource usage when it is not being queried. So when you are just querying short periods of time, and mostly, from our experience, people query, you know, the very fresh data, that older data, even for years of time, is not really increasing your resource usage. The resource consumption of a Prometheus server doesn't scale with the retention that you give it.
B: To make it easier, we would really recommend setting up, you know, some large disk, and trying to precisely plan the capacity of the disk space, because you want to avoid, kind of, you know, resizes and things like that after those years of time. So it's totally doable, and we have lots of users who are keeping, you know, two years of data on their Prometheus server.
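Retention is controlled with command-line flags on the server; a minimal sketch, where the concrete values are examples rather than recommendations:

```shell
prometheus \
  --storage.tsdb.path=/data \
  --storage.tsdb.retention.time=2y \
  --storage.tsdb.retention.size=500GB  # size-based retention, available on recent 2.x versions
```
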
B: However, yes, there are some trade-offs. After all, Prometheus was mainly focused on monitoring of recent data and on alerting, so there are some caveats. One of them is that it's super hard to efficiently, or like effectively, plan the capacity of your disks, because there are lots of unpredictable spikes, and it's sometimes hard to control the cardinality of the data you're ingesting. So it's not an easy task, after all. The second problem might be backups.
B
You
know,
if
you
have,
those
disk
hardware
can
fail
and
you
have
to
have
some
backup
plan
and
operational
kind
of
you
know,
structure
of
it,
some
scripts,
automation
and
it's
not
always
the
easiest
way,
especially
on
bare
metal
and
last,
but
not
the
least.
Well.
There
is
no
native
down
sampling
on
from
two
side,
which
means
that
the
large
range
queries
like,
for
example,
for
two
years
for
two
years,
will
fetch
all
those
samples
into
the
promptql
engine
and
fromql
has
to
run
through
all
those
samples
to
calculate
your
response.
B: And while this is doable, it will take some time, and there is definitely some room for some downsampling that would reduce the resolution that you don't need when querying such a long time range.
A: Cool. So now in the organization every team is using Prometheus, more and more users want to use Prometheus, and there are more use cases coming up. For example, let's say the marketing team comes to Kate and asks if they could use the data in their data lake or in another database, or maybe someone wants to take the Prometheus data and store it in a replicated or distributed database that is much better at long-term storage. The answer to all of these is remote write: Prometheus can continuously send the samples it scrapes to a remote endpoint.
A: Now, typically your Prometheus will scrape millions of samples a second, and sometimes you only want to send a subset of the data to the remote endpoint. Doing that is also extremely easy: you can use write_relabel_configs to specify what data you want to send over and what data you want to keep, or rather what data you want to drop, and it's a very powerful config. You can also manipulate the data before sending it over.
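A minimal sketch of such a config; the remote endpoint URL and the metric name are hypothetical placeholders, not from the talk:

```yaml
remote_write:
  - url: https://remote-store.example.com/api/v1/write  # hypothetical receiver
    write_relabel_configs:
      # Keep only the UI page-visit metrics; everything else is
      # dropped before it leaves Prometheus.
      - source_labels: [__name__]
        regex: ui_page_visits_total
        action: keep
```
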
A: So with this you can send as much or as little as you want. If marketing just wants to look at the UI page visits, you can send only the metrics for the UI page visits to a remote store. This is extremely popular, and we already have almost every single popular time series database out there integrating with Prometheus remote write and remote read natively.
A: So we have a non-exhaustive list of projects, about 25 of them, which support Prometheus remote read and remote write, and you will most likely see all your favorite TSDBs in there. But in case you want to send Prometheus remote-write data to a different database that's not in the list, one that doesn't have an existing integration, it's extremely easy to build one.
A: We use the same protobufs; if you're familiar with gRPC, you will be familiar with this. So in this case we are basically sending a protobuf message called WriteRequest, which is a list of time series. Now, what is a time series? A time series is basically the labels for the time series and the samples in that time series.
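Abridged, the wire format described here looks like the following; the full definitions live in Prometheus' prompb package:

```protobuf
message WriteRequest {
  repeated TimeSeries timeseries = 1;
}

message TimeSeries {
  repeated Label labels   = 1;
  repeated Sample samples = 2;
}

message Label {
  string name  = 1;
  string value = 2;
}

message Sample {
  double value    = 1;
  int64 timestamp = 2;
}
```
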
A: So it's an extremely simple protobuf: you just create the protobufs and send them across, like Prometheus creates this protobuf and sends it across, and if you can accept this protocol, you can store all the data that Prometheus sends to you. Now, one of the typical use cases for this is actually long-term storage.
A: This is also another solution for the global view problem that Bartek was talking about before. If you want to combine the data between the us-central1 and us-east2 Prometheus servers, you basically just remote-write the data from both Prometheus servers to a central cluster, and because all the data is in a central place, you can query that single central cluster and you get all the data that you want. One example of a project that you can use for this is Cortex, which I help maintain; it's also a CNCF project.
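As a sketch, each leaf Prometheus would carry an external label identifying its cluster and push to the central endpoint; the Cortex URL below is a hypothetical example:

```yaml
global:
  external_labels:
    cluster: us-central1  # distinguishes this Prometheus' series in the central store
remote_write:
  - url: http://cortex.example.com/api/prom/push  # hypothetical Cortex push endpoint
```
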
A: You can also use Thanos or M3 or several different time series databases out there. So this was about remote write. I also want to talk to you about metadata; this is extremely useful as you scale your teams. One of the things that we noticed is that new people come to the teams, they look at dashboards, and sometimes it's hard to understand what a particular PromQL query is doing.
A: PromQL itself is simple, but if you don't understand what the metric means, it's extremely hard to figure out what the query means. In most cases you will be able to figure it out just by looking at the metric name, for example node_cpu_seconds_total: from this you can kind of guess, okay, this is the CPU usage per node. And the reason behind this, behind the obviousness, is that we have an exhaustive and really nice guide on how you should name and label your metrics, and the labels always follow the naming best practices.
A: But sometimes when you deploy an exporter, it might not follow the best practices, or even after following the best practices, sometimes the name of the metric or the query is not clear, for example kube_node_status_capacity: you kind of don't understand right away what it is. To help with that, we actually expose the help and type information for every single metric as part of the exposition format. So if you hit the /metrics page, you will already see, for example in this case, the help and type for the go_gc metrics.
A: In previous versions Prometheus was not able to store all of this data, but in recent versions, starting with 2.17 I think, we actually store and expose all this metadata inside Prometheus itself. So we now have the metadata API, where you can query for the metadata of a particular metric. Next slide.
A: Here you can see it's /api/v1/metadata. You can specify the limit, the number of metrics you'd like returned, and also, if you want to look at the metadata of one particular metric, you just pass in the metric name and it will give you the metadata for that.
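For example, against a local Prometheus (go_goroutines here is just a stand-in for any metric name):

```shell
# All metadata, capped at 100 entries:
curl 'http://localhost:9090/api/v1/metadata?limit=100'

# Metadata for one particular metric:
curl 'http://localhost:9090/api/v1/metadata?metric=go_goroutines'
```
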
A: You can already use this without hitting the API yourself: it is part of the UI in Grafana 7.0. So if you are using Grafana 7.0, whether in the Explore tab or even in the dashboards, whenever you type in a metric name it helps you figure out what the metric is. In this case you can see that kube_node_status_capacity_cpu_cores is a gauge and it's the total CPU cores of the node. So this is super useful, especially for newcomers, to understand what is happening.
A: Just as an aside, we have a new React UI as part of Prometheus. So if you are running Prometheus, we have a shiny new "try experimental UI" button; just click on it and you will be taken to the new React UI. We're actively developing this UI, and the same feature, the metadata expansion, will be part of this UI as well, and we would like all of you to try out this UI.
A: If you have bugs, please file issues, and if you're a front-end developer, please contribute to all the issues around the new UI. This is going to be the future UI of Prometheus, and don't worry about breaking things: if something breaks, you can basically click on "classic UI" and go back to the old UI. All right, so, on to the future of metadata: we are just not done with metadata yet. We currently store metadata in memory, but in the future we want to persist this metadata.
A: So over time you will be able to see how the help text and types of each metric evolved. We also want to be able to write all this metadata to remote systems like Cortex or Thanos, so that even when you're using a remote system, you have the same APIs and the same data.
A: We already have a PR for it, and there was a lot of discussion around how this PR should be structured; it was a point of contention. But in a recent dev summit that happened a few weeks ago we reached consensus, and hopefully, you know, in the next release or two you will have remote writing of metadata.
B: Thanks, Goutham. In the same dev summit we actually discussed more items, and one of the further discussion points was backfilling, which is a very much wanted feature of Prometheus. So imagine, you know, our team again.
B: Kate really wants to import metrics into Prometheus from other systems. Maybe she is using, you know, some other Prometheus or some other system, and she wants to migrate the data into her existing instance. So how does she do it with Prometheus, right? There was always this kind of way of migrating, of allowing you to query your data from other systems using Prometheus: it's called remote read. You can configure your Prometheus to read from an external system, for example InfluxDB.
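Configured on the Prometheus side, that looks roughly like this; the URL is a hypothetical InfluxDB 1.x address, which exposes a Prometheus-compatible remote-read endpoint:

```yaml
remote_read:
  - url: http://influxdb.example.com:8086/api/v1/prom/read?db=prometheus
```
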
B: So every time you query something, it will go to that system. But this is not really backfilling, right? Because backfilling means that you want to import the data into your Prometheus storage directly, to persist it and use it in the future. So how do I do this? Maybe it would be super amazing if, you know, Kate could just write some CSV file, which is easy to, you know, play with and generate, and just have it imported into the Prometheus storage. Now, well...
B: This is, or will be, possible very, very soon. We started the actual work on adding support for importing from two file formats, CSV and OpenMetrics, and it will look like this: you will essentially have a tool called tsdb import, and you just pass the file, which will stream the data into this tool, which will generate the blocks that are, you know, parsed and understandable by Prometheus' TSDB database.
B: And, well, I don't have any CSV file handy anywhere, so maybe let's do something quick. I will go to the Robust Perception demo Prometheus server and pick some nice interesting metric to import. I really like this one: a one-hour range of go_goroutines for all four services. So let's just grab that. I quickly wrote a very handy bash script which essentially queries that server and generates a CSV file from it.
B: So let's quickly run that. In the CSV file I can specify the different fields by headers, so let's show it: you can see that I kind of defined the type of each field by the field name in the header. So, you know, all those label name and label value headers define essentially which part of a CSV row is a label value or a label name, and where you put the timestamp and where you put the value, and things like that.
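The exact header convention comes from the in-progress pull request, so the shape below is purely hypothetical, just to illustrate the idea of typing columns via headers:

```shell
# Hypothetical CSV: the header names mark which columns are label
# names/values and which hold the timestamp and the sample value.
cat <<'EOF' > go_goroutines.csv
metric_name,label_instance,timestamp_ms,value
go_goroutines,demo.robustperception.io:9090,1600000000000,42
EOF
```
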
B: So now we've generated like 1000 rows of data. Let's install our tsdb tool. You can install it via one such command, where I just go get, and essentially I'm pulling a certain commit that is part of my pull request, so this is ongoing work, but it should be available soon. Once this is installed, I can hopefully run the tsdb tool to generate my block. Once this is done, let's check the help of the tool: you can see the new available commands, import openmetrics and import csv.
B: Let's actually do that, right? So we will cat our file into the tsdb import tool and output that into some directory. This is actually kind of fast, because we don't have much data: it's only four series, and it generated one block, a one-hour block essentially, that has the following ID. You can see that the block is written and has all the necessary files for our TSDB to understand it. So let's just run Prometheus on it; I will just create an empty configuration, it doesn't matter.
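A sketch of those two steps; the output directory name is simply wherever the import step wrote the block:

```shell
touch empty.yml  # an empty config is fine, we only want to read the imported block
prometheus --config.file=empty.yml --storage.tsdb.path=./output
```
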
B: So, as you can imagine, the up metric is not there, because we only uploaded go_goroutines, and that is available. So when we query for the time range we need, we will see the data available for us, and you can see that this data is exactly the same as on the Robust Perception server; the Robust Perception server just has more of it.
A: That was great. To summarize: we initially talked about how to do global view, how to aggregate data between several different Prometheus servers and the different ways you can do that, and then we talked about remote write and long-term storage.
A: How you can use just Prometheus, what the caveats are, how you can use remote write to write to a different server, and how you can do global view through remote write. We talked about metadata, one of the new features that I'm super excited about, and finally we also talked about backfilling and the future features that are coming for backfilling. This is something I'm super excited about again, because it will help people migrate from older systems to Prometheus with all their data.
A: All right, that's it. If you have any questions, feel free to ask them now, or you can reach us via the Prometheus community or on GitHub. Thank you.