From YouTube: What's new in the Prometheus ecosystem?
Description
Don't miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe in Amsterdam, The Netherlands from 18 - 21 April, 2023. Learn more at https://kubecon.io The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
A: Okay, sorry about that, we had a bit of an echo here. Hello everyone again, and welcome to another episode of OpenObservability Talks. I'm your host, Dotan Horvitz, and here at OpenObservability Talks we talk about anything DevOps, observability and open source. This is actually the closing episode of 2022, and I just got the end-of-year podcast statistics from Spotify, so I'm glad to share that we have listeners from over 40 countries, with around a 270 percent increase in followers, and several hundred listeners ranked us in their top 10 podcasts.

So, first of all, thank you so much for joining me on the show, for following us and for giving us a high ranking. The show is available on all the popular podcast apps, Apple, Spotify, Google and more, as well as on YouTube. So if you are not yet a follower, go ahead and join us.

I'd also like to thank our sponsor, Logz.io, the cloud-native observability platform. Logz.io takes the best of breed open source projects such as Prometheus, OpenSearch and Jaeger, and offers them as a unified observability platform built for scale. For those joining the live stream on YouTube or Twitch, feel free to share questions and comments in the chat. We'd be more than happy to take them into the fireside chat. And with that, let's move on to today's episode. When I was at KubeCon in Detroit, I was amazed by how many updates there were in Prometheus and the Prometheus ecosystem, and then again a few weeks later at PromCon in Germany. So I decided to ask Julien Pivotto to join me to discuss all these updates. Julien is a maintainer of Prometheus. He's also a co-founder of O11y, a company that provides support for open source observability tools such as Prometheus, Thanos and Grafana, and he's definitely an authoritative source for all things Prometheus. So let me invite Julien into the live stream. Hey Julien!

B: Hello, hello!

A: Glad to have you here on the show, finally.
B: Yes, yes, glad to be here for the very last episode of this year, yeah.
B: Yes, I have been working since 2011, and I have always been focused on open source in my work, because that's how I like the software I'm using, and I have been contributing to several open source projects. I have also been involved in the automation world, with tools like Puppet and Foreman, and since 2017 I have also shifted towards Prometheus and monitoring. I already did some monitoring before, but I really got started with Prometheus when I saw its potential. From 2017 to 2019 I was a contributor to Prometheus, and I joined the Prometheus team in 2020, becoming a Prometheus server maintainer in 2021. And that's what I'm working on now: I spend most of my time on Prometheus, supporting customers and developing the software further, and I'm also involved in the community around several exporters. So there are different projects to be in, and that's where I am now in my career.
B: The project itself is still a bit older than that, but you get to select a date, and that's the actual date that we have, because the earlier beginnings are lost in time. So yeah, we celebrate 10 years since the first commit of Prometheus, even if the world has only known Prometheus since 2015. But yeah, Prometheus is quite an old and mature project.

As you know, it's the second project in the CNCF, which is also a great achievement, and by coincidence it also has a Greek name, just like Kubernetes, which is a Greek word. So yeah, Kubernetes was the first project in the CNCF, and it's quite nice to see all of that history happening. Yeah.
A: It's amazing. So maybe before diving into the latest updates, let's zoom out a bit and look at Prometheus as a project. We don't need to do the historical review, but maybe, as you perceive it, what's the current, let's say, mission statement? Where is the project heading at the macro level, and what are the major items that you see the project heading towards and evolving?
B: I also want the project to continue to be very simple to use, which means that, for example, when I'm giving a training about Prometheus, the same trainings have worked over the years, because I want the basics to always work the same. And it's very important that when we add complexity, we can still have newcomers coming in and using the project quite easily. That's a key point of Prometheus: it is really easy to set up. You just need to start the binary.

You have your server and then you configure it the way you want. In the way that it is evolving now, we are actually getting a lot of users from many different areas. We used to be monitoring for infrastructure, and now we see that we are monitoring a lot of different things, because we just take in a lot of data and we enable you to query it the way you want. So we have seen people monitoring wind turbines. We have seen people doing their home monitoring, like electricity consumption, and everything around that.

So we are really versatile in what we can offer and in what people are doing with the project, which is also very nice, because as an open source project, you should be just free to use it in the way you want. If you want to use it in production, in your kitchen, in your basement, anywhere you want, you can run Prometheus. It's available for everyone, for every purpose. What we are also seeing for the future is that we are evolving with the community and the world around us, like in the last Dev Summit.
A: Yeah, it's funny: when I saw the update on adding the trigonometric functions, it just showed me how many different use cases there are that we might not have imagined originally for Prometheus, ones that are maybe less common in IT infrastructure but much more common elsewhere. You get requests for features that expose entirely different domains of use cases. I think one of the most impressive things for me about Prometheus as an open source project, and maybe even a de facto standard in many senses, is the rich ecosystem.
A: The fact that many frameworks and many tools today expose their metrics in this format out of the box, and provide all sorts of integrations, is, I think, the charm, because essentially, whatever you have in your stack, if you run Prometheus, it's very easy to start collecting metrics.
A: So let's start. I think it's a good point maybe to start covering where we stand with Prometheus and the updates there. Can you share with us maybe about the service discovery elements that have been added, and where you're heading in this respect?
B: Yes. So the first point that we had regarding scraping and service discovery was the agent mode. It's quite recent: it was added, I think, two years ago. It enables you, if you only want to use Prometheus to integrate with projects like Thanos, Cortex or Mimir, where you just want the remote write, to do that without the local time series database, which means that your Prometheus server will use a lot fewer resources than it would need otherwise.

So we listened to the users. They said: okay, we have a global solution, we have a provider that will just ingest the Prometheus data. And we agreed: okay, let's do something trimmed down, so that you can continue using Prometheus with a lower footprint.

You have to make some trade-offs, but we can lower the footprint of Prometheus if you need to run it on only one node, or if you need to run it at the edge of your cluster. You can just do it with the agent mode. So that's one overall improvement that we have: a completely new mode of operating Prometheus. Then, regarding the ecosystem, we have added a lot of service discoveries.
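The agent mode described above can be sketched as a minimal configuration; the scrape target and the remote write URL below are placeholders, not from the episode:

```yaml
# prometheus.yml for agent mode: Prometheus only scrapes and forwards;
# there is no local querying and no rule evaluation in this mode.
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["localhost:9100"]   # e.g. a node_exporter

remote_write:
  - url: "https://metrics.example.com/api/v1/push"   # your Thanos/Cortex/Mimir endpoint
```

Started with `prometheus --enable-feature=agent --config.file=prometheus.yml`, this runs with a much smaller storage footprint than a full server.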
B: The last one is the OVHcloud service discovery. We have also had a lot of different cloud providers coming to us wanting to add support for their clouds. Off the top of my head, Linode and Vultr joined the project to integrate their monitoring solutions.
B: Before that, we had only the file-based discovery, so you needed to write a file on the Prometheus server, which is not super handy, and that's why we added the new HTTP service discovery. And that's a very important point when I am trying to drive Prometheus forward: if you are using a sidecar next to your Prometheus server, let's look at it together. Do we still need that sidecar? Can we integrate it upstream as the project is growing?

We want to meet users' needs. We need to remove the barriers to using Prometheus. We really want it to work better for you, to work better for the users, so we are really working with the community to integrate all those different use cases directly in Prometheus. We want to reduce the cycles. We really want to make it easier to use Prometheus, and it's also always better, when you run a monitoring solution, to have fewer moving pieces, you know, because if you have a sidecar, you need to monitor it too.
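As a sketch of the HTTP service discovery just described, Prometheus polls an endpoint you control instead of reading a file on disk (the URL is a placeholder):

```yaml
scrape_configs:
  - job_name: my-services
    http_sd_configs:
      - url: "https://sd.example.com/targets"   # your discovery endpoint
        refresh_interval: 1m                    # how often to re-poll it
```

The endpoint is expected to answer with a JSON list of target groups, for example:

```json
[
  {
    "targets": ["10.0.0.1:9100", "10.0.0.2:9100"],
    "labels": {"env": "production"}
  }
]
```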
A: I think that point you made is also very important: the emphasis of the project on productization, on making it as easy and as friendly as possible to integrate in all kinds of environments, with all their constraints. I think one of the other examples of that relates to the rich suite of service discovery options available.
A: There are a lot of them, but then again, obviously, for a specific environment you don't need all of them, and the new plugin system lets you compile Prometheus without any unnecessary service discovery plugins, so you can compile it down to only what is relevant for you and keep it lean. Maybe you want to say a word about this new plugin system?
B: Yes. So back in 2021, I think, or 2020, a user came in and said: okay, I want to help with one issue in Prometheus, which is that instead of the config package of Prometheus depending on all the service discoveries, I want the service discoveries to depend on config and to register themselves. And that now enables us, when we add a new service discovery, to let you opt in or opt out of that service discovery when you compile Prometheus. It is also possible to use out-of-tree service discoveries.
B: So if you really want to use service discovery in Prometheus in your own way, and you don't want to share it publicly, or we cannot integrate it upstream, you can also just compile your own plugin and simply integrate it into Prometheus. So we have a plugin system at build time that enables you to build in ad hoc service discovery plugins.
B: Some of the tests have shown that if you only pick one or two service discoveries, you can save up to 60 percent of the binary's disk usage.
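As a rough sketch of how the build-time plugin system is used (paths refer to the prometheus/prometheus repository; check the `plugins.yml` in your checkout for the exact entries):

```shell
# In a checkout of github.com/prometheus/prometheus, trim plugins.yml down
# to the discovery packages you actually need, e.g. keep only:
#   - github.com/prometheus/prometheus/discovery/kubernetes
# Then rebuild; the generated plugin registration file will import only those,
# producing a smaller binary.
make build
```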
A: Wow, that's amazing, that's really impressive. You know, we jumped right into the details of service discovery. Maybe you should give just a quick introductory word for those who are not familiar with the system. Essentially, service discovery is the way that Prometheus discovers its targets, the components in the architecture, for scraping, for pulling the metrics. Can you give us just a quick word about the context of the service discovery mechanism?
B: Yeah. So service discovery is actually a big selling point for Prometheus, because when you have a monitoring solution, if it is not aligned with your infrastructure, you get a lot of false alarms, or you don't see some services. Service discovery means that Prometheus can automatically connect to the source of truth in your infrastructure to know exactly: okay, what is in the infrastructure? What should be there? What should I monitor? And when you unregister or register new services, the monitoring will be adjusted.
B: So, for example, we natively connect to Kubernetes, to Consul, to a lot of different cloud providers, to Docker. We can connect to a lot of different sources that actually drive your infrastructure, and so you only need to change those sources, and the new targets will directly be scraped by Prometheus. Prometheus comes out of the box with a lot of different service discoveries, and we have added more than 10 in the last two years, I think.
B: So it's really growing a lot, with a lot of demand, and it's really fun to manage, and to personally work with all those cloud providers to test everything out, so that when you come to Prometheus, we already have support for your cloud provider. And on top of that, not only do we fetch the targets and decide which targets we should monitor for you, but you also have the flexibility to attach labels to your targets. So if you monitor virtual machines, you can have a label with the data center they are in.

You can have a label with anything you want, basically. If you have some tags applied to your virtual machines, as many providers allow, you can select the ones you want to monitor based on those tags. So we really offer a lot of flexibility in the way that you fetch your targets, so that Prometheus knows: okay, I need to monitor that target.
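A sketch of what that looks like in practice, using the EC2 discovery's `__meta_*` labels as one example (each service discovery documents its own set; the tag names here are made up for illustration):

```yaml
scrape_configs:
  - job_name: vms
    # a cloud SD block such as ec2_sd_configs or ovhcloud_sd_configs goes here
    relabel_configs:
      # keep only instances carrying a "monitor=true" tag
      - source_labels: [__meta_ec2_tag_monitor]
        regex: "true"
        action: keep
      # copy the availability zone into a permanent "datacenter" label
      - source_labels: [__meta_ec2_availability_zone]
        target_label: datacenter
```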
A: Yeah, that's amazing. And actually one of the extra bits of flexibility that you released recently in the project is the ability to even have the scrape interval adjusted, because maybe the same interval doesn't fit all of the target types. So we can actually have an interval defined per specific labels, right?
B: Yes. What's important is that we want to make it easy for you to decide how you want to monitor each target, and we also don't want you to have to make 10 different API calls to your cloud provider. So we are bringing more capabilities to a feature called relabeling, which means that, based on the set of initial information you get from the cloud provider or from your orchestration layer, you can decide: okay, that target I want to monitor at a certain interval.

So now you can change the scrape interval and the scrape timeout for each target individually.
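Per-target intervals work through the same relabeling mechanism, by writing to the special `__scrape_interval__` (and `__scrape_timeout__`) labels; a sketch with an invented tag:

```yaml
relabel_configs:
  # instances tagged "scrape=slow" get a 5-minute interval instead of the default
  - source_labels: [__meta_ec2_tag_scrape]
    regex: slow
    target_label: __scrape_interval__
    replacement: 5m
```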
A: That's amazing. And you mentioned just before the agent mode; I want to say a word for the audience that is less familiar. Essentially, Prometheus has so many facets to it and so many capabilities: the auto discovery, the scraping capabilities we just talked about, and then also the time series database that we're going to talk about, among others.
A: So the agent mode is sort of a lightweight version or mode of Prometheus that takes away the database piece, which is obviously very heavyweight, in case there is a back end, a database, whether Thanos, Mimir, Cortex or any other that is Prometheus compatible, so that you need the Prometheus piece solely for the scraping, for the fetching of the metrics. That's the agent mode that you mentioned, which was released one or two years ago and is gaining a lot of popularity for this type of architecture.
B: So people are just using a lot of different backends to store the metrics at that point, and it is really amazing to see all of these backends working together to get the best out of the Prometheus protocol.
A: Actually, that's why I think the title "Prometheus ecosystem update" is adequate: because a lot of the innovation, or the new stuff, happened also in the surrounding, peripheral elements around Prometheus, which is enabled by Prometheus's capabilities for remote write, remote read and the other APIs that allow for this integration. So I think it's definitely a point of strength for the project. And we mentioned the time series database.
A: So maybe first, if you want to say a word about what a time series database means, and then specifically what Prometheus offers, and then we can go on to discuss some new stuff that was added there.
B: Prometheus stores this data in a time series database. It is basically blocks of data written every couple of hours, and those blocks are basically immutable; then we compact them together to optimize the disk space, the query time and that kind of thing. This is all managed by Prometheus, so you don't have to maintain anything. The only input that you give is: I want to keep that amount of data for that much time, and then Prometheus will just deal with it all on its own. So it's a database.
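That single retention input is given as server flags; for example (the values here are illustrative):

```shell
prometheus \
  --storage.tsdb.retention.time=15d \
  --storage.tsdb.retention.size=100GB   # whichever limit is hit first applies
```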
B: You can only query it with Prometheus, and yeah, it's very much tailor-made for Prometheus. It's built on top of the Gorilla paper from Facebook, so it's very efficient at compacting the data and at encoding the data in a very efficient way, and we are still looking into ways to improve that further. So it should still improve in the future, in the way that we create those blocks, to make them smaller and easier to query.
A: And one of the nice things that you added recently is the ability to backfill data, to do out-of-order ingestion, in case we need to fill in some gaps of information, right?
B: Yes. So there are many use cases. The easy way to do that is by using the remote write protocol, which you can basically see as a replication protocol for Prometheus. It is the way that Prometheus writes to Thanos or to Mimir, but you can see it just as a replication protocol: if you use it between two Prometheus servers, they will simply replicate the data.
B: We have added backfilling of data, which means that if you want to fill in data from the past, you can now do it with the TSDB, and the data will be nicely put into the right blocks later on. This is still very early, but it brings new use cases and new migration paths: if you move from another monitoring solution, you could use it to backfill some of your data that was generated in a different way.
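The backfilling path mentioned here is exposed through promtool, which turns samples in OpenMetrics text format into TSDB blocks that Prometheus then picks up (file names below are placeholders):

```shell
# create blocks from historical samples, then move the resulting block
# directories into the Prometheus data directory
promtool tsdb create-blocks-from openmetrics metrics.txt ./blocks
```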
B: Next to that backfilling, we are also adding support for native histograms. Basically, it means that you can now calculate your percentiles more easily and with more accuracy, and you don't need to define in advance the buckets that you want. With a classic histogram, basically, you have a set of buckets and you observe: okay, one of my queries is coming in, it is taking nine seconds, so I'll put it in the matching bucket, say the one from zero up to ten seconds.
B: Then you can ask: okay, how fast did I reply to 99.5 percent of my requests? And you get a number. The new histograms enable you to get a very precise number, relative to the old way histograms were working. It's really nice to see that improvement coming, and that the Prometheus TSDB can now store more complex types than just plain samples, which is also more efficient, because we don't need to repeat all the labels 10 times in the time series database.
B: Now we just have one object coming in, and it's a lot easier to handle. It's really important when you want to calculate SLOs, when you want to really know what's going on: it is always better to have the closest possible estimation of the real result, and that now comes more or less for free with Prometheus.
B: You need to use the correct client library, and then the rest will just work automatically. It requires some adjustments to the queries that you make, but at the end of the day, this should work pretty well for most users. So I really recommend having a look at those native histograms, because they will really help you, and they let you avoid things like calculating averages and that kind of thing. They are a lot more practical than just trying to guess your latency: you can get the correct number now.
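In PromQL, the adjustment shows up in how `histogram_quantile` is called; the metric name below is an assumption for illustration:

```promql
# classic histograms: quantile over the predefined _bucket series
histogram_quantile(0.995, sum by (le) (rate(http_request_duration_seconds_bucket[5m])))

# native histograms: one histogram-typed series, no _bucket suffix and no "le" label
histogram_quantile(0.995, sum(rate(http_request_duration_seconds[5m])))
```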
B: Yeah. So currently, when you define a histogram, you need to say: okay, I want these buckets, and then you need to name each bucket, like: I want to look at two seconds, one second, half a second. So you need to know up front what's going on. The thing is that when you have an actual incident, well, who knows which bucket you would have needed, right?
B: Some incidents are clearly outside of what you could expect, and then the only thing your alert will tell you is: oh, the requests take more than 10 seconds. But you don't really know whether they are blocked forever or whether they take 11 seconds. So it's really important that we remove that kind of prerequisite, that you need to define your buckets up front, because also, when you respond pretty fast, you don't need that 10-second bucket, right? So those native histograms...
B: They make a big change in that regard, and it's really a great added value for everyone in the community to just have better data. That's the key point here: we want you to have better data.
A: And it's pretty early on with the native histograms, right, in terms of maturity? Do you want to say where it stands and what the timeline is for releasing that?
B: They are now an experimental feature, but there are multiple people working on them, some full time. As of the last time I checked, it was out in the latest Prometheus release, 2.40.0, and there have been quite a few bug fixes on that tree since. So now we have 2.40.7, and a couple of those releases fixed issues with the native histograms, so they are already used by people, who noticed some issues. It's really nice to see that the community is already engaging with them.
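In the 2.40.x line the feature sits behind a flag, and instrumentation libraries also need a version recent enough to produce native histograms (recent Go client releases, for instance, expose an opt-in for them):

```shell
prometheus --enable-feature=native-histograms
```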
A: Sounds good. Another thing that I found very exciting is the topic of exemplars. Maybe just for our listeners: we talk about observability a lot here on the podcast, and a lot of the aspects of observability go beyond just monitoring, to the ability to correlate the monitoring and the metrics with additional types of telemetry, to get a broader picture. Exemplars are the tool to actually correlate from a metric to respective exemplars, or samples shall we say, of, let's say, traces or other signals. Would you like to say a word about what we're aiming to achieve with Prometheus and where we stand with this feature?
B: Yes. So exemplars, basically: when you see in your histogram that you have a spike in your response time, you can check the exemplars, which you can enable in the Prometheus UI or in the Grafana UI, and then you can see: okay, at that moment I have an example of a request that took a long time. And when you click on it, you are brought over to Jaeger, for example, which will tell you, okay...
B: This is the full trace of a query that took a long time. Then you can start seeing: okay, where did the query spend its time? Why did the query take so long? You can actually see that it was in this piece, in this span, and then you can really investigate directly. So you don't need to jump from one tool to another and try to guess, because you have the correct exemplar attached to the metric, with the correct timestamp and the correct value.
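Exemplar storage is likewise an opt-in feature flag, and exemplars ride along in the OpenMetrics exposition format as a `# {...}` suffix after a sample (the metric name and trace ID below are made up):

```shell
prometheus --enable-feature=exemplar-storage
```

```text
http_request_duration_seconds_bucket{le="1.0"} 7 # {trace_id="4bf92f3577b34da6"} 0.89 1670000000.0
```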
A: Yeah, that's great. And obviously we see that not just with Prometheus; we see it also with OpenTelemetry, which adopts it as part of the specification, and others. I think this is an essential building block for getting these signals together and enabling the correlation. By the way, we should say it's essentially an identifier: a trace ID is the classic example, but it doesn't necessarily have to be a trace ID, right?
A: It could be a log, but again, using an ID to jump to the relevant sample logs. But yes.
A: That's true, if we follow the best practices of embedding the contextual logs within the spans. Many organizations, unfortunately, are not there yet, so yeah, that's an important point to make. The next part that I would like to talk about is PromQL. This is the query language that Prometheus developed natively, and it was actually pretty novel back in the day, because everyone was used to Graphite at the time, or similar tools that used a hierarchical model, while PromQL came with a labeling system that was much more flexible: a better fit for high-cardinality metrics, for slicing and dicing with ad hoc queries by different dimensions. So could you give us a bit of the background on PromQL, and then where we're heading, where we stand with this aspect?
B: Yes. Basically, well, you have the data model, as you said, in which metrics all have labels, which are key-value pairs, and then you need something to query the data that you have. PromQL is really simple to start with, so you don't need to know a lot to start getting, say, the rate of your HTTP requests.
B: But the idea is that it is a powerful query language, so that you don't just get your data out of your server: you can do the relevant calculations, so that you can extract really what is important to you and your infrastructure. Those labels are also fully integrated into the way that PromQL works, so you can indeed select based on some data centers, based on some nodes, and also do aggregations, like: I want the HTTP requests per data center.
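The two steps mentioned here can be written roughly as (metric and label names are assumed):

```promql
# per-second rate of HTTP requests over the last 5 minutes
rate(http_requests_total[5m])

# the same, aggregated per data center using the label model
sum by (datacenter) (rate(http_requests_total[5m]))
```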
B: Depending on the shape of your data, we have trigonometric functions, as we have discussed. So we are bringing new functions to PromQL that we think are useful for the users, and it's really important to us to continue extending the language, without adding everything that's possible, of course, because we still want to be able to understand it and to support the users. So it's not like we will add hundreds of functions, but we really try to keep up with the usage people are making of Prometheus and try to integrate that.
A: That's, I think, something that is very prominent in the way the project has been run: it is very much driven by the community, by the actual use cases and by what's being put to use, even things that, as you said, you and I maybe would not have anticipated in advance. When you see something again and again, for example the trigonometry, then yes, there is a demand; there are valid use cases that apparently are more frequent and popular than we assumed, and the community gets served. I think that's amazing.
B: We also added support for a negative offset. When you select your data, you can offset the selection by an amount of time, and now we have added support for a negative offset too. It's not really a big thing, but it's the kind of small quality-of-life improvement that we make over the years, because when you start using Prometheus and you see something you want to try out and it doesn't work, it's quite frustrating.
B: So we also try to bring those smaller improvements to the community. The next change is the @ modifier, which means that you can have more complex queries. If you want, for example, to graph over time the top five applications by the CPU they are using right now, you can use more complex queries to do that kind of thing with the @ modifier. You can do a selection of metrics based on a calculation that you would do at the end of your current query range.
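Sketches of both features (metric names are assumed; in older releases each sat behind a PromQL feature flag before being enabled by default):

```promql
# negative offset: shift the selection forward in time
http_requests_total offset -5m

# @ modifier: rank series as of the end of the graph range,
# but plot their rate over the whole range
rate(http_requests_total[5m])
  and
topk(5, rate(http_requests_total[5m]) @ end())
```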
B: So there are really a lot of different improvements that we are making to the language, and they are completely optional, so you don't have to use them. You can still write your perfectly simple rate queries if you want, but we are also there for the power users who need more advanced queries, and we don't want to rely on features of, for example, Grafana, on their own features. The @ modifier was possible to emulate in combination with some complex things on the Grafana side, but now we have the same capabilities directly in PromQL.
A: Yeah, sounds good. And we started mentioning the ecosystem around Prometheus, like Grafana and others. I think one very important recent update in the Prometheus ecosystem is the contribution of PromLens to the Prometheus project.
A: We talked about the power of PromQL, but sometimes it can be a bit complex to build elaborate queries just textually, and having some visual aid and UI can be significantly beneficial. That was the project PromLens, which Julius Volz from PromLabs had developed for quite some time, and now it's been open sourced and contributed to Prometheus, as a native citizen of Prometheus, by Julius Volz, PromLabs and also Chronosphere. So, first of all, kudos to them for this very significant contribution. Maybe you can give us some more information about it.
B: Yes. So PromLens is a query builder, but I would rather call it a query explainer. Basically, you type your PromQL query, and it will run it and explain to you what it is doing. That is one of the nicer skills of PromLens, because a lot of people using Prometheus have hit that issue at some point: you have a query and it does not return anything.
B: If it is a complex query, it's not always easy to figure out why it's not returning anything, and that's where PromLens can help you, because when you type your query, it will not run your query all at once; it will run all the different parts of the query and tell you: okay, at that step of the query I still have 20 results; when I compare with that other part...
B: Oh, now I have zero results as the output. So you can easily find which part of the query is generating output and at which point the output is dropped. That's the main use case for PromLens: really understanding, okay, why don't I have anything coming out of my query? And I think it's kind of a game changer in the way that you can work with PromQL queries that you don't understand or that are more complex, because you directly see: okay, I have a zero there, so I know that's where I have to look.
B: So: what's on one side, what's on the other side, what doesn't match? It's really nice to see that coming upstream, and some parts of PromLens will probably be directly available inside Prometheus. Not everything: in PromLens, for example, you can share your queries, you can do a lot of things. But the main path to debug your PromQL queries is probably coming in the following months inside the Prometheus server.
A: Nice, that's important. And so it will be available, in terms of the UI, as part of the Prometheus UI? Yes? Excellent.
A: Actually, we haven't talked about the Prometheus UI itself. Everyone knows Prometheus as this infrastructure element for scraping, for time series and so on, but Prometheus does offer a UI. Many use Grafana and others, but it's worth noting: I think there are quite a few good changes that were made as enhancements, whether ones that make it friendlier, like a dark mode, or something more significant, like auto-completion of the PromQL syntax, and others. So tell us maybe a bit about things that people might not be aware of that are available in the Prometheus UI.
B: The UI has seen a lot of changes. We basically replaced the complete UI in the last few years, moving to a React-based UI, and what is really important is that now we are working on the performance of the UI. For example, the targets page is no longer loading everything in one go; we are loading every job separately, one after the other, because some people run Prometheus at crazy numbers, with thousands of targets, and we want to provide them with a fluent way of using the UI.
B
The new UI also brings direct support for completion of PromQL queries, based on the metrics that you have in your Prometheus server. So you can start typing your query and it will give you suggestions or complete your metric names. It's really nice to see that you are helped a lot when you write a query in Prometheus now. We have also seen some third parties using the same library that we have put together to do the autocompletion, because PromQL is exposed by some cloud provider tools now as well.
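As an illustration of why completion helps, consider a typical query built from standard node_exporter metric names (the metric here is just the usual example, not something discussed on the show):

```promql
# CPU usage per instance, derived from the per-mode counters
100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])))
```

With metric-name and label completion in the expression browser, each of those names can be picked from suggestions instead of being typed from memory.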
B
So it's really nice to see that used more and more, and that it's becoming easier to write PromQL queries. As you mentioned, it's an interesting story, actually. There is a dark mode button, but it was contributed by the community, not by someone from the maintainer team. It is the kind of request where, when you see it the first time, you're like: yeah, this will be too much work.
B
It will never be finished, or it will never even start — and then the community works together with the Prometheus team, and at the end it's the kind of thing that happens, even if at the beginning you really wonder: will there be someone to maintain that? Because you think it's easy, but it's actually quite difficult to get everything right.
B
So it's really nice from the community to have support for that kind of feature, because, you know, Prometheus is still a very small project when you compare it, for example, to Kubernetes, because we have a much narrower focus — on monitoring only — and we try not to go in every direction. It's really a small project when you look at the big open source projects out there. So yeah, it's really nice to see that the community is mature enough to produce that kind of output.
A
And we mentioned before — you mentioned Grafana a few times. Grafana is, I think, maybe the most popular visualization front end for Prometheus. It's not part of the CNCF stack; it's provided by Grafana Labs as an AGPL-licensed open source project, and I know that many use it. But, on the other hand, maybe people wonder what is going to be the CNCF's native solution for that.
B
Yeah, so basically, when you look at Grafana, it's a really nice open source project, but it is not part of the CNCF, and it is managed by a single company, which decides what it wants to do with its trademark and what it wants to do with its software. Just as Prometheus is a multi-company project, I think it is nice to see an open source alternative to that visualization tool, and we have seen the rise of Perses recently. It is still early on; it is still a work in progress.
B
Currently it is under the Linux Foundation directly, but who knows what the future will be for Perses. I think it's really important to have some choice when you are using Prometheus — that you don't rely on only one visualization solution, but that you can work with many of them.
A
And maybe even make some sort of an agreed, open specification that can enable moving between tools — I think that's one of the things that was mentioned. I was exposed to Perses primarily at the recent PromCon, when I saw the talk, and although it was a short one, it got me intrigued. So I think there is a lot of potential there, and it is definitely a project worth following up on.
B
I think the key thing is that you should have the choice of the solution that you use, and it's also something that has always been in my mind, and in the mind of the Prometheus maintainers: we have never designed a feature specifically for one visualization solution in particular. We always try to be agnostic to that, and we really try to avoid some kind of caching mechanism that would be very specific to that kind of thing.
A
And I think this is the power. That's why I said that Prometheus, for me, goes beyond just a very successful tool; it is also a de facto standard in many ways, because of this approach that turned the Prometheus APIs into some sort of a specification that others can implement. So you don't need to have Prometheus per se: you can work with other open source projects, or even closed source alternatives, as long as they provide the same API support and, obviously, the query language support. And I think another very successful
A
example that emerged from the Prometheus community is OpenMetrics, the exposition format that spun out into its own standalone open source project under the CNCF, which standardizes the way to expose metrics irrespective of Prometheus. Obviously, Prometheus is a big consumer of that, but it's now no longer just with the focus of Prometheus in mind, right?
A
Yeah, definitely. And another very important element in the Prometheus stack is the Alertmanager, which provides, obviously, the alerting function on top of Prometheus. Can you give us a bit of background on the Alertmanager, its uses and where it currently stands?
B
The Alertmanager is a core project of Prometheus, and it is the part that will actually make sure that you know about the alerts. It is the part that will knock on your door and say: oh, you know, you have a server that's down. Well, it cannot actually knock on your door unless you write a bot to do that, but it can talk to you via many receivers.

B
And now we are adding support for Cisco WebEx, which is another receiver used by Alertmanager users: they want to get rid of their third-party integrations and be natively integrated into the Alertmanager. So it's really nice to see that we are now integrating all those new receivers directly into the Alertmanager.
B
We are also adding support for time-based muting per receiver. So if you have an on-call team that should only receive alerts overnight, you can now do that natively in your Alertmanager. And we are also adding a quality-of-life improvement with negative matchers: instead of saying "I want those alerts to go to that receiver", you can say "everything that's not dev goes to the on-call team".
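A minimal sketch of what such routing could look like in an Alertmanager configuration — the receiver names, the `env` label and the time interval are made up for illustration:

```yaml
route:
  receiver: default
  routes:
    # negative matcher: everything that is NOT env="dev" pages the on-call team
    - receiver: on-call-team
      matchers:
        - env!="dev"
      # suppress notifications for this route during the listed interval
      mute_time_intervals:
        - daytime

time_intervals:
  - name: daytime
    time_intervals:
      - times:
          - start_time: "08:00"
            end_time: "20:00"

receivers:
  - name: default
  - name: on-call-team
```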
B
The upcoming release will also bring you a significant memory reduction if you are using the Alertmanager. So we are also working very hard on the Alertmanager itself, to make it more useful and to adapt it to what users are actually doing with it.
B
Yeah, so one of the strengths of Prometheus is that it works with plain text: you can use curl, or your local browser, and see the list of metrics, and any software that can expose an HTTP endpoint can expose metrics. We have not done that for MySQL yet, so MySQL cannot expose metrics natively. So in some cases you need a small piece of software that, on one side, talks to your software and, on the other side, talks to Prometheus.
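What that plain-text exposition looks like, using a hypothetical host running the node_exporter on its usual port:

```text
$ curl -s http://localhost:9100/metrics
# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.42
```

Any HTTP endpoint returning lines in this format can be scraped directly.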
B
So that's what is called an exporter, and it's also the case for operating systems. For example, the Linux kernel did have an HTTP server in Linux 2.4, but that's long gone, and it could not export Prometheus metrics anyway. So you need that kind of exporter to be, on one side, very knowledgeable about the software, and, on the other side, to convert that into Prometheus metrics that you can scrape.
B
In the exporter world, we have the prometheus-community project, where people can share their exporters and have them taken over by the community: instead of being under one single user's GitHub namespace, we put them in the prometheus-community GitHub organization, if there is a need for maintainers, for example. It's a common place for the exporters. One of the most popular exporters that we have there is the Windows exporter, which has gone through a long journey, from the WMI exporter to now the Windows exporter.
B
It
can
actually
do
more
than
Windows
because
it
can
monitor
your
MSS
keyword,
for
example,
that
is
running
on
windows.
So
it's
really
an
export
of
issue
of
Windows
machine
Windows
Services
have
a
look
at
the
exporter.
That
exporter
has
been
worked
on
by
a
large
community
of
users
and
we
are
working
towards
making
it
an
official
exporter,
so
it
will
be
completely
under
the
point
is
governance
so
it
will
be
better
aligned
with
the
goal
of
the
coin.
This
project
we
will
have.
B
It will be easier to talk to the maintainers, and it will also be easier for the users, because they will find it directly on the Prometheus download page: next to the exporter for Linux, which is the node exporter, they will have the Windows exporter directly available.
B
Another exporter that we are improving is the MySQL exporter. We are bringing multi-target support to the MySQL exporter, which means that you will no longer need to run one instance of the MySQL exporter for each database: you will be able to run just one for ten different databases, if you want to. That's also something that comes with the maturity of the project: we can do those kinds of quality-of-life improvements for the administrators that are using Prometheus.
A
Sounds
good
and
again,
it's
I
think
it's
the
mindset
of
supporting
production
use
cases
so
the
one
that
runs
with
the
cluster
with
many
databases
and
still
want
to
use
permitters.
They
need
an
easy
way
to
Aggregate,
and
this
is
a
very
useful
one.
I
wanted
to
say
a
proposed
supporting
a
production
environment
and
maybe
Enterprise
grade
use
cases
there's
been
a
need
for
long-term
support
and
I.
Think
your
you
and
your
team
jump
into
this
Gap
and
and
took
charge
of
this.
Can
you
say
a
word
about
the
long-term
support.
B
Yeah
so
like
we
have
mentioned,
we
released
2.40.0
with
Native
Instagrams,
and
then
they
have.
They
are
seven
different
patch
releases
in
less
than
six
weeks
right,
because.
B
Was
less
than
six
weeks
ago
so
when
you
make
that
kind
of
breaking
changes
in
the
code
that
you
touch
a
lot
of
different
Ghostbusters
and
even
if
you
have
a
lot
of
testing
committees
well,
there
is
always
with
new
features.
New
risk
coming
and
from
it
is
releasing
every
six
weeks
and
for
many,
so
everyone
is
a
software
company,
but
not
everyone
is
a
natural
company,
so
upgrading
every
six
weeks
or
every
week
is
not
that
easy
for
everyone.
B
So
we
have
decided
to
take
certain
information
to
say:
okay,
instead
of
releasing
every
six
week,
we
will
just
like
keep
that
3D
is
available
for
a
year,
so
it
was
six
months.
We
have
extended
that
two
years
we
have
a
LTS
version
of
ridges
that
enterprises
can
use
and
we
are
going
to
support
it
for
one
year
completely,
so
they
will
get
back
fixes
and
security
fixes
for
a
year.
B
So
you
don't
get
all
the
shiny
new
features,
but
when
you
are
in
an
Enterprise
anyway,
it's
not
always
about
a
new
feature
but
more
about
the
reliability
and
the
rest
that
you
take.
So
we
are.
It
is
completely
Upstream,
so
we
work
with
the
community.
We
work
with
other
maintenance.
You
don't
have
to
pay
to
use
it.
Of
course,
it's
completely
on
the
permittee's
website,
but
it's
available
for
one
year
completely
supported
release
affirmities
we
will.
B
If
you
want
the
new
releases,
you
will
still
have
to
wait
for
a
few
months
and
then
you
put
another
LTA
freeze
and
then
you
will
be
able
to
jump
from
one
LTS
to
the
other
quite
easily.
So
even
if
I'm,
creating
information
is
always
should
always
be
like
worry.
Free
like
it
should
be
quite
simple
to
upgrade,
but
still
getting
earliest
release
is
really
helpful
for
those
customers.
They
don't
need
all
the
shiny
features.
They
need
a
working
Primitives
all
the
time.
B
So
that's
the
Target,
and
there
are
quite
a
few,
though
that
are
using
it
already.
So
we
are
quite
happy
with
the
outcome
with
the
lt3s
and
we
we
are
working
to
bring
a
newer
test
in
the
coming
months.
So
it's
really
it's
really
nice
to
see
that
coming
with
the
maturity
that
we
can
also
listen
to
the
users
like
what
upgrading
every
six
weeks
here
you
know
in
my
company
it
takes
two
months
just
to
get
a
VM.
So
what
are
you
talking
to
me
about
changing
a.
B
What I notice as a maintainer is that sometimes we have some quite impressive bugs, and they only get reported one or two months after the release, and you are like: how many people are hitting that bug and just not speaking up in the GitHub issues, not telling us about it? There is a really long time between when you publish a release and when people actually use it in production. That's also where the LTS helps: we know that, okay, this is a quite stable release.
A
Yeah,
that's.
That
sounds
like
some
many
many
have
dreamed
about
this
for
for
some
time,
it's
great
to
have
that
as
part
of
the
upstream
and
way
to
go
to
you
to
you
and
your
team
for
actually
taking
this.
That's
amazing
I
want
to
maybe
in
the
bit
of
time
that
is
left
talk
about
another
important
part
of
the
ecosystem,
around
Prometheus,
which
is-
and
you
mentioned
that
a
few
times
over
the
the
court.
The
discussion,
Thanos
cortex
Mimi-
are
sort
of
the
ecosystem
of
of
long-term
storage.
A
Let's
call
it
that
that
provides
sort
of
a
back-end,
scalable
back
end
for
Prometheus
I.
Think
the
original
intent
was
that,
or
the
original,
let's
say
constraint
in
a
way
was
that
Prometheus
by
Design
is
a
single
node.
You
mentioned
the
Simplicity
as
a
core
core
value
in
in
the
in
the
community,
so
obviously
it
has.
It
has
its
limitations
with
the
vertical
scaling
and
when
people
started
there.
B
So,
basically,
you
can
scale
parameters
vertically
you.
We
have
seen
people
actually
scaling
a
lot
like
we
have
I've
seen
people
with
more
than
one
terabyte
of
ram
from
which
is
with
many
millions
of
Time
series.
But
still
yeah
committee
is,
as
I
said,
we
are
a
monitoring
solution
and
if
you
introduce
some
kind
of
distributed
system,
it's
a
lot
of
new
trade-offs.
You
need
to
deal
with
and
it's
a
lot
more
complex,
and
so
we
are.
We
have
the
remote
right
protocol
that
can
write
to
a
lot
of
different
backends.
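Wiring Prometheus to such a backend is a small addition to prometheus.yml; the endpoint URL below is just a placeholder for whichever long-term storage system receives the samples:

```yaml
remote_write:
  - url: https://long-term-storage.example.com/api/v1/receive
```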
B
Some
of
them
are
based
on
databases.
You
have
Chrome
scale
that
will
dump
your
data
into
postgresql
storage
engine.
You
have
cortex
mimir
antennas.
They
work
in
the
Prometheus
way
of
dealing
with
the
data,
which
means
that
they
will
write
the
same
blocks
as
more.
It
is
why
more
or
less-
and
they
will
use
external
solution
to
store
it
like
a
S3
bucket,
which
takes
a
lot
of
Maintenance
away
well,
which
makes
it
a
lot
less
costful.
B
With
regards
to
the
storage
that
is
needed,
but
it's
a
lot
of
trade-offs
and
inside
of
the
cncf
there
used
to
be
two
major
players
there,
so
cortex
antennas.
They
are
still
there,
but
now
there
is
a
mini
which
has
been
out
of
Cortex
into
grafana
Labs,
a
GPA
license
and
GitHub,
so
yeah.
The
future
is
quite
interesting
for
the
long
term
Solutions
we
are,
we
are
seeing
lots
of
customers
running
internals,
some
customers,
and
we
also
have
a
few
that
are
running
cortex.
So
it's
really
interesting
to
see.
B
All
of
that
will
evolve
in
the
future,
because
every
solution
or
the
each
kind
of
trade-off-
and
it
is
in
for
a
certain
public
and
it's
interesting
to
see
what
will
happen
in
the
future,
because,
as
though,
as
in
this
case,
communities
is
actually
the
downstream
project
and
we
are
just
like
seeing
what's
going
on
in
the
ecosystem
and
we
just
try
to
work
the
best
as
best
as
we
can
to
the
other
players.
A
Yeah-
and
you
mentioned
the
the
community,
then
I
think
it's
important
to
mention
for
our
audience
the
different
places,
the
SLA
GitHub
events.
Irc.
Do
you
want
to
mention
where
people
can
join
the
conversation
around
for
me.
B
Yeah,
if
you
want
to
join
the
conversation,
you
can
go
to
promoteus.io
community.
You
have
all
the
information
to
the
select
DFC
channel.
We
have
some
events,
so
if
you
go
to
pomcon.io,
you
have
the
conference
as
well,
and
if
you
need
help
and
promoters,
you
can
join
the
mailing
list
or
the
discourse
The
Forum
that
we
have
so
just
find
us
discuss
with
us.
We
are
always
open
to
the
discussion.
B
Also
importantly,
we
hope
if
you
are
interested
in
to
develop-
and
we
also
have
developer
Summits,
which
are
completely
open
and
online,
so
you
can
also
join
the
conversation
there.
We
really
want
to
expand
Primitives,
also,
the
team
and
the
community
to
more
diversity,
more
people,
more
voices,
that's
very
important
to
us
to
know
exactly
how
we
can
improve
the
project
and
stay
relevant
because
it's
not
like
I
want
to
live
in
my
own
bubble,
but
we
really
want
to
expand
and
to
welcome
everyone
discuss.
B
A
I highly recommend PromCon — the one that was in Munich, in Germany, was excellent — and also, around KubeCon, we had the Prometheus Day at KubeCon North America. We'll need to see the format for the next KubeCon North America.
A
Maybe
it
will
Unite
with
with
some
others
around
observability,
but
generally
speaking,
if
you
are
in
one
of
these
conferences,
an
excellent
opportunity
to
see
people
also
face
to
face,
if
you
are
there
in
person
and
and
get
the
the
vibe
of
the
community,
a
very
Vibrant,
Community
and
and
lots
of
things
going
on.
Also,
maybe
you
Julian,
if
you
want
to
mention
how
people
can
reach
out
to
you
after
the
show.
B
Yeah,
so
you
can
reach
out
to
me
at
at
11y.eu,
so
you
can
also
find
me
on
different
social
medias
or
on
GitHub.
So
you
can
also
check
on
my
company
early
11y.eu,
so
I
think
I
will
be
also
linked
somewhere
in
the
description
where
you
can
just
find
find
us.
If
you
need
to
to
have
some
committee
support,
we
can
help
you
with
that.
B
A
So
we
we
ran
out
of
time,
actually
decided
to
skip
the
breaking
news
for
this
episode,
because
we
had
so
many
interesting
things
to
cover
with
you
on
the
chat
that
I
didn't
want
to
waste
any
of
the
time.
So
apologies
for
our
listeners,
but
I'm
sure
it
was
worth
the
while
and
I
also
will
add
for
the
show
notes.
A
The
prom
con
EU
2022
talk
that
Julian
provided
I'm,
also
putting
it
here
on
the
for
the
YouTube
and
twitch
audience
so
it'll
be
also
on
the
show
notes
and
the
other
references
that
will
help
you
get
going
with
both
the
updates
that
we
mentioned
briefly
and
also
on
on
getting
in
touch
with
the
community
and
getting
involved,
because
it
is
an
amazing,
Community
Julian.
Thank
you
very,
very
much
for
joining
me
on
this
episode.
Yeah.
A
Thank you very much. We need to do more about Prometheus and not wait so long before the next update, so I'll definitely keep in touch and make sure that we cover it more adequately. And with that, I'd like to also thank our listeners for joining our episode. As always, all the episodes are available on your favorite podcast apps and also on YouTube, so do check it out, whether you're here on the live stream or not.
A
If
you
are
listening
on
the
podcast,
then
do
know
that
we
stream
the
episodes
live
on
Twitch
and
YouTube
live
so
find
all
the
details
on
on
our
Twitter
page
at
open,
observe
for
when
the
live
streams
occur
or
follow
me
at
horvitz,
h,
o
r,
o
v.
A
I
t
s
and
obviously
also
share
your
comments,
your
suggestions,
especially
for
the
new
year
for
the
2023,
if
you
have
any
suggestions
and
things
that
you
would
like
to
see
or
if
you
have
something
that
you
would
like
to
talk
about
that
your
subject
matter,
expert
in
and
you
can
share
with
the
community,
whichever
feel
free
to
reach
out
in
whichever
way.
Thank
you
very
much
for
listening,
I'm,
Dotan
horvitz
and
see
you
next
month
on
the
2023
first
episode
of
the
Year.
Thank
you
very
much.
Everyone.