Description
In this session, Brian Brazil, Founder of RobustPerception.io and core Prometheus developer, explains the core ideas behind Prometheus: how to get useful metrics from your applications, process that data, alert on what matters, and create dashboards to aid debugging on OpenShift.
Presenters:
Brian Brazil – Founder, RobustPerception.io
A
Well, hello and welcome, everybody, to another OpenShift Commons briefing. This time Brian Brazil, one of the core contributors to the Prometheus project and the founder of Robust Perception, a new member of the OpenShift Commons, is joining us today, and he's going to give us an overview of Prometheus.
A
So the format of this is: he's going to take it away, give a bit of a presentation and a little bit of a demo of working with it in the context of OpenShift, and then we'll have an open Q&A afterwards. You'll notice, because we're using BlueJeans, there is a chat, so you can ask questions in the chat. I will try and repeat them all in the Q&A and open it up for conversation afterwards. So with that, Brian, take it away, and thank you very much for joining us today.
B
Well, thank you. Yeah, there's a little bit of an echo here. So I'm one of the four core developers of Prometheus. I founded Robust Perception, which does consulting and support around open source. I've contributed to many open source projects over the years, and I worked at Google for a while as well, here in Dublin.
B
So Prometheus was founded in 2012 by Matt Proud and Julius Volz, who were located in Berlin at the time. It started as a side project, and in 2013 they took it into SoundCloud, where they were working at the time. It expanded to support Bazooka, which you can consider to be kind of like OpenShift, and they added instrumentation clients for Go, Java, and Ruby, which was what was used inside SoundCloud at the time.
B
In 2014 we got a new storage system, the v2 storage, and a new text format, which we're still using today. Then in 2015 we publicly launched; previously everything was on GitHub and it was there, but we didn't really tell a lot of people. Since then we've seen quite an uptick in usage. To give you an example, there are about 300 contributors to the core repositories.
B
We actually have over 150 third-party integrations, and we've got several hundred people on the mailing list and IRC. It's always hard to tell with open source software, but we figure there are over 500 companies using Prometheus in production, and there are several companies, including my own, funding Prometheus development. In fact, there was a recent stat that came out as well that Prometheus is 56th in the world for open source contributions; sorry, the 56th biggest project, which is a bit of a surprise. Right on top is Linux itself.
B
So Prometheus: what is it? It's a metrics monitoring system. It doesn't do logs, it doesn't do tracing, it's not a profiler. It's, at its core, a time series database with a query language. It has client libraries to get the data in, and a general ecosystem, and in general it takes a cloud native approach to monitoring services.
B
If you look over the last 20 years of service management, it used to be the case that your sysadmins were manually configuring everything on the machines. Then we went to things like Chef, and then we went to Kubernetes, which is a step up again: we're moving ourselves further and further away from the individual machine, the individual process, and we need to do the same thing for monitoring.
B
We have to look at what's the ROI of looking at a particular alert. So, in terms of integration into Kubernetes, say OpenShift, Prometheus can discover all the pods, services, everything automatically from the Kubernetes cluster, and pull in all the labels and annotations, because both Kubernetes and Prometheus have a label-based data model, so that meshes nicely, and Prometheus will automatically pick up the changes.
B
So you can just set it up once, in principle, and have everything happen via magic from there on in. To add your actual application, you need to instrument your code to capture the metrics that matter to you. Because, sure, it's great to know the CPU works, and we can do that out of the box, but you care about, hey, how many times is this cache being hit, or how many orders are paying versus non-paying, that sort of thing.
B
And the great thing is, if everyone instruments, and you're using libraries which are instrumented well, you get that for free. To get instrumentation for things that aren't instrumented, because it's not likely we'll get our code inside MySQL, there are exporters, which will convert between them. So there are lots of integrations there: SNMP for your network stuff, Consul, JMX, HAProxy and, of course, Minecraft and Factorio.
B
If you look across the Cloud Native Computing Foundation, to give you an idea of the sort of integrations that are out there and how prevalent Prometheus integrations are: Kubernetes itself is instrumented with Prometheus, so we integrate with each other, and you can monitor the health of the cluster. etcd has metrics in our format, and there are interceptors for gRPC and plug-ins for Fluentd.
B
We have the PromQL query language, which can do basically any math you want on time series data. It can aggregate, it can join series together, it can slice things, and it can do some predictions, like how close you are to your quota. "I'm going to run out of space in four hours" is more interesting than "am I at 95 percent of my quota", because it could be at 95 for years. Or it can aggregate across an entire data center for latency.
B
So this is taking an example of CPU usage in Docker containers, getting that per container, and summing that up by Docker image: not by machine, not by service, not by user, not by data center, but by Docker image, and getting the top five. So you can figure out which containers you might want to use some better optimization flags for, as an example. Then, okay, you've got your alerts.
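To make the shape of that aggregation concrete, here is a small plain-Python sketch of what "sum CPU usage by Docker image, then take the top five" computes; in PromQL this kind of query is typically written with `sum by (image) (...)` inside `topk(...)`. The label and image names below are invented for illustration.

```python
from collections import defaultdict

def top_k_by_image(samples, k=5):
    """Sum per-container CPU usage by Docker image, then return the
    top-k images: a plain-Python analogue of a PromQL
    topk(k, sum by (image) (...)) aggregation."""
    totals = defaultdict(float)
    for labels, cpu_seconds in samples:
        totals[labels["image"]] += cpu_seconds
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Per-container samples: (labels, CPU seconds used over the window).
samples = [
    ({"image": "webapp", "container": "webapp-1"}, 4.0),
    ({"image": "webapp", "container": "webapp-2"}, 3.5),
    ({"image": "redis", "container": "redis-1"}, 1.2),
    ({"image": "postgres", "container": "db-1"}, 6.1),
]
print(top_k_by_image(samples, k=2))  # webapp first (7.5), then postgres (6.1)
```

Prometheus does this server-side over live series, so the same query keeps working as containers come and go.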
B
It's also designed to work reliably during outages. The whole architecture is built around reliability, such that even if your network is falling apart, we will do our utmost to make sure you get your notifications.
B
There are some links there as well; these slides will be up later, so you can look at them. So I'd like to just show you a bit of a demo. You may wonder how easy it is to instrument things with Prometheus. We're going to use an example from Python; this is included in client_python, at the top. So I'm going to copy and paste this off screen and run it. Okay, that's running. This is just importing the basics and tracking request time.
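The demo uses the client library, but the idea underneath is small enough to sketch in plain Python: keep the numbers in memory and render them in the text exposition format that Prometheus scrapes (the real client serves this over HTTP on `/metrics`). The metric name here is illustrative, not taken from the demo.

```python
# Toy in-memory state for one summary metric. The real
# prometheus_client library provides Counter/Summary/Histogram
# classes; this sketch only shows the exposition format.
REQUEST_COUNT = 0
REQUEST_SECONDS = 0.0

def observe_request(duration_seconds):
    """Record one request's latency (what a Summary timer automates)."""
    global REQUEST_COUNT, REQUEST_SECONDS
    REQUEST_COUNT += 1
    REQUEST_SECONDS += duration_seconds

def render_metrics():
    """Render the metric in the Prometheus text exposition format."""
    return (
        "# HELP request_processing_seconds Time spent processing requests\n"
        "# TYPE request_processing_seconds summary\n"
        f"request_processing_seconds_count {REQUEST_COUNT}\n"
        f"request_processing_seconds_sum {REQUEST_SECONDS}\n"
    )

observe_request(0.25)
observe_request(0.75)
print(render_metrics())
```

Prometheus then scrapes that text endpoint on an interval and turns each line into a time series.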
B
So now we can look at OpenShift. Here we've set up cAdvisor, which is what you should be using to get all your container stats, and Prometheus as well, running here. Now, what's interesting: if you look at the config file (these are just the changes, so it's a little longer than usual), this has not been hard-coded to know where things are. It's actually dynamically pulling in everything from Kubernetes, and then, just looking at the nodes, it's finding all the cAdvisors and using them.
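A scrape configuration along those lines uses `kubernetes_sd_configs` instead of hard-coded targets. This is a sketch, not the exact file from the demo; the job name and relabeling are illustrative:

```yaml
scrape_configs:
  - job_name: 'kubernetes-nodes'
    kubernetes_sd_configs:
      - role: node        # discover every node from the Kubernetes API
    relabel_configs:
      - action: labelmap  # carry Kubernetes node labels onto the metrics
        regex: __meta_kubernetes_node_label_(.+)
```

Because the target list comes from the API, nodes that appear or disappear are picked up without touching the config again.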
B
So
you
end
up
also
discovering
your
c
advisors,
also
discovering
all
your
machines
noise
support.
You
get
machine
metrics,
so
as
machines
are
added
and
removed,
it'll
automatically
show
up
now
here.
I've
just
got
oc
running
on
my
desktop,
so
there's
only
one
node,
but
this
approach
would
work,
no
matter
how
many
machines
I
had
and
it
automatically
show
up
as
machines
are
added
and
removed.
So
there's
no
management
beyond
the
initial
setup.
B
Here's the disk utilization on my three disks, and the CD drive, which is empty, and all the file systems. It's also predicting here when my filesystems will fill up; as it turns out, on my home desktop nothing's going to fill up, everything's reducing in size, which is kind of handy for once. And Prometheus itself also exposes a whole ton of metrics. Now, this is a very deep dashboard that I actually use for benchmarking Prometheus.
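That "when will this fill up" prediction is a least-squares line fit over recent samples, extrapolated forward, which is what PromQL's `predict_linear()` does. A small sketch of the same idea in plain Python, with invented sample data:

```python
def predict_linear(samples, seconds_ahead):
    """Least-squares linear fit over (timestamp, value) samples,
    extrapolated seconds_ahead past the last sample: the same idea
    as PromQL's predict_linear()."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_v = sum(v for _, v in samples) / n
    slope = (
        sum((t - mean_t) * (v - mean_v) for t, v in samples)
        / sum((t - mean_t) ** 2 for t, _ in samples)
    )
    intercept = mean_v - slope * mean_t
    return slope * (samples[-1][0] + seconds_ahead) + intercept

# Free gigabytes sampled once a minute, shrinking steadily.
free_gb = [(0, 100.0), (60, 90.0), (120, 80.0)]
print(predict_linear(free_gb, 4 * 3600))  # far below zero: full well before then
```

Alerting on the predicted value ("full in four hours") pages you earlier and with less noise than a fixed "95 percent used" threshold.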
A
There haven't been any questions in chat yet, so I'm going to ask a couple myself, and we were talking a little bit about this beforehand as well. How does the OpenTracing project surface in Prometheus, and how is it related to this project?
B
So OpenTracing and Prometheus are separate, because they're kind of different approaches to monitoring. Prometheus is all about metrics, so time series: we're tracking individual pieces of information over time. OpenTracing, by contrast, is about tracing, and what that means is we're tracking a single request as it passes through a distributed system. Because if you're getting into tail latencies and that sort of thing, or, hey, a cache was hit here, which causes all these knock-on behaviors, things can get very complex.
B
So when you've got something like OpenTracing, you could produce a graph that looks something like this and actually see: oh right, which servers, where are all the slow bits. Tracing does that, whereas Prometheus would more tell you that, hey, the 95th percentile latency is blah, and I'd need to use tracing to say, okay, give me an exemplar of when it's slow, and I can dig down into it. So we are actually planning on using OpenTracing inside Prometheus, largely for our queries.
B
We want something in memory that'll trace them, so we can say: hey, here are my slow queries, and a few other components. But it's kind of two different views of the same data. A metrics-based approach is going to look and say: right, you've got all these requests coming in; I'm going to throw away most of the information, because it's not efficient to keep it around, but I'll be able to tell you, for each path and each HTTP method, what the average latency is, what the byte sizes are, and so on.
B
So you can get an idea of the general trend. A logging-based approach will be tracking every single one of those requests, but it can't track every single function call that goes into them, because that'd be too much bandwidth. And then tracing is kind of a logs-based approach, but stitching together logs across machines to track a request through its entire lifecycle. That's roughly how the three different sides of monitoring, tracing, logging, and metrics, sit relative to each other.
A
Well, the reason that comes up for me: someone asked me the other day about integrating, you have Zipkin up here, but also, integrating Jaeger into Prometheus. It sounds like that's not something you would do?
B
The
address
some
similar
call
sites
you'd
want
like,
if
you
have
a
http
routing
the
http
root
of
your
code,
you
might
want
to
both
instrument
for
open
tracing
and
say,
hey
here's
a
span,
which
is
what
you
know.
Each
of
these
here
is
a
span
and
also,
at
the
same
time,
incremented
prometheus
metrics.
So
potentially
this
has
been
done
to
be
a
little
library
that
does
build,
but
you
wouldn't
necessarily
want
to
do
that
all
the
time.
But
that's
where
the
similarity
is
because
they're
fundamentally
instrumentation,
but
for
different
goals.
A
Okay, cool. Well, there are a couple of questions coming in, and the first one is: can you provide a comparative feature set for Prometheus compared with ELK and Grafana?
B
Okay, so there are a number of different tools there. Grafana is a dashboarding solution. It is what we recommend for usage with Prometheus, and these demonstrations here are all Grafana, so that's what we recommend, and we kind of see them as partners. We used to actually have our own dashboarding system called PromDash, but we deprecated it in favor of Grafana, because, well, why develop the same thing twice?
B
You might get 50 to 100 metrics, but with logs you can get every single individual request, and that's where ELK shines: it's the logs-based approach. So they're really complementary systems. The way it works is: use Prometheus, or a different metrics-based system, to figure out roughly where the problem is and drill down. Then you figure out: okay, I've drilled down, I can see things are slow, you've looked a bit, and it's all the billing endpoints.
B
Then I'm going to go over to ELK and see: all right, so who's been making billing requests, which ones are slow, it's these exact requests from these users; go over and chat with them. So they're very complementary solutions. You need a metrics system and a logging system.
A
I think you just answered one of the next questions: do you see Prometheus as a full monitoring solution for OpenShift? And I think, from your previous answer, you probably need both logging and metrics there.
B
You can also do blackbox monitoring with Prometheus, via the blackbox exporter. That's your very simple "is it turned on" probes: HTTP checks, ICMP checks, that sort of thing, and that's just something you can do with Prometheus out of the box. But you might also need a tracing tool, something based on OpenTracing, and you're also going to need profiling tools, like your GDB, your pprof, and all that as well, and source code.
B
The reason why is that Docker is trying to hide things about the machine, and the node exporter is trying to get information about the machine, so they're kind of in conflict there. So we recommend running the node exporter on bare metal. You can run it inside Docker; you just don't have quite as good stats. In terms of how to set it up, there's a blog post, and in general it's just tweaking things to your setup. There are examples inside the Prometheus documentation directory, for example.
B
So, usually, you would be instrumenting not classes or methods, but, like, if statements; at that level, because you're personally choosing what things are important to you at, like, a business or engineering level. If you're trying to instrument every method or every class, that's getting into profiling. Because, ultimately, in monitoring everything is an event: "this thing happened at this time".
A
So there's another question that just came in; that's a great explanation on the class/method one. Can you compare Prometheus with Heapster and Hawkular?
B
So those are two different tools (my screen is locking up). If you look at Kubernetes itself, the metrics are provided in the Prometheus format, and the thing is, though, that that's not great if you're running Graphite or InfluxDB, although InfluxDB now actually supports that as well, via Kapacitor. But let's say you're running Graphite and you want metrics out of Kubernetes: what Heapster does is go and pull all those metrics, take them, and put them into Graphite for you.
B
We don't want to rely on distributed systems, which does mean we're limited to the size of a single machine in terms of how much data you can store and how far back you can store it. But then we'd hand off to something else, maybe Hawkular, maybe Weave Cortex, maybe OpenTSDB, maybe InfluxDB, for the longer-term storage of data, so they're kind of complementary in that way. And of course Hawkular you can potentially use as well yourself.
A
So, since you're one of the core contributors to Prometheus, can you tell us a little bit about what's coming in the next release for Prometheus, and what's on the road ahead?
B
Well, the big things coming up: there are our point releases, like 1.7, which came out yesterday or today and is largely incremental changes, like OpenStack service discovery and a slight tweak to Kubernetes service discovery. But the big thing coming up is Prometheus 2.0, which we're currently working on, and that has a new storage engine which is far more efficient than the current one. Also, one of the things I was working on is new staleness handling, which is a semantic thing that's important.
B
Right now, if a target disappears, like you stop a pod, it'll take five minutes for the data points that Prometheus ingested to stop being returned. With the new staleness handling, that goes away much faster: it takes only one scrape interval, so that's much better for alerting. There'll be a few other changes as well; Prometheus 2.0 is the big one. Also, the Alertmanager 0.7.0 release, just out, has a new UI, which has all been redone.
A
So where are you looking for help on the Prometheus project?
B
So there are lots of places to help. There are lots of integrations: if you have some tool that you wish had metrics, you can add Prometheus instrumentation to it, or write an exporter if that's not possible for whatever reason. We've already got 150 integrations, and the more we have, the better. There are also the core repositories, of which there are like 25-ish, with lots of bugs that need fixing and features that need implementing, in a variety of languages, as well.
B
It also helps if you can help with user support, because as the project becomes busier, it's not always possible for the Prometheus team, which is about 15 people, to answer all the questions; the more people who help with community questions, the better. And documentation is always good: if you want to write a blog post on how to do something, that's great, because, once again, there are just four core developers, and about 15 on the team who are committers, and we can't do everything.
A
Well, there are a couple of folks asking more questions. Gary's talking about integrating Jaeger with OpenTracing soon, so hopefully he'll have something to blog about around that. Guillermo is asking: does Prometheus plan to support long-term storage of metrics?
B
So, as I mentioned just a few minutes ago, Prometheus itself is designed with reliability as the first goal, and reliability means that your alerts will be received. It does not mean that all your data is safe, so Prometheus itself is more of an ephemeral cache. It's not meant for long-term storage of metrics. So the question is: where do I store my long-term metrics, like stuff from a year ago, so I can do capacity planning?
B
For example, there's Weave Cortex, and Influx, which we're talking about supporting that way as well, and possibly some other companies. The idea is that that will be there, and because it's a distributed system it might be less reliable under network partitions and so on, whereas Prometheus itself keeps on alerting, and we can also read that data back transparently.
B
So, because OpenShift is Kubernetes, there was no work: there's already a Kubernetes integration for Prometheus. So we have integrations like service discovery, which is the main thing: figuring out where all the pods are, where all the endpoints are, where all the EC2 instances are.
B
So we've got about 10 of those now. OpenStack was the most recently added one, we've got ones for Marathon, and Nerve has been added, which is Airbnb's one, plus Serversets, or whatever the Twitter equivalent of that is. There are like 10 of these across different systems: Azure, and I think we've added GCE by now as well. But if we don't support your thing, because it's a little too specialized or no one's sent a pull request yet,
B
Yet
there
is
you
just
basically
chuck
this
json
with
a
list
of
all
your
targets
and
that
works
too,
but
everything
else
is
just
normal.
This
is
just
another
way
to
run
applications
because
service
discovery
is
the
only
thing.
That's
kind
of
special
all.
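That file follows the `file_sd_config` format: a JSON list of target groups, each with targets plus optional labels. The hosts and label values here are made up for illustration:

```json
[
  {
    "targets": ["10.1.2.3:9100", "10.1.2.4:9100"],
    "labels": {
      "job": "node",
      "env": "production"
    }
  }
]
```

Prometheus watches the file, so anything that can write JSON (a cron job, a deploy script) becomes a service discovery mechanism.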
A
Right, I think that answered Michael's question. There's one more request, for a little bit of a demo: can you demo how to alert on an item, since you have it up?
B
Okay. Well, let me just use this.
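For reference, a minimal alerting rule in the Prometheus 1.x rule format of the time looks like this; the alert name, expression, duration, and annotations are illustrative rather than taken from the demo:

```
ALERT InstanceDown
  IF up == 0
  FOR 5m
  LABELS { severity = "page" }
  ANNOTATIONS {
    summary = "Instance {{ $labels.instance }} is down"
  }
```

Prometheus evaluates the expression on an interval, and once it has been true for the `FOR` duration, the alert fires and is handed to the Alertmanager for routing and notification.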
That's all for me, but just google Prometheus, or robustperception.io will get you to me. All right.
C
I'm just curious how this is tested in OpenShift. I presume Prometheus has unit tests, and there's a whole testing infrastructure there; then it's further tested in the Kubernetes community; and then from there, presumably, it goes into the OpenShift community and more testing occurs; and then OpenShift QE does, you know, testing of the productized bits. So is that how the testing accountabilities lay out here?
B
So, actually, integration testing is one of those things that we'd love help on with Prometheus.
B
Yeah, in that case, yeah. But the problem is, with something like Azure service discovery we need to run Azure; with EC2 service discovery we need an EC2 account; with Marathon we need to set up Marathon. And having done all these integrations is a lot of work, and so is figuring out how to maintain them. So that's definitely something we would like assistance with, for those who are, you know, that way inclined.
B
Yeah, yeah. If you want to, the prometheus-developers list is there if you want to chime in, because I know it's something we really want to do; we've been bitten more than we would like by regressions.
A
There are definitely folks on the OpenShift team that I can point you to, Michael, if you drop me an email, who have been playing with, working with, and testing Prometheus for use.
A
Yeah, I'm not sure; we're not productizing Prometheus in any way at Red Hat, but I know people are using it and have been testing with it. So definitely drop me a note and I'll see if I can hook you up with them.
B
So it's normally, well, integration with external systems, such as service discovery, that's the tricky part, because for a lot of them the API docs aren't great, and the semantics of some of the fields they return aren't great; it's not clear exactly what's going on. And the error handling can be undocumented, and we only discover it when it breaks. Because, especially when you're talking to a cloud provider, like, you know, AWS or GCE, you can only discover some problems when they happen to someone.
B
So those are very hard for us to test, and you'd also have to get an account with everyone; you know, we're an open source project. On the other hand, we've got things like the node exporter, which is our machine monitoring agent, and so how do we test the FreeBSD integrations when none of us run FreeBSD, or the OpenBSD integrations, and all those sorts of questions? So, figuring out how to test all this stuff...
B
I know, because we're packaged in Debian by Martín Ferrari, that he's discovered a few bugs in our locking stuff, because it breaks on ARM. Those aren't tests we run ourselves, because we're all on 64-bit x86. So being able to expand our testing infrastructure would be quite useful, especially for the integrations with Kubernetes and OpenShift, because a lot of our users are on those.
B
Yeah. So Prometheus is an independent open source project; no one company controls it, and I'm one of the core developers. Then, separately, Robust Perception is my company, and we do consulting and support for Prometheus.
A
One more question came in, and actually it's a good one, maybe, to end on: now that Prometheus is under the CNCF, what benefits have you found being part of the CNCF? Is that helping grow the project?
B
Yes, it does help grow the project. It's easier to talk to the other projects as well, and all work towards the cloud native vision, because, you know, each of us is seeing similar issues, but in different ways, around people getting ready for the idea of cloud native and how you have to move away from the older mindsets, when every machine was special. It also helps a lot with marketing, and with just general advice on how to run an open source project, because there are some very experienced people there.
A
Yeah, we've found it's been pretty helpful too, for lots of things, and there are lots of new things coming under the CNCF that are all part and parcel of what we're trying to integrate with OpenShift and into the OpenShift ecosystem. It's been really helpful for us to have that connectivity. And will you be coming to the Austin event?
A
But hopefully we'll get some of these demos, with Jaeger and other things, through the CFP process this time for KubeCon in Austin, and we'll see some more integration work done.
A
Someone from us is going to be there, yeah; I'm sure. I know at least Alexis will be there; he can't get out of it. He's been demoing Prometheus through Weave for quite some time, so there'll definitely be a few folks. So, anyway, I want to say thank you very much; great presentation.
A
I will be putting this, and a PDF of the slide deck, up on the OpenShift blog at blog.openshift.com, and I will also dig around for all the references for the blog posts that Brian has mentioned and put the links to those there as well. It usually takes a day or two to edit the videos. Thank you all for your questions, and take a look at the events calendar on commons.openshift.org for the upcoming briefings; we're going to two a week.