From YouTube: 44. #EveryoneCanContribute cafe: SLO Management with Prometheus: Pyrra, Nobl9, OpenSLO
Description
Pyrra starts after the introductions at ~ 11:00
Blog: https://everyonecancontribute.com/post/2021-09-08-cafe-44-slo-management-prometheus-pyrra-nobl9-openslo/
Pyrra: https://pyrra.dev
Nobl9: https://nobl9.com/
OpenSLO: https://openslo.com
SLOConf YT playlist: https://www.youtube.com/playlist?list=PLLNq9CBV7AFwyRzICyCRKdcsAPAlG5bPu
SLO book: https://www.oreilly.com/library/view/implementing-service-level/9781492076803/
A: Hello again after a short summer vacation break, we are back in our #EveryoneCanContribute cafe, and today we thought about talking about SLOs and SLO management. It actually started with a fun tweet thread yesterday; I want to quickly share my screen so that you know what I'm talking about. Kit was asking for the "hello world" of SLOs, and I thought: well, let's do a PromQL query, because Prometheus and stuff. Then Matthias jumped in and said: hey, we should talk about Pyrra and about making SLOs with Prometheus manageable, accessible and easy to use for everyone, and there is a demo and everything else. I was like, okay, and I think at a certain point Nicholas jumped in and said: hey, we can totally talk about this today. So we made it happen. Maybe we will also touch base with Nobl9 and then OpenSLO later on. But for now I would just say: let's do a quick round of introductions. I'll just pick someone in the gallery; I'll pick Matthias. Who are you? Short introduction.
B: Who am I? Yeah, I'm Matthias, I'm based in Berlin. I work for a new startup called Polar Signals, and we actually don't do anything with SLOs as a product; we build performance tooling for continuous profiling, and that's what I do during my day job. But we also have to run stuff, and I previously worked at Red Hat, where we had to run a host of Prometheus instances internally, based on Thanos, and that's where, like with running anything really, SLOs really shine.
B: I learned about the SRE books and then came to know the amazing book that Alex Hidalgo wrote on SLOs. I got involved in the kubernetes-mixin and added SLOs there, and wrote slo-libsonnet, small jsonnet generator files for SLOs. So I've been around SLOs quite some time, but I have something new, and it's exciting to share it now. With that, let's go straight to Nadine, and then she can pick the next person.
C: Yeah, thanks. I'm working in the UX design team at Grafana Labs, and I started this open source project with Matthias to learn a little bit more about SLOs from the design perspective. I'm also based in Berlin. I will pick Nicholas.
D: Okay, yeah. What I am currently doing at work is only buzzwords: I do blockchain and Kubernetes, mostly for money, and of course we also need to monitor all that stuff. So I'm also curious about learning a bit about SLOs and about all the other topics.
F: Howdy folks, my name is Zach Nickens. I work for a company called OutSystems, where I run an SRE team across the US, the EU and APAC; I'm located in San Antonio, Texas. I'm one of the co-organizers of the SRE meetup, so I'm deeply interested in all things SLOs and all things SRE. I'm very excited to be here with everybody and to see the exciting new stuff that we get to see today. I'll throw it over to Kit.
G: Hey guys, Kit Merker here. I'm kind of a Twitter SLO guy, I guess, is the best way to describe it. But I've been involved with the SRE meetup with Zach, and I started SLOconf, if you're familiar with that; it was a little event we threw. I'm the CEO at Nobl9, and I'm trying to make sure that software runs reasonably reliably. I'll pass it over to somebody from my team: Sal.
H: Hey there, thanks Kit. As Kit said, I also work for Nobl9; my name's Sal, and I'm their first CRE. I started really getting interested in SRE when I read the famous Google book in about 2017 or so, and it really was the eureka moment for me in terms of how infrastructure should be run at scale. Fast forward a couple of years: I found this guy Alex Hidalgo on Twitter, started following him, and then I read his book. Next thing you know, I got a job at Nobl9 and I'm ending up here with all you folks. I'm going to pass it on here to "packet zero"; hopefully I'm reading that leet speak right. Yes, yes, that's...
I: Right, that's right. So my name is Benjamin, and what I mainly do is focus on security: I do Kubernetes and cloud security stuff. I'm currently still a student, doing my bachelor's in Bochum. That's what I currently do; mostly security for Kubernetes and cloud.
K: Okay, I have not used Zoom in half a year, so... I'm super surprised by the number of people here today. I'm Max, I'm also based in Germany, currently living in Görlitz. I'm working as a systems engineer; the Germans among you probably know the hosting company Hetzner, and that's where I work, helping to keep the cloud running. Happy to meet you all here. Next can be Johannes, or Johannes? I don't know. Johannes.
L: It's German, okay. Hi, I'm Johannes, I'm also based in Germany. I work for Deutsche Bahn, the German railway company, specifically DB Systel, and I'm a DevOps engineer there. We run multiple clusters on Amazon for a CMS, a content management system, and we are also interested in SLOs and monitoring, and I think I can learn a lot here. So, whoever hasn't gone yet... I'll hand over to Daniel. Daniel?
M: Hi, I'm Daniel, I'm in North Carolina in the US. I work for Nobl9; actually, Kit is my manager, so I'm one of the SEs at Nobl9. I got really interested in SLOs and how they work about nine months ago, eventually came over to Nobl9, and I enjoy helping customers understand SLOs and make their environments more reliable. And I'll try... Nicholas?
E: I still haven't gone. Hi everybody, this is Avi. I am another sales solutions engineer at Nobl9, and I work with Daniel and Sal; Kit is my manager as well. Very happy to be here. I recently started at Nobl9, and previously I was working at New Relic. As some of you might know, New Relic is a monitoring tool, and one of the major things that we really couldn't solve... it's almost loud in here... yeah, one of the major things we couldn't solve at New Relic was that we couldn't bring all of our information into one tool to understand what SLOs our customers could have. That is the problem that Nobl9 is solving, so I got very, very excited about this. I've been on the team for a month, and I'm here to learn a lot as well.
N: Go on, Michael, come on. I thought I could fly under the radar. Hi, my name is Mike Wagner. I'm a senior product manager and senior software engineer at a company called ZKW, part of the LG corporation, and we build headlamps for the automotive industry. My main part, my day job, is to build high-performance applications like CI/CD systems or ray tracing, such stuff.
A: Perfect, thanks everyone for the kind introductions. We're lucky to have a great audience today, and with that I would just say we're all waiting to learn more about Pyrra. Matthias and Nadine, do you want to kick us off?
B: Sure thing. Let me quickly... as I said, we didn't have a lot of time to prepare, but, well... that's super big. How is that looking for you? Because I have an ultra-wide screen. Let me know. Perfect.
B: Good, okay. So basically we're here to talk about Pyrra. I think that's how I pronounce it, from the Greek wording, but "pyra" or whatever is fine as well. Nadine and I, as Nadine kind of mentioned, set out to make SLOs with Prometheus more manageable, accessible and easy to use for everybody; that's the slogan. The process started, I think, sometime in spring.
C: Yes, sure, thanks Matthias. So I can imagine that you're also curious about the process, so I would like to dive a little bit deeper there. First, maybe, Matthias, could you go to the next slide? Yes, thanks.
C: So, for example: how might we create a very minimalistic UI to really guide users? How can we showcase actionable steps? And how could it be just an MVP which is extensible in the future? For this MVP, and for feature prioritization, we used (next slide, please) different methods, for example this one. As you can see here, we came up with must-haves and could-haves, also should-haves and won't-haves, which we would like to include in the future. As a next step, we iterated a lot on these, and we are still iterating a lot. And the next one already: the testing we did with this prototype, which Matthias wants to explain a little bit more.
B: Okay, yeah. While figuring out all of these things, we also started hacking on this prototype, kind of an MVP thing that isn't supposed to look fancy, but it's proving: are we showing the correct things? Is it in line with all the research and all the interviews that we did? And can we build something, based on our recent experience using SLOs, that would be helpful as an open source product that you can run next to a Prometheus instance? So we kind of started these things in parallel, but then they influenced each other, right?
B: And for the last slide: we basically just took that and really uplifted it with some design guidance, spacing, colors, to make it something that is enjoyable to use. Because we all know Prometheus as a UI, and I do love the project, but that's always been missing from Prometheus itself. That's almost all there is for the slides, and I can just show you the live project. Or, maybe before that: are there any questions already? All right, we'll talk a bit more about how to set things up and so on, but...
B: Yeah, good question. We basically had the idea to use that name; it's, I think, the niece of Prometheus. But the spelling... a bunch of people already use it for different products, not only software, and then it's kind of hard. We also asked...
B: Exactly. So basically it's a typo, but it's on purpose, to make it easy. And then you can see the UI, something that is now usable. This hosted demo, you can find it at demo.pyrra.dev, and it is essentially just a Docker Compose setup running a Prometheus in the back end. From here you can always open the Prometheus running in the background and just use Prometheus as you know it.
B: But then, obviously, you have this more specific UI, which is still the MVP. As we said, it is something that is evolving, and we have multi-burn-rate alerts and all of this. All of what you see is generated from the config, and I think, given that most people here are engineers...
B: ...that's probably what you were waiting for. So in Pyrra, the idea was to really have the user just specify metrics for errors and for the total amount of events that happened. SLOs themselves work by always counting the errors against the total amount of events, and over time you measure how many errors there are. So, for example, the objective up here is that 90% of the requests should be successful, and then you measure the actual successful events that happen in the system you're looking at. The inverse of that, out of the 90% that you have as the objective, would be that 10% can actually fail.
B: That is usually called the error budget, and with SLOs we always talk about wanting to stay within this budget, in this case the 10% that we have to work with. Right now we still have 33% of the error budget left. The math behind this is doable, obviously, but you kind of need to dedicate an afternoon or two to really understanding what's going on. So the project had this idea of using just these metrics, metric selectors basically, to count the errors and the total amount of events, and then everything that follows from that happens in the background.
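The budget arithmetic behind the numbers mentioned here is compact. A worked version with the demo's figures (90% objective, 33% of the budget remaining); the symbols are just shorthand introduced for this sketch:

```latex
% Error budget for an SLO with target T:
%   B = 1 - T
% With a 90% objective: B = 1 - 0.90 = 0.10, i.e. 10% of events may fail.
%
% Remaining budget after observing an error ratio e over the window:
%   B_remaining = 1 - e / (1 - T)
% For example, if 6.7% of events failed against the 10% budget:
%   B_remaining = 1 - 0.067 / 0.10 = 0.33, i.e. 33% of the budget left.
\[
  B = 1 - T, \qquad
  B_{\text{remaining}} = 1 - \frac{e}{1 - T}
\]
```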
B: So, for example, from the total amount of requests that happen, we can see the requests per second, and that is probably something you are quite familiar with from Prometheus. If we open this up, you can see we want a rate over five minutes and then graph that. Pyrra takes the metric that we have in here and, it doesn't just template these, it actually parses the queries and then does a full-on substitution of the metrics. And the same goes for the errors, for example.
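The kind of rule being described can be sketched as a pair of Prometheus recording rules: a 5-minute rate over a total-events selector and over an errors selector. The metric name, `job` label, and rule names below are illustrative, not the exact rules Pyrra generates:

```yaml
# Illustrative recording rules for an SLO's total and error rates.
groups:
  - name: slo-http-rates
    rules:
      # requests per second, averaged over 5 minutes
      - record: job:http_requests:rate5m
        expr: sum by (job) (rate(http_requests_total{job="api"}[5m]))
      # error (5xx) requests per second over the same window
      - record: job:http_request_errors:rate5m
        expr: sum by (job) (rate(http_requests_total{job="api",code=~"5.."}[5m]))
```

Precomputing both rates lets the SLO queries and alerts divide one recorded series by the other instead of re-evaluating the raw selectors everywhere.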
B: There are some errors, so we can see that this just happened when we started the meeting, and then, as we progress... these are usually the queries people are familiar with, right? And then again, the system, or Pyrra itself, can take these metrics and construct these rather complicated queries that you would otherwise need to understand, and it does all of that based on the config you give it at the bottom.
B: So in here we said: we have this error metric, we have this total metric, and then we have the target, and we want to look at this over four weeks in this example. Pyrra takes all of this information and generates these queries for you as well, and as we can see, we have the ninety percent objective over four weeks, and it really generates the whole thing.
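A config along the lines described (an error metric, a total metric, a target, a four-week window) can be sketched in the shape of Pyrra's ServiceLevelObjective resource; the field names follow Pyrra's documented format at the time, and all values here are illustrative:

```yaml
# Sketch of a Pyrra SLO definition matching the description in the demo.
apiVersion: pyrra.dev/v1alpha1
kind: ServiceLevelObjective
metadata:
  name: api-request-errors
spec:
  target: "90"   # 90% of requests should be successful
  window: 4w     # evaluated over a rolling four-week window
  indicator:
    ratio:
      errors:
        metric: http_requests_total{job="api",code=~"5.."}
      total:
        metric: http_requests_total{job="api"}
```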
B: So you don't really need to understand the why up front, at least. And most importantly: so far we basically just looked at querying, but most importantly we want to alert on these, right? There's something called multi-window, multi-burn-rate alerting in the books, and I'm sure we can discuss it in a bit as well if there are questions. We want to use the same metrics in the background and generate the rules and, most importantly, the alerting rules.
B: If we look at these, we see that we do get the multi-window burn rates for one hour and five minutes, for example, so these are combined, and the same is true for 6 hours and 30 minutes in this case, because we had a window of four weeks, I think, for the demo. The way this is wired up is that Prometheus gets these alerting rules and recording rules, loads them into its system, and going forward... yeah, okay.
B: Hourly, exactly. So we have a recording rule for the burn rate over five minutes, 30 minutes, one hour, etc., and then at the end we have the error budget burn alert, where things get really complicated, in a way that explains why people usually use a generator for these things: you need four of them, they're somewhat intricate, and when you change one thing, you want all of them to change, right?
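One of those four alerts can be sketched as follows, in the style of the multi-window, multi-burn-rate pattern from the SRE Workbook rather than Pyrra's exact output. The alert only fires while both the long (1h) and short (5m) windows are burning budget faster than the chosen factor, so it pages quickly but stops paging once the burn stops; the recording-rule names and the burn-rate factor are illustrative:

```yaml
# Sketch of one of the four multi-window burn-rate alerts.
# error_ratio recording rules are assumed to exist (errors / total).
groups:
  - name: slo-burn-rate-alerts
    rules:
      - alert: ErrorBudgetBurn
        expr: |
          job:http_request_error_ratio:rate1h > (14 * (1 - 0.90))
          and
          job:http_request_error_ratio:rate5m > (14 * (1 - 0.90))
        for: 2m
        labels:
          severity: critical
```

The other three alerts repeat the same shape with longer window pairs (e.g. 6h/30m, 1d/2h, 3d/6h) and smaller burn-rate factors, which is exactly why changing the target by hand means touching all four, and why generating them is attractive.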
B: That's what you get with Pyrra out of the box. It has the UI that generates the queries for you based on the config, and it will also generate the alerts and put them into Prometheus. You can then hook that up to an Alertmanager, which will send whatever you have configured there, Slack or PagerDuty notifications, when something actually happens and you burn too much error budget. And yeah, that's kind of it right now. As I said, it's just an MVP; we started working on this a couple of months ago, and we do hope for contributors and participants from the community, because this is just a free-time project, and we really want to invite people to join this effort. One of the things I want to contribute, or work on, in the future is, you might have seen...
B: ...the config is something specific to Pyrra itself, and there's a standardization process going on called OpenSLO, and we really would like to support that as well. Then we could integrate with, for example, Sloth, another similar tool that's out there, or with the folks from Nobl9. So it really becomes something where you can just put Pyrra on the Prometheus that you run on, I don't know, your home server in the basement, and then move to something else for more serious things.
A: I don't have a question, I just want to say it looks great. So thanks for your hard work on this free-time project, hopefully we'll see you online again in the future, thanks for joining today, and have a great evening after the call. As for the tool itself: I think we talked about this before the meeting today, before the cafe. How does this tie into the OpenSLO specification, Matthias, and also Kit and everyone else? Is it just something on the roadmap? How can we achieve that, and how can we place Nobl9 in the picture?
G: Well, it's interesting, I think... one... hello, sorry, my mute.
A: Yup.
G: Can you hear me okay? Yeah. I wanted to actually have Daniel show some Prometheus in Nobl9,
G: ...if that's all right, because I think this is a really interesting use case here, as we think about the OpenSLO YAML format that we're using to build, you know, sort of the GitOps workflow around SLOs: taking the hello-world data, being able to generate and render these SLOs, and then being able to show them in kind of a production-ready system, so that we can start doing less math and getting more sleep. That's kind of the big idea of SLOs, right? We can focus our energy on solving reliability issues and being productive for customers. I don't know, do you guys want to see what Daniel has? I think he tried to get this working before the meeting here. Or is there more to show in your demo, Matthias?
B: I guess, yeah, we can definitely look at your demo. I think it fits really nicely, because I definitely want to support OpenSLO; as I said, that's something on the roadmap, and I've actually talked to Xabi, I think that's how his name is pronounced, from the Sloth project, who already integrated with OpenSLO. So it doesn't even matter if you're using Pyrra or Sloth, or if you want to upgrade to Nobl9 as a hosted service. That's definitely something I'm happy to support and already plan on doing; I just didn't have time yet, as is always the case. But we can continue with this, we can play around with SLOs and see how things turn out. I'm happy to just look at the Nobl9 demo, for sure.
M: You can take a look at these, and you can see it's typical Prometheus requests to get the metrics in, and from the metrics we're calculating the reliability burn-down, your error budget and your error budget burn rate. So as we pull the data from these SLIs, that's what this blue line is here, we're actually just burning down the error budget, and you can kind of see where the burn-down is taking place. The nice thing about this is that we're looking forward, and we do everything inside of these as we convert to OpenSLO YAML. So you should be able to start with one project and move to another as you need it and as your needs change, keeping things very open. This is the YAML that we use.
M: We have a binary here called sloctl that makes it very easy for you to grab your SLOs from our system and put them into a file, maybe upload that into a Git repo, and then you can use a GitHub Action to do that. The vision for us is to make it so you can create SLOs without having to worry about doing the math, or spending those days and hours trying to make that work; we handle this for you. We also support quite a few other integrations here.
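The OpenSLO YAML being moved around by sloctl and a GitOps workflow looked roughly like this in the spec's early (v1alpha) form. The field names follow the public OpenSLO spec of that period, but the service, metric names, and queries are illustrative:

```yaml
# Sketch of an SLO in the early OpenSLO (v1alpha) format.
apiVersion: openslo/v1alpha
kind: SLO
metadata:
  name: api-availability
  displayName: API availability
spec:
  service: api
  budgetingMethod: Occurrences
  timeWindows:
    - unit: Day
      count: 28
      isRolling: true
  objectives:
    - displayName: Good requests
      target: 0.90
      ratioMetrics:
        good:
          source: prometheus
          queryType: promql
          query: sum(rate(http_requests_total{job="api",code!~"5.."}[5m]))
        total:
          source: prometheus
          queryType: promql
          query: sum(rate(http_requests_total{job="api"}[5m]))
```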
M
So
if
you
have
other
systems,
we're
constantly
adding
things
right
now,
so
these
are
all
listed
in
our
product,
but
you
can
also
look
on
our
website
and
I
can
post
something
to
the
slack
channel
later,
but
the
nice
thing
about
it
is
it
definitely
you
know
if
you
start
with
matthias's
project,
you
can
easily
come
to
our
project.
You
know
that's
one
of
the
beauties
of
open
source.
B: Yeah, I just want to point out that your support for different integrations is quite a lot more than what we have; we definitely only support Prometheus, and that's our goal. But having the Nobl9 service is amazing for having all of these different integrations, for sure. One of the questions, given that setting it up for the demo was a bit short on time: how do you actually get the data into your system? I guess I don't really need to care about how it's stored, but how do you integrate with these different services, with Prometheus obviously being the one I'm interested in?
M: So you just need a URL, and once you have that URL, we give you a Docker container that you can run, either inside of Kubernetes or in Docker. You just fire that up; it's a very lightweight container. You can see I have Docker Desktop running on this Mac here; I run quite a few of these agents on the Mac, on my personal desktop, but you can put this in your own environment. So take your regular Prometheus queries and put them in just like you normally would to configure the threshold. My agent is having a little bit of a problem talking, but once you get the data source in, the ratio metric is already there; you can define the time window that you want, either rolling or calendar-aligned, and then your error budget.
M: You can also do different alert methods in here, so we have integrations for alert methods: if you have PagerDuty or Discord or ServiceNow, or a Slack system like we're using here for the OpenSLO Slack channel. One of the new things that we've done is this custom webhook, which allows you to send alerts to a system that you've built. So this would allow you, in your CI process, if you wanted to surface errors, you know, hey...
B: I do have a follow-up, if no one else has any questions... go ahead. Yeah, so Pyrra supports the multi-window burn rates; how do you do the alerting with Nobl9? You said you configure the agent, the data shows up, and then you just tell people, right, but is there anything specific? As we said, multi-window burn rates are mentioned in the books; do you have that as well? Or... okay, so you can configure basically whatever your needs are, or...
M: Yeah, you can do: when your error budget would be exhausted, which in this one is days, hours or minutes, and the condition is going to last for hours or minutes. Probably the most popular one for people that are new to SLOs is the remaining error budget. So if you have something based on a 30-day error budget, and you have 10 percent left and you're on the 10th day of that 30-day window, you know it's time to call together...
M: ...your stakeholders and your customers and say: hey, come up with a plan, right? We also allow you to do the average error budget burn rate. We find this is very popular with people who are pretty advanced with SLOs, because it would allow you to see, say, that you put out a new version of your project and your burn rate kicked up a little bit, so you can see when things are going to go wrong a little bit quicker than you would with your standard...
M: ...monitoring conditions. And then, once you pick one of these alert conditions that you want to use, it's a simple matter of just assigning it: putting a display name, picking your severity level, high, medium or low, and then testing here so I can get a pass, and then choosing how you want to send the alert message: through Discord, PagerDuty, ServiceNow, Slack, a webhook, or Jira if you want to create a Jira ticket. So, creating Jira tickets: these are some of the ones that I've created. We should definitely... we should probably try to create one, yeah.
A: Yeah, cool. I have a question: if I'm a beginner with SLOs and with using all the different tools, where should I be starting? Should I be looking into Pyrra? Should I be looking into Nobl9? What is the best learning experience, the best journey to get immediate success with SLOs?
M: Obviously, you know, I think we do a pretty good job with it at Nobl9. It really depends; it depends on budget and on what you're trying to accomplish. I mean, if you don't have a high budget, maybe a free tool will do the job for you. I also recommend picking up Alex Hidalgo's book, Implementing Service Level Objectives; it's an O'Reilly book, and that's where I started learning about SLOs and how to do them. Actually, I started figuring things out and calculating them in spreadsheets.
G: The thinking is often: we've got to wait until our system gets reliable and sophisticated and all this kind of stuff. The reality, what we've seen with people who are successful, is that they put the SLOs first. They get their first SLOs in place, because it tells them where they want to go, what they're trying to measure and what they're trying to accomplish, and then all the investments that they make, and the improvements and everything else, have a lot more meaning and a lot more focus. So we definitely encourage people to look at how to create SLOs from the data they have, what's readily available.
G: You can actually start to understand if it's getting better or worse or staying about the same, right? And I think that's a really powerful thing for people to realize: they don't need to first go make everything great. It's like: oh, I need to lose a bunch of weight before I start going to the gym. You know what I mean? It doesn't make any sense. You want to start weighing yourself every day, start doing the better behaviors, start trying to make better decisions, and then the destination will be better reliability and efficiency, which is what we're really trying to get at. SLOs are not the destination; they're a tool to help you along the way.
B: Yeah, 100 percent agreed. One thing that I mentioned in the SLOconf talk (not OpenSLO, SLOconf) that Frederic and I did was to really look at the data that you have, as Kit just said. And what I really love doing is basically starting from the load balancer, or somewhere super close to the user, setting something very generic almost, and then taking it one step at a time, getting further down into your system and getting more specific afterwards.
B: But I think, as Kit said, you can just start. You probably have load balancer metrics even if you don't know about them, so just having those, you can already start with them today, not wait until whenever, and then improve on them in the future. And since I mentioned SLOconf: next to the book that Alex wrote, I want to shout out all the videos and all the speakers from SLOconf, which Nobl9 hosted. I do think, especially for beginners, there are fantastic talks in there, also from Michael, who is hosting us now. Different vendors and different people from the community really came together, and it's an amazing set of videos. You can just spend hours there if you feel like learning about SLOs.
F: Hey Matthias, how long did you say you've been working on Pyrra?
B: So, yeah, a couple of months. It's kind of cheating, because I've been working on SLOs for two years at this point and, as I said, trying to put them in place at Red Hat before, so I came in with a bunch of experience. And a lot of the query generation also came from the slo-libsonnet tool that I wrote, which generated them with jsonnet. So yeah, it's a bit of cheating to say I only worked four months on this, but...
F: On the tool itself: it's super impressive, and I think, in terms of picking out a place to start, the more things like this that we see in our community, the easier it is for people to get started, right? This is something you can pick up with no budget and implement. How long would you expect an implementation of this tool to take for someone with a fair amount of Prometheus knowledge?
F: I don't feel like this is a giant lift. I feel like, if I wanted to introduce SLOs into an org, and maybe I don't have the budget or I don't have the buy-in yet, I could start with a project like this and start showing very early benefits of getting SLOs in place. I think that can help an engineer in an org go get buy-in and build up to a decision point of: okay, we definitely see that we need SLOs in place before we try to do hardcore reliability engineering. And that can be a way to earn budget and earn buy-in before you go and make an investment in a tool like Nobl9 or one of the other vendors that's out there.
F: I think leveraging the things that the community is doing, like this project, like the Sloth project, like SLOconf, is the best place to start: just reaching out to somebody in the community, especially from SLOconf, and saying, hey, I'm interested in getting started, can you give me a hand? Can you point me in the right direction?
D: Yeah, and I would also add that we can come at this from different points, right? We're coming up from operating things in production, so you know how your application behaves. But probably also a good point is starting from the other way: when you're starting to develop your application, giving your developers the possibility to see the metrics, and probably also to have quality gates on them, and to see: okay, hey...
D: So I think there are different ways to tackle that, but I really like that we've approached the simplest one. And you probably don't need tons of data; probably two or three metrics are enough for doing all the stuff here. It's not like you need 100 metrics to get all the information. And what's also important here, what I currently see, or saw when I started with metrics, is that a lot of tools and a lot of stuff bring a little bit more...
D: ...noise. But what we keep coming back to, and it's important, is signals, and there should be only three or four signals; that is what a human is capable of taking in. Not that we have 100 metrics and no one is looking at them anymore. Then we can say: oh, okay, we don't need this metric anymore, because no one is using it, or the system can take care of part of it.
D
But then we need predefined rules — like the burn rates and all the SLO machinery — so that the evaluation ultimately takes place automatically. And the same is true for the delivery side; it's out there in the cloud-native space. For example, from the cloud-native space we have Keptn, so if you want to go the Keptn way, you can do it there as well.
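As an aside, the burn-rate arithmetic mentioned here can be sketched in a few lines (a toy illustration; the numbers are invented, not from the talk):

```python
def burn_rate(error_rate: float, slo_target: float) -> float:
    """Return how fast the error budget is being consumed.

    A burn rate of 1.0 means the budget is spent exactly at the end
    of the SLO window; higher values burn it proportionally faster.
    """
    error_budget = 1.0 - slo_target  # e.g. 0.001 for a 99.9% target
    return error_rate / error_budget

# With a 99.9% target, a 1.44% error rate burns the budget about
# 14.4x faster than sustainable -- a classic fast-burn alert level.
print(round(burn_rate(0.0144, 0.999), 1))
```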
D
That's the tool coming out of Dynatrace, where they're doing exactly this job from the development side. It's really for when you want to do continuous deployment — a lot of people talk about continuous delivery, and continuous deployment is probably the next level we're talking about here.
D
So when you really want to roll out directly from your development machine into production in a short amount of time — mostly about an hour or so. That differs between teams too, and it gives you the flexibility to define it your own way. Not everyone needs to deploy in 15 minutes; not everyone needs to deploy every day; some people only need to deploy once a month. Everyone gets that flexibility, and the tooling does most of the calculating.
A
I think everything is tied together somehow. We're just on a learning journey, figuring out which tools are out there and how we can use them to be successful — and not having to learn yet another Jsonnet box wrapped into something else, but just generating the stuff. Whether we use Jsonnet or something else, it needs to be more approachable, because I personally struggled a lot to write the best PromQL query, and once I found it, it was copy-paste, copy-paste.
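For illustration, that kind of copy-pasted query could instead be rendered from a template (a toy sketch; the metric and label names are invented, not from any real setup):

```python
from string import Template

# A hypothetical error-ratio SLI query, templated once so it never
# has to be copy-pasted and hand-edited again.
SLI_QUERY = Template(
    'sum(rate(${metric}{job="${job}",code=~"5.."}[${window}]))'
    ' / '
    'sum(rate(${metric}{job="${job}"}[${window}]))'
)

def error_ratio_query(job, window="5m", metric="http_requests_total"):
    """Render the error-ratio PromQL for one service."""
    return SLI_QUERY.substitute(metric=metric, job=job, window=window)

print(error_ratio_query("api"))
```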
A
But every time you do something on repeat, it should be automated — it should be hidden from you. I think it's a great way forward to find a single specification with OpenSLO, and also to have UIs and a CLI to generate these. And maybe in the future we can find ways to use machine learning — I don't know, something that detects that there is a load balancer in your system and does it automatically for you — or go a little easier and say: I'm monitoring a website.
G
Put an SLO on it. You know, it's actually funny — on the YAML generation piece, I know Zach has been doing some work in this space. I don't want to put him on the spot, but he's got a little project going where he's generating YAML for different environments. I don't know, Zach, if you want to share a little bit about what you're doing there, because I think it's pretty cool.
F
Yeah, so at OutSystems we have a fairly healthy scale, and we have lots of SLOs that we want to apply to lots of different environments, and they're all very similar. So what we've done is generate SLO YAML files and then turn those into templates, where we can drop in specific pieces of the metadata spec and specific pieces inside the queries. Then we can just build another config file to pick up that template, populate it, and use the CI/CD system to roll those out and start applying those YAMLs. That took us to where an engineer who doesn't have a lot of experience with SLOs is able to understand:
F
You know, hey: these are the most critical parts — go apply this SLO template to many environments — so that we can start building those kinds of cheat-sheet config files that we can go and apply.
F
You know, applying a set of SLOs to 100 customers in a day — by generating an SLO YAML file, doing some fast-and-dirty templatizing, and then being able to go back, populate those variables, and apply — that's a game changer for us. And that's one of the things that, when I look at where OpenSLO started, where we're at today, and where we're going —
F
What makes me really excited about it is that ability to generate the SLO YAML and then populate in the values that I need. I've not been the greatest contributor so far — I've been busy and haven't made a lot of the meetings.
F
But that's one of the features I'd like to work on, adding it to oslo and OpenSLO: being able to say, give me an SLO YAML, create a template based on these places in the spec, and just spit that out of the tool for us — so that no matter what vendor we're using, we can generate SLO templates and then stick them in a CI/CD system.
F
If I need to go out and create a hundred thousand SLOs, I don't want to have to send an army of people to do that. I'd like to do it with machines and save a bunch of engineering time. And if I can do that, I can then go to my leadership and say: look at all this engineering time we saved — let's move on to the next thing.
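The template-and-populate flow described here might look roughly like this (a toy sketch; the field names and values are illustrative, not OutSystems' real templates or a faithful copy of the OpenSLO spec):

```python
from string import Template

# A cut-down, OpenSLO-flavoured SLO document with placeholders.
SLO_TEMPLATE = Template("""\
apiVersion: openslo/v1alpha
kind: SLO
metadata:
  name: ${service}-availability-${env}
spec:
  target: ${target}
""")

def render_slos(environments, service="checkout", target="0.999"):
    """Return one rendered YAML document per environment."""
    return [SLO_TEMPLATE.substitute(service=service, env=env, target=target)
            for env in environments]

# A CI/CD job could write each document to a file and apply it.
for doc in render_slos(["eu-1", "us-1"]):
    print(doc)
```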
B
Yeah, that's kind of what we did with Jsonnet, just generating the PromQL itself. I do think that generating the spec for OpenSLO is kind of cool, because it's a bit more high-level: it really templates the most important aspects and hides the implementation detail. At the same time, I do think we can do better as an industry — and you already have an amazing Nobl9 UI.
B
So I would love to see some way of integrating the UI and the templating, because I always feel like falling into the trap of just templating everything — I've definitely been there — and then it's not really approachable for the user in the end again. So I do think there's a difference between beginners, who just want to use the UI, and hardcore pro users, who really just want to generate everything.
G
It's like having the command line, the API, and the UI. I think this is a very important design point, and we're seeing it in all the modern cloud platforms and open source projects: the API is so important, the command lines are so important, and getting the ergonomics right for the developer is so critical. But then we can't forget about our beginner users — the people who want to visualize things and click through stuff.
G
They want guided tours. I think it would be really interesting — and this is part of my inspiration for the hello world, to bring it back to that discussion — if we could create some public resources of data sets and queries. Because we all say: oh, you need PromQL, you need this, you need that. But what I think we're missing is working SLO code — I mean, there's a lot of SLO literature, right?
G
What I think we're missing on the working-code side is really ready-made data sets and ready-made examples, so that people don't also have to struggle with the existential question of what matters to their customers. We can say: okay, here's something that exists; we may have some assumptions about what matters to the mythical customers of an example data set; and then we can build really strong working
G
examples that plug into these different technologies, showing Pyrra and OpenSLO, how the GitLab workflow works, how the Nobl9 piece works — and we can put all these pieces together as a public resource. That's what I think would be very cool; I just don't know if anybody's willing to work on it, but I think it would be.
B
Yeah, for sure. I mean, for the Pyrra demo I already have categories like response errors and latency, for example.
B
So I do think that, especially given the familiarity — weird word — with HTTP for most people, something like 99%, or however many nines of availability you want for a given HTTP server, really makes sense, because it hides away the complexity of learning about something else. It's just HTTP, and that's, at least for me personally, usually what I think of as the hello world — just giving my opinion here.
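As a quick aside, "the nines" mentioned here translate into a downtime budget like this (a minimal sketch):

```python
def allowed_downtime_minutes(target: float, window_days: int = 30) -> float:
    """Error budget expressed as minutes of total downtime per window."""
    return (1.0 - target) * window_days * 24 * 60

# 99.9% over 30 days leaves roughly 43.2 minutes of downtime budget;
# 99% over the same window leaves about 7.2 hours (432 minutes).
print(round(allowed_downtime_minutes(0.999), 1))
```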
G
I wonder if we should start a new repo under OpenSLO called hello-slo. We could create a new project there and maybe put some ideas and things like that in it. We can propose this with Ian and the other OpenSLO folks, but I think it could be a cool place to house the work, and then we can point to some of these other projects, or data sets, or other things that might be useful for that effort.
B
Yeah, for sure, I'm definitely in. And because Nobl9 supports more than just Prometheus — it feels like most of these systems do have some notion of HTTP servers, right? So that's kind of the point: how do you measure with Prometheus as a backend, and how do you measure with whatever other system
B
there is — basically the same nginx metric, for example, just with a different backend. I do think that would be a fantastic thing, and I'm 100% on board with doing this in an OpenSLO repo. I think that would be a great place to have it.
G
You want to file an issue or a pull request or something? Come on, let's make it real, live.
D
Yeah, what comes to my mind: when we're talking about Prometheus, what can we use instead of writing and instrumenting an application ourselves? Because we said, okay, let's use a load balancer or something else — so we could probably leverage the Blackbox Exporter and monitor a simple website, to get results from that for free and build on them.
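A toy sketch of the SLI such probes would feed (the sample values are invented; only the `probe_success`-style convention is borrowed from the Blackbox Exporter):

```python
# probe_success-style samples (1 = probe succeeded, 0 = it failed),
# as the Blackbox Exporter would expose them; the values are made up.
samples = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]

def availability(probe_results):
    """Fraction of successful probes -- the simplest availability SLI."""
    return sum(probe_results) / len(probe_results)

print(availability(samples))  # 0.8
```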
B
Yeah — let me quickly share the screen again; not to work on the issue I started filing, but to show the demo.
B
I thought I was going to say, hey, let me show you how to file an issue — but you already opened it, good man. But jumping on the bandwagon here:
B
I thought about having just these load balancer metrics. In here, for example, there is Caddy, which is the web server I'm using for this — but I already started with the demo app, and I'm happy to contribute that upstream. I mean, it's already open source, but I'm happy to put it into OpenSLO as well. These metrics are just counters in memory being exposed, and we can kind of fake having load
B
Balancer metrics — I mean, the nginx metrics are pretty well known at this point, for example, so we could not even have an nginx running, but just synthetically generate those metrics and then work with them. I do think that's possible.
B
I already started with, for example, these hourly patterns, where there's just a timer, and just before the hour it will error for five minutes, giving you funny patterns over the week. So we can use the Blackbox Exporter, but I do think we can even fake it — the question is how good the random data we get is, and whether it's somewhat representative of real SLOs.
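The hourly error pattern described above can be faked in a few lines (an illustrative sketch, not the actual demo code):

```python
# Fake "load balancer" counters in the spirit of the demo: traffic is
# healthy except during the five minutes before each full hour.
def is_error_window(minute_of_hour: int) -> bool:
    """True during the last five minutes of every hour (minutes 55-59)."""
    return minute_of_hour >= 55

def synth_counters(minutes: int):
    """One request per minute; return (total_requests, errored_requests)."""
    total = errors = 0
    for m in range(minutes):
        total += 1
        if is_error_window(m % 60):
            errors += 1
    return total, errors

# Over two simulated hours: 120 requests, 10 in the error windows.
print(synth_counters(120))  # (120, 10)
```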
A
Where is openslo.com being hosted? Is it running nginx in the background? Maybe we could dogfood the metrics from the website — the more often you press command-reload or F5, the more metrics you generate. I don't know, it could be an interesting idea: hey, we are kind of monitoring ourselves — or breaking ourselves.
G
Yeah — I like that idea a lot — and it might also be cool to add a load generator, right? Maybe we could host a page where you can use something like k6 and put some load profiles in, because I think that's one of the challenges: getting some data and having an error rate. It would be almost like controlled chaos.
F
Yeah, I like the idea of running some k6 to throw synthetic load at things so that we can see variation in the data. That's a really good idea — I've done that in the past when testing to see whether an SLO was going to blow up on me or not. So that's a great idea.
D
And then we could also use the k6 data itself and put some SLOs on that. Though when I used k6, I think two or three months ago, they didn't support Prometheus directly — you could only export the k6 metrics to other backends. But that's not a problem: you can forward that data to Prometheus and then use Prometheus as the data source.
G
You know, there's another possible resource we could add: in Alex Hidalgo's book there's a mythical project called Wienerschnitzel — it's one of his examples. We actually created the site, and we haven't really done anything with it yet, but it's up and live, and it'd be kind of cool to add some SLOs on top of it and show those as examples in the hello world as well. Then it would be an exercise for the readers, you know what I mean — tie it back. That was kind of the idea.
A
Amazing, awesome. Now I have so many ideas in my head and we have so little time, but it was really great to build the bridge from learning something new about Pyrra, to peeking into Nobl9 and seeing how it works with Prometheus, to discussing SLOs for beginners — and now coming up with ideas like the hello world application and the getting-started guides. Looking forward to collaborating and contributing on that. I would say let's wrap it up for today. Thanks, everyone, for joining — we should do this again sometime and see how far everything has come.
G
The only question is the format — that's the only question. We'll be talking about it soon; I'm sure you guys will be invited to the planning committee.
A
We will — we will be hosting SLOconf next year, and the year after, and the year after that, and we will bring SLOs to everyone who's been asking themselves why they don't have them yet. Until then, I wish you a pleasant afternoon or evening, wherever you're located. See you online, and hopefully see you in person soon.