From YouTube: Grafana/Prometheus Monitoring Update for CS
A
To do that, you know, the Prometheus community is still trying to expand on good ways to improve these things. So if we're talking about customers that have deployed Prometheus and have applications, of course the first thing that needs to happen is that the customers need to instrument their applications. And so, yeah, it's kind of underwhelming when you're...
A
first looking at it. If you're talking about running an application in a Kubernetes cluster, you get some container metrics, and yeah, you can see how much CPU and memory your pods are using. But that's, you know, not super interesting. The interesting stuff comes when you start to actually add custom application metrics to your apps. Yeah, and, you know, I'm...
A
You know, I'm looking at that right now in the GitLab codebase. We're looking into things like, in the Rails code, we use Redis as a cache, and we're looking for better insight into how the application uses the Redis code, so that we can find out where our performance is going and when we're having performance problems with the application talking to Redis.
A
So, you know, we want to be able to tell, based on the data that we get from Redis and the data we get from the Rails app, if there's a mismatch, or if it's just that the Rails app is hitting Redis too hard, and which controller is using Redis the most, and things like that. And what does the heat map of performance look like? So, you know, the hard part to tell the customer is:
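The per-controller question above maps naturally onto a PromQL aggregation once the instrumentation exists. A minimal sketch, assuming a histogram named gitlab_cache_operation_duration_seconds with a controller label (the label name here is an assumption):

```promql
# Time spent in cache operations per controller over the last 5 minutes,
# ranked so the heaviest Redis users come out on top.
topk(10,
  sum by (controller) (
    rate(gitlab_cache_operation_duration_seconds_sum[5m])
  )
)
```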
A
Well, you have to start adding instrumentation to your codebase, and that's where things start getting really interesting, really fast. And it may seem like a lot of work to start, but once it's in there, it's in there, and you can start to get the real power of having native instrumentation in your apps. Cool.
B
So can you kind of walk me through? Let's take your example of trying to evaluate the performance of Redis within the GitLab application. How would you go about getting that set up, and how would you achieve that in terms of maintainability as well? Right, once you've set that up, are those queries set up inside Grafana? Are they set up... yeah.
B
Okay, while you're getting this set up: I did hit the record button after I kind of intro'd this, and because we do want to share this with the CS team as well, I just want to give a quick intro to what we're doing here. Ben from the monitoring team is helping us kind of walk through how somebody utilizing the Prometheus and Grafana packaged in our Omnibus package can best utilize that inside their own application code, to monitor what's going on inside GitLab and Grafana.
A
So what I'm doing is: in the GitLab Prometheus data, we have a whole bunch of metrics, and what we're looking at here is controller...
A
So
we
have
that
we
have
this
metric
and
I
already
knew
what
this
metric
the
metric
that
I
was
interested
in.
This
is
actually
a
recording
rule
that
summarizes
some
data
and
if
I
were
to
graph
this,
it's
going
to
take
it's
going
to
be
way
too
much
data
to
look
at,
and
it's
not
going
to
look
great.
So
what
we
want
to
do-
and
this
is
probably
gonna
make
my
browser
super
slow.
So
we
want
to
filter
this.
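Filtering a query like this usually means adding label matchers and aggregating, so that thousands of raw series collapse into a handful of lines. A sketch, with a hypothetical recording-rule name and labels:

```promql
# Restrict to one environment, then draw one line per controller instead of
# one line per raw label combination.
sum by (controller) (
  controller:gitlab_cache_operation:rate5m{environment="production"}
)
```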
D
So Ben, this is Josh. Why are you doing that? Just, you mentioned a minute ago that the customer is going to have to instrument some of these in the code, I assume. Is that what we saw with the predefined list of values? Well, they're not predefined, then; those are things that you guys have instrumented inside of the GitLab code, and that's how you built that list of available monitors. Is that correct? Yep.
A
So we define this cache operation: gitlab_cache_operation_duration_seconds is a histogram that contains timing information about cache operations. Whenever there's an actual cache operation, it calls this observe function, which causes it to observe each of these things. The duration is in milliseconds, and so for Prometheus we divide that by 1,000 to make it in seconds, because Prometheus uses high-accuracy float64 data for this. All right, let's see if I can...
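The observe-and-convert step described here can be sketched in plain Ruby. This is not the actual GitLab code (which uses the prometheus-client gem); it is a stand-in that just shows the millisecond-to-second conversion feeding a histogram-style sum and count:

```ruby
# Minimal sketch of the pattern described above: a histogram-style observer
# that records cache-operation timings in Prometheus base units (seconds).
class CacheHistogram
  attr_reader :sum, :count

  def initialize
    @sum = 0.0
    @count = 0
  end

  # duration_ms arrives from the Rails instrumentation in milliseconds;
  # Prometheus conventions want seconds stored as a Float (float64),
  # so divide by 1,000 before observing.
  def observe_ms(duration_ms)
    seconds = duration_ms / 1000.0
    @sum += seconds
    @count += 1
    seconds
  end
end

histogram = CacheHistogram.new
histogram.observe_ms(250)   # a 250 ms cache read  -> observed as 0.25 s
histogram.observe_ms(1500)  # a 1.5 s cache write  -> observed as 1.5 s
```

In a real client library, observe would also increment the matching histogram bucket; the sum and count shown here are the pieces a query like rate(..._sum[5m]) / rate(..._count[5m]) relies on.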
A
So, yeah, in the normal upstream open-source Prometheus there's a whole section on instrumenting, and there are these client libraries for, you know, pretty much all the popular software development languages out there, so that you can instrument your code. And so with Prometheus we have this idea that, rather than having agents that you have to install on every machine, we were starting from the idea of containers, where it doesn't make sense to have, like, a monitoring agent in a container.
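The pull model those client libraries implement comes down to exposing current metric values as plain text on an HTTP endpoint that Prometheus scrapes, instead of an agent pushing data out. A minimal sketch of that text format in plain Ruby (the metric name and labels are made up):

```ruby
# Render one metric in the Prometheus text exposition format, the same
# shape of output a client library serves from its /metrics endpoint.
def render_prometheus_text(name, help, type, samples)
  lines = ["# HELP #{name} #{help}", "# TYPE #{name} #{type}"]
  samples.each do |labels, value|
    label_str = labels.map { |k, v| %(#{k}="#{v}") }.join(",")
    lines << "#{name}{#{label_str}} #{value}"
  end
  lines.join("\n")
end

puts render_prometheus_text(
  "app_requests_total",
  "Total HTTP requests handled.",
  "counter",
  { { "method" => "get" } => 42 }
)
```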
A
So here are some other, various things where we've instrumented our Rails app to tell us about things. We have a thing called the unicorn killer, and basically it watches for out-of-memory conditions on running processes and terminates them before they leak too much memory. And basically this tells us, you know, how many processes have been killed in the last 15 minutes, and so we have a metric that tracks this. And then this is a...
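A "killed in the last 15 minutes" number like this is typically a counter queried with increase(). A sketch, with a guessed metric name (the actual GitLab metric may be named differently):

```promql
# Worker processes the unicorn killer has terminated in the last 15 minutes.
sum(increase(unicorn_killer_terminations_total[15m]))
```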
A
This is just a visualization of the various ways that the unicorn killer has kicked in and killed those processes. And we just keep these queries stored in Grafana dashboards, so it's not super elegant this way. But we are working on a couple of ideas where we're going to start changing the way we do these dashboards: we're going to start checking them in to source code, and using a generator library that takes some more basic configuration language, generates the JSON, and then uploads that into Grafana. Yeah.
B
That was actually my next question, because I think that's what a lot of people are wondering about: you know, how do they maintain these types of things in a source-controlled way, so that they can have, essentially, ephemeral dashboards at that point? They can move them around from wherever their Grafana is located, and also just keep those queries in some sort of, you know, versioned manner. Yep.
A
... thing in there. So there's this work being done called mixins, and this is where you can take and generate Grafana dashboards using JSON. And instead of using other languages, there's a language called Jsonnet, which is designed just for generating JSON templates, and which is what the actual back end for Grafana uses. And...
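In that spirit, a dashboard fragment in Jsonnet might look like the following. This is an illustrative sketch, not GitLab's actual mixin code; the field names follow Grafana's dashboard JSON model:

```jsonnet
// A tiny helper plus one dashboard, compiled to Grafana's JSON with
// `jsonnet dashboard.jsonnet`.
local panel(title, expr) = {
  title: title,
  type: 'graph',
  targets: [{ expr: expr }],
};

{
  title: 'Redis cache overview',
  panels: [
    panel('Cache operation rate',
          'sum(rate(gitlab_cache_operation_duration_seconds_count[5m]))'),
  ],
}
```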
A
Where is this... It's just a thing where they were talking about generating service-level objectives.
B
I think that's kind of the next question there: you know, once you get these dashboards set up with your own Prometheus monitoring instrumentation, and then build out your dashboards, what is the connection back to GitLab? How do you utilize that, and how do you use that data in GitLab?
A
Yeah, so what we're working on, what the Monitor group is working on next, is the ability to feed these alerts back into GitLab, so that you can do incident management and tracking of, you know, the performance of your application using GitLab. And, you know, the Monitor stage people can probably talk more about that.
B
And then, in terms of, like, when I'm looking at our operations dashboards inside GitLab, and I'm looking at the Metrics tabs, right, which are just showing the graphs that are output just from Prometheus and not using Grafana: are there any connection points between Grafana and those dashboards, or anything along those lines? No, not right now. Okay, okay. So right now it's more about...
B
You know, we packaged in these solutions because we feel like that's going to help you guys as application developers, but really what you're doing in this case is using the upstream Prometheus and Grafana packages, kind of standalone in some ways, just bundled in with our solution for managing the connection points and things like that. Yeah.
A
It's more the other way around: we want to get more stuff more clearly and directly integrated, so the experience is a little nicer. So, you know, with Grafana, right, we spend a pretty significant amount of time clicking around and managing Grafana dashboards, and we want to provide a more out-of-the-box experience.
C
So I don't have anything specific, because I guess I don't know enough yet; I haven't wrapped my head around it yet. So, we were just talking about it: when we were planning the demo, Samira mentioned exporting...
C
The dashboards, and then importing them into their own instance. But just talking about it now, it makes sense that, you know, that's for monitoring GitLab as an app, but not their app. So a good question to ask them, basically, is to clarify: I would think that what they are looking to do is monitor their applications, what they are building, not GitLab itself. So that's something that I would look to clarify when we're on the call tomorrow, but I think just to give them an idea of what can be done.
B
Yeah, I feel like, you know, maybe what you do there is, well, first make sure that, like you said, you clarify that they are talking about monitoring their own application, not their instance of GitLab. Of course, I assume that that is the case, because if they did a good setup, they're seeing those dashboards that pertain to their application anyway. But yeah.
B
I think the key now is, you know, for us to go back and try to distill this down into: one, how people go about doing that; two, what some common use cases are around what people are monitoring in their applications, based on, you know, what they're trying to achieve as their end goals; and then the third one is more...
B
You know, the future of how we're going to implement monitoring in GitLab, and what the future of the connections is between the metrics that you're collecting and the dashboards back to GitLab, so you can do things like setting thresholds and alerts and, you know, taking action against those dashboards.
B
So a good next step might be, I think, that there's some good connection there to talking about the future of the integrations in GitLab, specifically around the Secure features that we're implementing. I don't know if you have much context around that side of things, but I know there's the remediation piece of it, and the live environments based off of thresholds.
B
You know, we can do some of that stuff now with just the dashboards that are in the Metrics tab under Operations, but it's pretty limited. So we should go get what that feature looks like, so we can show them how they could, you know, set up some metrics and things to do some automation around ensuring their applications are in good shape.