From YouTube: Monitor Stage Demo for Prospect
Hey, my name is Kenny Johnson. I'm the senior director of product management here at GitLab covering the Ops section, which covers Verify, Package, Release, Configure, and Monitor, but I wanted to specifically give an overview of some questions a key prospect, and hopefully future customer, had about our Monitor capability. So I'm going to jump right in. I want to start with an overview of how we're thinking about Monitor in general.
This specific content comes from our Ops section direction page, which is where we're headed across all of those stages, but I wanted to ground us in the context that our perspective on building tools is not necessarily for operators; it's for performing operations tasks, and we believe that developers are being asked to perform more and more of these operations tasks. So our capabilities are first and foremost centered around serving the developer user.
You'll see throughout the features and functionality I describe to you that we very much take a developer-first perspective. As far as the team goes, I do want to highlight that we have some great team members working on these features and functions, especially in the Monitor stage. We have product management expertise from Elastic and Big Drops, as well as team members who've joined us from New Relic and other existing monitoring companies. We have a very forward-looking perspective, and we have a total of about 20 team members working on this effort.
So it's a considerable investment that GitLab is making into these capabilities, ones that are maturing. We've really started to ramp up over the last year, but we plan to continue to build really extensive capabilities in this area. We believe there really is no complete DevOps platform without them, including the types of Monitor capabilities that I'm going to describe here.
One of our starting principles, especially with that developer-first mentality, is this idea of "as code." You've seen that when you've heard about GitLab's ability to define your CI pipeline as code right there in your project repository, and we take the same perspective when it comes to infrastructure and observability. I'll show you places where you can define dashboards and alerts as code. Those are really important principles that allow developers to collaborate with each other on ensuring that their application not only has the features and functionality, but also is observable and performant in production.
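For reference, the pipeline-as-code pattern mentioned here is a `.gitlab-ci.yml` file at the root of the repository. A minimal sketch; the stage names, scripts, and environment name are illustrative, not taken from the demo project:

```yaml
# .gitlab-ci.yml: the pipeline is versioned with the application code.
# Stage names, scripts, and the environment name are illustrative.
stages:
  - test
  - deploy

test:
  stage: test
  script:
    - npm test

deploy_production:
  stage: deploy
  script:
    - ./deploy.sh production
  environment:
    name: production
```

The dashboards-as-code equivalent shows up later in the demo.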
I want to showcase our Monitor stage overview, which is really there to highlight that we're focused on not just building an observability system, but also having the ability to respond to and resolve what it surfaces. So you'll see features and functions, which I'll show, around alert and incident management and response and triage that are really some unique capabilities only GitLab can provide, and we're also very focused on a couple of key workflows.
The first is this triage workflow: developers are frequently asked to be on call and respond to operations alerts and incidents, and we really want to make sure we have a cohesive experience that allows them to quickly triage that alert and/or incident and resolve it quickly. We also have a really heavy emphasis on dogfooding; I can't emphasize that enough.
So we have a very high internal bar for ourselves, to make sure that our features and functionality are not just things a developer would use in a toy app, but are things we would actually use to support our own production environments. When it comes to what's next: I mentioned we're focused on dogfooding. We're really focused on dogfooding of dashboards, incident management, and alerts today, but we'll move on to include dogfooding of our logs and traces and other observability information in the future.
I wanted to touch on our maturity. As I mentioned, this is a fairly new investment for GitLab, and our Monitor stage has really started to come to fruition; we've gotten many of these categories to viable or complete recently. So I want to highlight that we have an aggressive plan to be a key player in this space.
We might not be there completely today, but we're moving there rapidly. One of the specific questions this prospect had asked was around logging: is GitLab a replacement for an existing ELK stack deployment? GitLab's Log Explorer functionality, which we've recently released and I'll showcase to you, is based on ELK, but today it requires you to have deployed ELK into a Kubernetes cluster that's attached to your project, so that GitLab is aware of it.
We have a future issue for bringing your own ELK stack, so you can point our Log Explorer at it and get that kind of integrated experience, but that is not something we're capable of today. It is something we plan to work on as we move our logging capabilities to complete. Right now we're really focused on showcasing more logs in that integrated experience, including from your GitLab Managed Apps, and it's a recognition that we're focused on building this integrated experience for Kubernetes-based applications as a first priority.
Here's the demo part. I wanted to showcase this project that the Monitor team actively uses to test their features and functionality; it's our showcase for the Ops features. It's a relatively simple Vue.js app, and I'll show you the home page. This is the live production site, a demo app that allows you to generate errors and generate logs. I'm just going to click the "generate logs" capability.
I've also generated some recent errors. So this is a view into the consolidated logs from the ELK stack deployed to this cluster, and they're actually specific to this application environment: I'm in this project, Tanuki Inc., in the production environment, and because it's an attached cluster we're aware of all the pods running for this environment, so we're automatically scoping these logs to it. You can use search, and we'll continue to add other filter capabilities.
You can filter by pod name, and we'll enable improved search capabilities in logs, including the ability to target specific time frames for your logs. Right now you can do this with a custom range here in the UI, but we want to quickly allow users to jump to specific time frames in logs, and I'll show you the kind of workflow we're talking about there.
We also have this ability to add metrics dashboards, and if I look at the repository for this project, you'll see that in the .gitlab folder there are dashboard YAML files that let you define the specific dashboard views you'd like. These are all defined as code and show up in your GitLab UI. So if we go back to Operations and Metrics, we can see the default dashboard on the production environment, looking at the last eight hours.
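For a sense of what those files contain, here's a minimal sketch of a `.gitlab/dashboards/*.yml` metrics dashboard. The group, panel names, and PromQL query are illustrative, not copied from the demo project:

```yaml
# .gitlab/dashboards/anomalies.yml: a dashboard defined as code, versioned with the app.
# The group, panel, and PromQL query below are illustrative.
dashboard: 'Anomalies'
panel_groups:
  - group: 'HTTP'
    panels:
      - title: 'Error rate'
        type: 'line-chart'
        y_label: 'Errors / sec'
        metrics:
          - id: http_error_rate
            query_range: 'rate(http_requests_total{status=~"5.."}[5m])'
            unit: 'errors/s'
            label: '5xx errors'
```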
It has system metrics for Kubernetes, but then I also have other dashboards that I've specifically defined, like anomalies and common metrics. I can duplicate dashboards, and I can star specific dashboards so they show up as highlights when I look at this list; you can see I've starred system metrics. We also automatically add response metrics, and you can set up alerts on those response metrics. This one is actively firing because I hit that error button, and those firing alerts, if they last for more than five minutes, create alerts and incidents by default.
We can also add custom metrics; here's a custom metric called "up" that I added. I mentioned that when you have firing alerts, they show up in GitLab as an alert. Here's our alert view, and this is relatively new; I think it shipped in the last release, a month ago, but you can see where we're headed. You can pull up the alert to see the details that came in from the alert specifically, and you can get an overview.
You can triage or adjust the status, as we saw on the last screen, to say it's acknowledged or triggered or resolved; in this case, it's acknowledged. You can create an issue from the alert if there's no issue created, but in this case we had already created an issue for the alert, and here it is. When I created this issue, I could embed a Zoom link, and that automatically allows me to collaborate with my team on responding to this incident.
You can fully customize these templates so that when an incident issue is created, it tags or assigns or notifies specific team members, right in the issue template that you choose to use to create incidents.
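A minimal sketch of what such a template could look like, as a file under `.gitlab/issue_templates/`, using GitLab quick actions; the label, assignee, and Zoom URL are placeholders:

```markdown
<!-- .gitlab/issue_templates/incident.md: hypothetical example -->
## Summary

<!-- Alert details get added here when the incident issue is created. -->

/label ~incident
/assign @oncall-engineer
/zoom https://example.zoom.us/j/123456789
```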
You can have a dialog and share charts. So I shared this chart: hey, maybe it has something to do with this. This is a custom chart that I embedded; it displays live, inline, with up-to-date information (from about a minute ago). I can collaborate on what we think the cause of the issue is.
Maybe it's, you know, the fact that we have this "generate errors" button right there on the home page. You can confirm that that's the case, and we've got that confirmed. You can have an improvement thread and say, well, we had this incident, what are we going to do to prevent it from happening again? And you can showcase and track the progress of actually ensuring it doesn't happen again by attaching merge requests. In this case we already attached a merge request, saying our follow-up step should be:
we should remove that "generate errors" button. I created a merge request for it, attached that merge request, and said this issue is going to close when that merge request merges; we're not going to close this issue manually. So I created this merge request, it removes that "generate errors" button, and when it merges (though I don't actually want to merge it right now), it will close that issue.
I can also see my full list of all the incidents, whether still open or closed; here I've used a filtered search to show just the incidents. These are all automatically created by the GitLab alert bot because an alert triggered them. I also want to highlight that GitLab has a generic alert endpoint for adding items to this alert list.
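A sketch of how that endpoint is typically called; the host, project path, authorization key, and payload values below are placeholders, and the real URL and key come from the project's alert integration settings:

```sh
# Hypothetical host, project path, authorization key, and payload values.
curl --request POST \
  --header "Authorization: Bearer <authorization-key>" \
  --header "Content-Type: application/json" \
  --data '{"title": "CPU saturation on web-01", "severity": "high", "monitoring_tool": "my-apm"}' \
  "https://gitlab.example.com/tanuki-inc/monitor-demo/alerts/notify.json"
```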
So if you're using a separate monitoring tool, you can point your alerts at GitLab and then do your alert triage and incident management right here in GitLab. It's a really powerful pattern for enabling developers to respond directly in the context of the tooling they have. Think about the power of that. I didn't showcase this, and I don't know if this application has a recent deployment, but you can add annotations on your charts, so you're responding to an incident and you see an annotation that says:
oh, there was a recent deploy, and here's the commit that was deployed right before this alert triggered. You can quickly look at that chart, jump in, find the developer who performed that commit, and ask questions. It enables a level of collaboration with the development team, who likely had some involvement in the cause of the incident, or at a minimum can help you triage the incident directly as part of your incident response.
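As a sketch of how a chart annotation like that could be created programmatically: the API path and parameters here are an assumption on my part, and the environment ID, token, dashboard path, timestamp, and description are placeholders:

```sh
# Assumed API shape; environment ID, token, dashboard path, and values are placeholders.
curl --request POST \
  --header "PRIVATE-TOKEN: <your-access-token>" \
  --data "dashboard_path=.gitlab/dashboards/anomalies.yml" \
  --data "starting_at=2020-04-20T08:00:00Z" \
  --data "description=Deployed commit abc1234" \
  "https://gitlab.example.com/api/v4/environments/42/metrics_dashboard/annotations"
```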
So I'm really excited, and I think GitLab is fully capable of being that kind of replacement for your incident management and incident response workflow, even if you might use a separate tool for your application performance monitoring, as long as you point your alerts to GitLab. I'm going to stop sharing and just briefly say I am available to chat anytime. I'm @kencjohnson on GitLab, or kenny@gitlab.com by email; feel free to ping me.