From YouTube: Ops Section AMA - Monitor Group
A: Hello, everyone, welcome to the second Ops Section AMA. We are here with the Monitor group. Sean, you had the first question, and I see you're on. Saw Steve's name and thought it was Sean, sorry. Do you want to maybe verbalize, or I'll verbalize for Sean and then you can answer, Sarah. Sean asked: which markets should we be prioritizing in order to increase user count? There seems to be a renewed focus on SaaS, so is the idea to build and learn off of SaaS users, even free ones, before refining products toward an enterprise market?
B: In my second bullet point, I've had a handful of recent conversations with analysts that make me think we are actually closer to being an enterprise solution than we originally thought. Many of our enterprise customers only need a good-enough or, quote unquote, "get the job done" solution. They don't need all the bells and whistles that Opsgenie or PagerDuty offer. I don't have an understanding of the size of our GitLab customer base that fits this bill, but I'm looking into it. I think that answers the whole question.
A: Yeah, cool. And I remember you posting about that update from the analysis. Yep, exciting revelation, I think.
B: Thank you, very exciting. It's very motivating for the team. Obviously it's nice to see the tool you're using solve problems for people, and so a few insights into that: I believe a lot of the growth is due to introducing the type selector on the issue creation form. Now users see the type selector, and those who are creating something in the realm of a bug, service disruption, or outage now see incident as an option, and so it's made it a lot more discoverable. It's also led to people realizing: oh, now we create incidents; well, there's an Operations section; oh, there's a list for incidents. So that's a hypothesis. I need to do a little bit more behavior tracking on how people are working through the product, but adding that type selector has generated a lot of discoverability for us. And Anup, you had the next point.
E: I'm just gonna... yeah, that's great, I agree. And do you have any thoughts about other places where we could have that kind of discoverability? It seems like what's working really well there is that someone's in a Plan context, and then that pulls them into Monitor. Just curious if we've thought of any more examples that we might want to do like that.
B: That is a great question, which has been a topic of conversation for myself, Kenny, Kevin, and the team. We're in the midst of formally doing this discovery, and so in the last two weeks Amelia and I went through, and we have this whole document if people want to look at it. We did an evaluation of all of the stages in GitLab and then all of the categories, and essentially looked for somewhere where the job to be done is similar to maintaining a service and responding to something. Where that's a little bit too far away for us to pursue some sort of opportunity, we're looking for low- to medium-hanging fruit, and we came up with four: within Verify, within Release, within our Plan team, and there's one more that I'm forgetting. But I have the issue, which I will link here, and then we're going to run problem validation cycles on the jobs to be done and the personas between those categories to come up with it.
A: Yeah, what's your favorite cereal? You responded directly, but this was for everybody; this is the AMA meant for the whole Monitor group, so I meant to ask everybody on this call.
F: I hate cereal. I don't understand why people eat it for breakfast. Seems like Sam shares the sentiment. All passionate.
A: All right, thanks for your input, and Anup, Sarah, and Justin, specifically, for liking cereal. My next question, Sarah, was kind of about that discoverability you were talking about. I read your answer, and so I want to add a little bit more detail. What I was wondering is: are there user behaviors, kind of before they've touched incident management, that you would say are a good indicator that they would want incident management?
A: The one that came to my mind was something like: are they doing deployments with GitLab? Maybe that would mean they would also be interested in doing incident management. But I'm wondering if there are things like that where you would trigger the recommendation engine to say: hey, have you ever considered GitLab incident management?
B: Yeah, great question. So the two that I provided there, which are very specific, are in the realm of people who are using the Plan stage for issue tracking and might find value in incident management. But other areas that I would look at are in the Manage stage, in value stream management: are we seeing people create charts so that they can understand analytics for, again, issues with the incident label? Is that something they're doing within GitLab? More specific examples would be Verify, for continuous integration, or Release, for continuous delivery: are we seeing people create issues for problems within the pipeline? What does someone do in GitLab, click on, create, interact with, when a deployment fails?
A: I like that point, yeah. We know about deployment failures, and we can infer what people are doing as a result of that deployment failure. And then, you know, if they're not connected, or we can't sense or have some mechanism of knowing whether they did it in another system, maybe they would appreciate ours.
A: Yeah, my next question was something that you and I have chatted about; I'm interested in the kind of broader group's thoughts on this. If you're familiar, the Andon cord was originally a Toyota manufacturing methodology where, if there's a problem in the production line, you pull the cord and everything stops, and you don't restart the line until you fix the problem.
A: Even if that problem was, like, one person needing to go get a coffee or something, you still pull the line, to keep the whole train moving and to optimize, to learn from any mistake that might have caused that outage. In my head, there's something similar to that in our continuous integration process where, if master is red, you would want to stop. That might be something where you would create an incident.
F: They can pull the Andon cord and get involvement from higher-ups to let them know something's wrong, and then the idea is that, together, we'll figure out what's wrong. I didn't find that to work particularly well, because, at least in the year or so that I was there, no one ever pulled the cord. I think it's a self-preservation thing: you don't want to say something's wrong. But I think, Kenny, what you're suggesting here is: does it make sense for organizations to stop if master's red, with an incident, or at least something highly visible enough that someone is responsible to jump on it? I really like that as a concept. I think we talked about it before, where a red master should trigger an alert, and the logical next step to me is that the alert can automatically be turned into an incident, where somebody can stop to take a look at it.
D: Yeah, I'm happy to go into details at another time, but when I was in engineering we had implemented this at Cisco. It worked. We discovered some exceptions we had to make: if there is a P1 issue for a customer review (this is self-managed, worldwide), we would split a branch and go fix that, instead of waiting for the main line to get fixed. It made all of us better about helping each other when issues came in, because a lot of times with complex code you change something and then something else breaks somewhere totally different that you just don't have any idea about, and so we got connected better with each other and learned about the product stack much more quickly.
E: You know, I just think it can go both ways too. I've seen a pattern in organizations where, when you have an incident, you don't want to deploy; that's when you stop the train. And if you don't have that, you can run into situations where you're firefighting something and then, you know, the auto-deploy starts and makes a bad situation worse. So it seems like the signal could go from CI out into incident management, but it could also be compelling, maybe, to go the other way.
A: Yeah, great point. A lot of the genesis of this came from a discussion that I think Sarah and I had about how to turn incident management on by default. I think that if you pulled on this thread, you could say that every CI project or user became an incident management user, by saying we think it's a best practice for you to create an alert or an incident.
B: Urcello certificate expirations to an endpoint, and it is triggering incidents. These are low-urgency but need to be fixed in a certain amount of time, and that's fantastic progress, so every team is using it. I have a plan with Brent to continue iteratively working on higher and higher severity alerts to get them dogfooding more, but this is the first step, which is awesome. And then, do you want me to voice over the other points on supporting the dev team quickly?
B: Yeah, thanks for the question. I do not have dollar amounts for this; I'll need to make an estimate, and when I do I can add it to the direction page. I can give you insight on how I see the market performing, though. So, five years ago or so, we started to see an increase in the popularity of three players.
B: PagerDuty was already well established at that point, but there was PagerDuty, there was Opsgenie, and there was VictorOps, and they were competing for different pieces of the market. Then, two-ish years ago, there were two notable acquisitions: VictorOps was acquired by Splunk, which is not a workflow tool in and of itself but does enable a ton of workflow, like arbitrary workflow customization, and Atlassian acquired Opsgenie, and Atlassian is a well-known workflow tool as well, around that same time.
B: Great, okay, sorry. The incident management market broke off from ITSM, and now it's kind of being absorbed back into ITSM, as people realize: we don't necessarily need a really fancy proprietary tool to do this; we can actually use our workflow tool to do this.
B: A lot of progressive companies do want a proprietary tool such as PagerDuty, which is why it's still doing pretty well, but I think we're seeing the shift toward working your incident management processes into just a custom workflow tool, and that being a just-good-enough solution. So I'm gonna let Kevin take D, as I voiced over my B and C. Sure.
F: The number, four, that I listed here is from some research I read a while back, which is different from the number listed by Kenny, and so it depends on how they're defining it. But if you look at the ARR of the largest companies: Datadog's ARR for 2020 is roughly $600 million; New Relic is in the same range; Splunk is somewhere up there. If you take the other log companies and all the other APM companies, it adds up quickly, to above $2 billion.
F: What's driving this, I think, is likely to continue, because every company is becoming a software company, and the general business model in the space is usage-based. If there's more software, there's going to be more telemetry, and this market will continue to grow because of that. When thinking about incident management, if we just look at PagerDuty as a proxy, the previous quarter they earned $50 million, growing at 25%.
F: The other thing that's interesting is that PagerDuty goes after the enterprises, versus, like, the Opsgenies of the world. Maybe I'll dig into their quarterly report to see if they state anything about how revenue growth is split for their business.
A: Thanks, Kevin. I just wanted to add: there was a reference to the new TAM and SAM numbers. That spreadsheet is actually in our product investment page now, and it is sliced to only the, like, IDC-defined DevOps market, so they say, of those tools, which ones are being used in a DevOps application. That's why the broader APM market is significantly larger.
E: I was wondering, Sarah: you mentioned a couple of times that there's this motivation, or kind of story, from analysts that people, or enterprises even, want fewer bells and whistles in their incident management. I was kind of curious: is that because they just want lower cost and don't need it, or is it that they want a more consistent toolchain, you know, the integration?
B: Got it. This is not the correct answer to the question, so I'm deleting it. Okay, I think this is going to be true with any software product that you use: software products try to be innovative, and a lot of the innovation is based on you only using that product and giving them all of your data, and then it does lots of cool things.

They are innovating and trying to do things such as, you know, tying delivery insights to the triggering of alerts and incidents, and they tell you: to be able to use these product sets, you need to give us as much information about your infrastructure as possible, you need to be operating on microservices, and we need to have access to everything that's going on. And if you just don't have that data, or you haven't broken things into microservices, or you don't have the time to configure everything perfectly.
B: Then the bells and whistles don't work for you, and you don't need them. Every product wants you to do that, which is why it's so important that we make things work out of the box and as easy as possible. But, so, is it cost-based? Yeah, someone's not going to pay for something that doesn't work for them or that they don't have the time to put into making it work.
B: Yes, I will. A little clarification on question eight, Anup: I'm assuming you mean the incident management or the Monitor direction page. The Monitor direction page refers to principles; is that the direction page you're talking about? You are on mute.