Description
Andi Grabner from Keptn and Mike Kobusch from NAIC give us a look into how to automate performance engineering by integrating Keptn and LoadRunner.
Learn more: https://keptn.sh
Get started with tutorials: https://tutorials.keptn.sh
Join us in Slack: https://slack.keptn.sh
Star us on GitHub: https://github.com/keptn/keptn
Follow us on Twitter: https://twitter.com/keptnProject
Sign up to our newsletter: https://bit.ly/KeptnNews
A
Well, thanks for being here. We're doing this every month: we invite a Keptn user to our user group meetings to really show how they get value out of Keptn, and I know we will see a live demo from you. We'll hear some background from you: who you are, how you got introduced to the Keptn project, and how you're using it in your day-to-day life, Mike.
A
I would like to ask you to maybe go one slide forward, because for those folks that have missed what we have been doing in the past, one slide forward would be great. Sure, perfect. A couple of weeks ago we did a 10-minute walkthrough where you showed me how you're using Keptn, or rather Cloud Automation, which is the managed version from Dynatrace, to automate your performance analysis.
A
I know we have a couple of slides with the background and then the demo, but what you showed there, as you just pointed out, is that you built an SLO dashboard that included all the metrics. And if you do one more click, I think we are done with the animation. Then I'm shutting up and passing it over to you. Keptn really automated the analysis, correct? So maybe I'll quickly pass it over to you, because this is also what you said: Keptn saves you hours.
B
Keptn saves me hours. As performance engineers, we know that after we run our scripts there are thousands and thousands of data points to look at, charts and graphs, and each script could take anywhere from half an hour to an hour to analyze.
B
That all takes time, but with Keptn and Cloud Automation we can take that analysis down to just seconds, and then we can provide project teams a heat map like we have here, showing them how their runs look in a color-coded heat map. If somebody opens this and sees a lot of greens, they're going to say: oh great, this release is good. If they see a lot of reds or yellows, they're going to say: oh, wait a second.
B
We
may
need
to
take
a
step
back
so
through
through
our
dashboards
and
knowledge
of
kpis
and
knowledge
of
dynatrace
and
our
applications.
We
can
build
out
this
heat
map
and
make
things
a
lot
quicker
for
us
in
the
long
run.
B
We can build out these dashboards with all our KPIs and just let Dynatrace and Keptn do the work for us, instead of us manually going out there and doing all the work. So this really is going to speed things up and change the way we as performance engineers analyze data in the future, hopefully.
B
And this is just another sample of my dashboard; this is kind of what a tile inside the dashboard looks like. We have a criterion there that just says: okay, pass if I'm less than or equal to 5.9. We'll go through this a little bit later: we can weight different tiles by how important they are to our data analysis.
B
So I'm going to go to the next slide: who I am. Like Andi said, I'm Mike Kobusch. I don't really have a technical background; I have a degree in biology and a minor in chemistry. Out of college I started with a company doing manual testing, and we would take three to five months manually testing an application for just one release, and it took six or seven of us to do that. I thought, man, this is crazy, so I started getting into automation.
B
So I brought automation into this company and automated all these test scripts, and I got testing time down to just two weeks for one person. I was like, well, that's a lot better; now I have more time to do stuff. But I kind of got bored with the automation work, and a supervisor approached me and said: hey, would you like to be a performance engineer? I said, well, yeah.
B
That sounds cool. So about 12 years ago I started as a performance engineer and just kept learning. I got a mentor and started working with him, and I would learn and learn every day until I became comfortable enough to go off on my own. So I moved to the company that I'm at now, and I'm doing performance engineering for them, but I'm still learning every day. And data analysis is just the longest part of all of this.
B
Scripting is pretty easy, gathering requirements is pretty easy, but the data analysis takes a lot of time, and I think if we can automate that process, we're going to have more time to do other stuff, like other technologies, and just grow this industry. Andi and I started with Keptn 0.7.3. I heard about Keptn a couple of years ago at Perform, and I was thinking: man, this could really help me with my data analysis.
B
Because at this company I have about 80 applications that I test, and if I'm testing five or six at one time, there is no way I can do a data analysis for all of them at the quality I expect of myself. So I was thinking: man, Keptn could really help us. We started with Keptn 0.7.3. I got on a call with Andi and a couple of other people one day to install Keptn.
B
It wasn't an easy process. I had to get multiple teams together, like security and network and middleware. I'm not technical, so I had to get somebody here to install Keptn, and we went through this process, and it probably took us two and a half hours to get Keptn installed. When that was done, I was like: oh my gosh, there's no way I could have done that on my own. And I'm probably like a lot of you:
B
I don't have access to all the things that I need, just due to security reasons, so you have to get a lot of people together to get Keptn going. But once we got it going, it was good. I had to start building out dashboards and things, so I got that going. It was a slow process, but hey, Keptn was up and running, so here we are, off and running. And I think Andi kind of saw, like, oh my gosh.
B
So I think he contacted me a couple of weeks later and said: you know what, we're working on this Dynatrace Cloud Automation preview; it's going to make things a lot easier for companies to install. I was like: okay, yeah, that sounds great. So then they got this Cloud Automation going, and Andi sends me an email and says: okay, I have Cloud Automation, here are the instructions to set it up; I'm going to contact you in 30 minutes and see how it went and how you're doing. And I was thinking:
B
Oh
my
gosh
wait
a
second.
I
don't
think
I
don't
think
I'm
gonna
be
able
to
do
this
in
30
minutes
because
it
took
us
two
and
a
half
hours
and
like
five
people
last
time
I
opened
up
his
email
started
reading
through
it
and
I
think
in
probably
10
minutes.
I
had
cloud
automation
up
and
running
on
my
system.
B
I had an evaluation going even before you contacted me. Cloud Automation just makes setup so much easier, and now Dynatrace is hosting it, so I don't have to worry about where my storage is or how to keep upgrading it, because Dynatrace is going to do everything for us.
A
To chime in, thanks, first of all, for all the nice words. I want to just recap for people again why Keptn is challenging for some organizations: Keptn runs on a Kubernetes cluster. That means, depending on who you are as an organization, how easy or tough it is for you to get a Kubernetes cluster where you can install software, and whatever security restrictions you have, it can be a matter of minutes, but it can also be a matter of hours or sometimes even days to get software installed.
A
I would say that's not necessarily just a Keptn issue; it's the same in general if you're running software on premises. And we at Dynatrace, who are heavily involved in pushing Keptn open source further, then also asked: what can we do for our Dynatrace customer base to provide Keptn capabilities to them without the pain of running the software, because we provide Dynatrace as a service?
B
Great,
it
was
great
and
like
I
was
getting
ready
to
say
you
know,
analysis
time
goes
from
hours
to
seconds
and
I
think
I
gave
this
kind
of
demo.
Captain
cloud
automation
demo
to
my
cto
and
I
said:
look
I
usually
spend
hours
doing
this
now
now
it's
seconds
so
now
I'm
going
to
spend
more
time
fishing
instead
of
working.
B
I
don't
know
if
he
liked
that
idea
or
not
he
kind
of
just
chuckled,
but
I'm
gonna
say
he
liked
it
and
I'm
listening
more
sign
fishing.
B
So
yeah
hey,
let's
just
let's
just
jump
into
a
live
demo
and
kind
of
see
what
it
looks
like
on
my
side.
Oh
sorry
about
that.
Let
me
get
move
this
this
thing
out
of
the
way
here
and
we'll
go
to
cloud
automation.
B
So
this
is
what
this
is.
What
captain
looks
like
so
here's
our
heat
map-
and
these
are
different
releases-
that
I've
ran.
So
you
can
see
you
can
kind
of
you
can
tell
captain
or
when
you
trigger
an
evaluation.
You
say:
okay,
I
want
this
to
be
my
july,
build
or
my
august.
I've
run
them
a
couple
times
so
right
now
you
can
see
how
this
screen
is
refreshing,
so
just
bear
with
us
during
this
performance,
because
there
is
a
new
release
of
kepnum
coming
out.
I
think
release
9.9.x
9.1.
B
I
think
it
had
been
released,
but
I
think
then
dinotrix
pulled
it
because
there
was
a
couple,
a
couple
of
glitches
in
it
that
they
really
wanted
to
to
work
out,
so
let's
just
test
them
to
dynatrace
and
how
they
want
just
quality
to
be
top-notch
every
time
so
yeah.
So
we
have
a
little
refresh
issue
here,
but
we'll
work
through
it
and
just
yeah
just
bury
it
with
me
when
this
thing
starts
refreshing
but
anyway,
so
you
can
see
you
can
see.
B
I've
had
previous
runs,
so
you
can
see
down
here
at
the
bottom.
It
gives
us
a
a
score
right,
so
I've
got
a
score
here.
Let
me
get
this
off,
so
my
first
run
I
had
a
score
of
94..
Let
me
get
this
off.
Oh
I
highlighted
it
didn't
want
to.
So
you
can
see
how
our
scores
I
had
a
73
score
on
this
test
run
so
hey.
It
turns
yellow
it
says:
hey
warning,
you
may
have
a
performance
issue
right,
so
we
can
go
back
and
kind
of
see
what's
happening.
B
So
in
this
run
I
can
see
my
cpu
time
kind
of
increased,
so
I
went
back
to
the
project
teams
and
they
said
yeah.
You
know
we're
doing
something
with
how
we're
fetching
data,
so
we
expect
cpu
time
to
increase.
So
that's
cool.
Well.
That
was
a
really
good
catch
by
kept
in
cloud
automation
and
my
dashboards
and
kpis,
and
how
how
I
have
it
set
up.
So
what
I
did
here
on
this
new
new
test
run,
I
don't
you
could
probably
see
it.
B
Let's see what happens. I had my pass criteria here at 1.8, and after talking to the project teams I increased it to 2.3 for my next run. So you can keep up with baselines, if you want to, by changing your dashboards, and I'll show you how to do that here in a little bit. So, this last test run.
B
So
now,
let's
look
at
like:
let's
look
at
dynatrace
here
and
our
dashboards,
so
here's
a
test
drive
that
I
ran
this
morning
just
to
get
ready
for
this,
so
here's
the
dashboard
that
I
have
set
up.
So
what
I'm
doing
is
I'm
so
I
have
some
kpis
that
I've
set
up
for
this.
I
have
response
times
right
here.
These
are
for
each
of
my
test
steps.
B
So
if
you're
familiar
with
maybe
load
runner
or
probably
even
neoload
or
jmeter,
you
have
request
attributes
set
up,
so
donna
tries
to
capture
them,
call
them
test
apps.
I
call
them
test
steps,
so
these
are
each
of
my
test
steps
like
logging
into
an
application
or
or
clicking
on
search
or
maybe
clicking
on
a
summary
or
or
maybe
doing
a
checkout
right.
So
those
are
my
test
steps.
So
each
one
of
these
tiles
down
here
is
a
test
step
and
they
have
calculations
built
into
them.
B
If I dig further into this tile, I can say: okay, this is average response time. My SLI here is the response time for a search, so I say: okay, I want my passing criteria to be equal to or less than 200 milliseconds, and I want you to calculate a warning score if it's equal to or less than 300 milliseconds. And I've weighted this as a 5.
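For reference, with the dashboard-based approach Mike is using, the Keptn Dynatrace integration reads the SLI name, pass/warning criteria, and weight straight out of the tile title. A minimal sketch of what the tile he just described can look like; the SLI name is an illustrative assumption:

```
Response time search; sli=rt_search; pass=<=200; warning=<=300; weight=5
```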
B
What that means is I'm saying: okay, Cloud Automation, Keptn, I like response times, so I want you to weight this at the highest, at five, and calculate the score based on that. And if we go back here, you can see that on some of these scores, like CPU, I have a very low weight, so it's scoring those as a 2.5 when passing. But if I weight one higher, it's scored a lot higher: the passing score for this one goes to 12.5.
B
So this is the way Keptn adds it all up and gives you a total result.
A
Maybe just for those that have never seen this before: what we always do is look at all the metrics. We call them SLIs, service level indicators; you put them on the dashboard and mark them as to be included in the evaluation, and then we always calculate a total score out of 100. By default we treat every SLI equally; every one has the same weight of one. But what you just did is say: my response time is five times more important than the default. And for that you can change the weight.
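As a rough sketch of the math behind that total score (this is the general idea of Keptn's quality-gate scoring, not an exact spec): each SLI contributes its full weight when it passes, typically half its weight on a warning, and nothing on a fail, normalized to 100:

$$\text{score} = 100 \cdot \frac{\sum_i w_i \, s_i}{\sum_i w_i}, \qquad s_i = \begin{cases} 1 & \text{pass} \\ 0.5 & \text{warning} \\ 0 & \text{fail} \end{cases}$$

That lines up with the numbers in the demo: if the weights across all tiles sum to 40, a passing weight-1 SLI is worth 2.5 points and a passing weight-5 SLI is worth 12.5.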
B
Or I say: oh, let's weight it as a two, because it's kind of an important metric, but I don't want you to totally fail my script if this one is out of whack for some reason. That's why I weighted it a two. So you can weight based on how you feel about your KPIs. Some people may say: okay, response time is probably not as important as memory if you're a Java shop or something like that.
B
Sorry,
let's
go
back
to
my
chart.
I
just
want
to
mention
one
more
thing
about
these
charts
go
back
to
my
tester
on
here,
so
I
have
another
another
tile
here
and
I'm
using
jvm
max
memory.
So
this
is
a
good
way.
So
not
only
can
can
you
analyze
data
and
use
kpi
life
response
times
and
cpu
time
and
database
times
this
statement
right.
You
can
use
this
as
like
an
application
configuration
check
right.
So
I
want
to
check
that
my
max
memory
is
where
it
should
be
right
up.
B
Do we have the memory that we're supposed to? And if it's more than 20% below my previous run, hey, warn me. Because we take these JVMs down and they spin up overnight, and if one didn't spin up correctly overnight and used the default setting, which I think was like one gig, and I've seen that before, then hey, warn me that I'm not running with my correct settings in this test run.
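A hedged sketch of how such a configuration check can be expressed, assuming the same tile-title markup as above and a criterion relative to the compared run, so that a JVM that came up with default memory fails the check; the SLI name and the exact relative-comparison syntax are illustrative:

```
JVM max memory; sli=jvm_max_memory; pass=>=-20%; weight=1
```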
A
That's actually... we talked about this when we were preparing for this, and I thought it was fascinating that you are really using some of these metrics as environment readiness checks. You should almost consider, in the future, running two Keptn evaluations: one before you run the test, to make sure your environment is properly configured. If Keptn then says the environment looks good, your JVM memory is good, there's no other ambient noise on the system, maybe some other traffic that is running, then based on that you say: now I'm running, let's say, an hour-long test. Because you should not find out only at the end of the test that the memory was set incorrectly; you should know this before the test actually runs. You can even add this check beforehand.
B
That's very true. So, let's just look at Keptn and how it's configured, real quick. What I have is projects: there's this default dynatrace project, and then I have each of my applications here as a project. And within these applications I have services; each service is basically a test script within my application. What Andi was just saying is: I could build an SBS service that has all my configuration checks, run that against Keptn, and then, if I get a passing score down here, go ahead and kick off my script. So yeah, that's a really good idea!
A
I want to add one more quick thing, because you just brought up a good point. There's a big misconception in Keptn: we have the terminology of a project, stages, and services, which makes sense. In this case you have an SBS project, you have a quality gate stage, and then you have services. But your services in this case are not necessarily microservices or applications; in your case they're actually modeled on your tests, because you run test scripts like revenue search.
A
Well,
we
called
it
a
service
and
the
initial
idea.
The
initial
intention
was
that
you
are
letting
captain
automate
sequences
for
individual
micro
services,
you're
not
constrained
to
a
service.
A
service
is
just
a
configuration
entity
and
you
can
really
then
do
things
like
what
mike
is
doing
analyzing
the
result
of
a
particular
test
run
it's
up
to
you.
What
do
you
do
right
exactly.
B
Yep
so
yeah
you
like,
like
you,
said
you
can
evaluate
services
in
your
production,
environment
and
qa
environments
right,
but
so
what
I'm
primarily
using
it
for
is
my
test
runs
because
that's
where
the
time
has
been,
you
know
analyzing
the
data.
So
this
is
where
I
want
to
kind
of
just
you
know,
speed
up
that
process,
and
so
we
have
an
overview
page
here
right.
So
I
can
see
like
oh
my
gosh.
All
my
scripts
are
actually
failing
in
qa
right.
B
So
I
mean
this
gives
you
a
quick
indicator
of
how
your
test
runs
are
going,
so
it's
a
really
good
snapshot,
yeah,
it's
cool,
and
then
we
can
click
in
here
we
can.
We
can
get
our
heat
map
and
you
can
send
this
to
project
teams.
You
can
get
to
see
what's
going
on
over
time.
There's
a
nice
little
dashboard
link
up
here.
So
if
something
is
wrong,
you
can
just
click
on
this
and
it'll
go
right
to
that
dashboard.
B
This
is
not
our
time.
Is
it.
A
B
Yeah,
this
is
the
time
that
I
ran
it,
but
anyway,
so
you
can
just
get
right
to
the
dashboard.
So
I
forgot
to
kind
of
mention
this.
This
kind
of
header
up
here
you
can
configure
this
differently.
So
we
have
we
we
say:
okay,
I
wanted
to
pass
if
we're
at
ninety
percent
or
me
at
seventy
percent
right
and
then
we
have.
This
compare
feature
right,
so
we
can
compare
results
with
the
last
passing
test
run.
B
We can compare with the last two, three, four, five passing test runs. That compare function is what I showed you with my JVM memory, where you say: okay, I want to be within, say, 20% of the previous runs. This is where it's going to compare to your previous test runs and say: okay, are we three percent worse? If so, fail us.
B
Let's just see how easy it is to analyze a test run. I ran this test run this morning. Usually my next steps would be: okay, let's dive into the data and start looking around; I'm going to look at response times, I'm going to look at network, I'm going to look at database times, yada yada yada. That all takes time. So let's see how quickly we can actually evaluate this data.
B
What I'm going to do is come into this Windows PowerShell. I've already authorized it today, just because I didn't want anything to go wrong. In Keptn, you just click on this, it copies the authorization command, and you paste it in here and you're authorized, so you can start triggering evaluations.
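The copied command is the Keptn CLI's authorization call; it has roughly this shape, with placeholder endpoint and token values:

```sh
# one-time per session: point the CLI at your Cloud Automation / Keptn instance
keptn auth --endpoint=https://<your-instance>/api --api-token=<your-api-token>
```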
B
So
here's
a
captain
trigger
evaluation
that
I
did.
I
don't
know.
When
did
I
do
this
on
the
night,
so
I've
been
I've
been
on
vacation,
so
I
haven't
done
anything
for
a
while.
So
let's
just
kind
of
go
through
this.
So
what
we'll
say
is:
okay,
I'm
gonna
type
in
I'm
not
going
to
type
all
this
in
just
for
time,
but
I'm
going
to
say:
okay,
let's
trigger
an
evaluation
on
my
project,
fbs,
which
you
can
see
is
right
here.
B
My stage, which Andi pointed out, is the qualitygate stage, and I ran this on my revenue search, so I don't really have to type this back in; the service that I want to trigger an evaluation on is revenue search. I have a time frame here of 73 minutes, and I do that because each one of my tests runs for an hour, and with ramp-up and ramp-down times it usually comes out to about 73 or 74 minutes.
B
So I just use a 73-minute time frame. I'm going to say: okay, let's start this evaluation today. What is today, the 21st? And I ran this at 7:14 my local time, but Keptn uses universal time, and I know that I'm five hours behind universal time, so that would be 12:14. And then I have these labels that I can put in; like I said, you can put release names.
B
You can put dates. I'm going to call this one... we'll just call it KUG, for Keptn user group. Then all I have to do is hit enter, and now my evaluation is done. Just talking through that, it took maybe a minute to do, but without talking through all of it I could have done this in about 20 seconds. So what I'll do is come in and refresh this page, and hopefully... we'll see... and here it is: KUG.
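Putting together the pieces Mike just walked through, the full CLI call has roughly this shape (project, stage, service, and label are the ones from the demo; the date is illustrative, and, as Andi notes a bit later, an explicit end time can be given instead of a timeframe):

```sh
# start is in UTC; the 73m timeframe covers the 1h test plus ramp-up and ramp-down
keptn trigger evaluation --project=fbs --stage=qualitygate --service=revenuesearch \
  --start=2021-09-21T12:14:00 --timeframe=73m --labels=buildId=kug
```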
B
So here's our evaluation for this test run.
B
You can see I've put in a couple of new dashboard tiles since my last run, but we got a score of 97. So I can say: okay, hey, this test run was better than the previous test run of 90, which was better than this one; this is my best test run yet. We do have a failing score here on memory, JVM memory used; my memory is way out of whack here, so it failed that one.
B
It gave me a failing score of zero for that SLI, but calculated everything else, based on weight, as passing. So yeah, everything looks good. That's how quickly you can trigger an evaluation, and you're going to save a lot of time. Probably the hardest part about this whole process is, if you're kind of like my company: nobody has ever sat down and said, okay, these are our SLIs.
B
We don't have SLOs, just because we're kind of a not-for-profit, but I think a lot of companies have missed that step of saying: okay, these are our SLIs for each of our applications. So we have to go back through Dynatrace and look: okay, over the last however many days or months, what are our average response times, or our 90th-percentile response times? What are we going to add as our SLIs?
B
What should our SLIs be, or our SLOs? And then build out these dashboards. This is the longest part of this project: building out these dashboards, and building them out correctly, with all your KPIs. Like you said, we started this a few months ago.
B
I've had a ton of projects that I've been working on, so I haven't built all these dashboards out like I want to, but any time I get 15 or 20 minutes I'm putting more tiles in, because I don't have all my KPIs defined in here yet. I still want to look at database times; I want to look at how many statements I'm sending to the database, because I don't want one of my releases to send, say, a thousand more database requests. That way we could say: oh gosh, we've probably introduced an N+1 problem somewhere. So I'm still building out these dashboards, but once you have them built out like you want them, clone them, and then just reconfigure them for your next test. That's how I've been doing this.
A
You did a start time and then a time frame setting; you can also do a start and end time if you know exactly when the test started and when it ended. Now, what I would assume as a next step is that you trigger this fully automatically from the script that triggers your LoadRunner scripts, correct? I mean, how do you trigger LoadRunner?
B
I just manually trigger LoadRunner; I'm actually using StormRunner. What I do is schedule scripts out based on the days that teams want them scheduled. For me it's pretty easy, just because I'm in StormRunner and it's easy to manually schedule them. But yeah, the next steps are to get this into a pipeline, like Jenkins or whatever, have Jenkins automatically kick off my scripts, and then hopefully use that to automatically kick off these evaluations.
B
That's down the line. Like I said, I'm in the middle of a huge project, so doing those things right now is not a luxury I have, but it will be; hopefully in the next three or four months I'll start building that out. That was one of my goals this year, but things get pushed back, so for now I'm still just triggering evaluations manually.
A
Yeah, I don't know LoadRunner as well as some other products I work with, but in case StormRunner has the option, at the end of the test, to trigger, let's say, an API call: you could, from StormRunner, once the test is done, just make an API call to your Keptn instance and fire the evaluation off automatically. There are different options on how you can do it, yeah.
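As a hedged illustration of that idea, assuming the evaluation endpoint that recent Keptn / Cloud Automation versions expose and an API token (the URL, names, and timestamps are placeholders):

```sh
# fire off a quality-gate evaluation over the just-finished test window
curl -X POST "https://<your-instance>/api/controlPlane/v1/project/<project>/stage/qualitygate/service/<service>/evaluation" \
  -H "x-token: <keptn-api-token>" \
  -H "Content-Type: application/json" \
  -d '{"start": "2021-09-21T12:14:00", "timeframe": "73m", "labels": {"buildId": "kug"}}'
```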
B
And I think that's available through StormRunner. It's just, like I said at the beginning, I'm not a thousand percent technical; I consider myself a Google developer. But that's something I'll research and get going. Hopefully in the December time frame I'll have some free time, and then I can start working on that stuff.
A
Perfect. Then, I think there was one question that came in, but I guess you've answered it. The question was, from a LoadRunner perspective: are you using LoadRunner on premises, or are you using LoadRunner Cloud?
B
Yeah, we're using StormRunner right now. I was kind of skeptical of StormRunner at first, but it's been an easy transition, and I like it a lot better now. It's still the same: you're still writing in VuGen, and all your analysis and the way you kick off scripts is all in StormRunner. And it integrates great with Dynatrace.
A
A kind of follow-up question was whether you then use the LoadRunner API to pull in the load test results. But in this case you are integrating your LoadRunner script with Dynatrace, so Dynatrace can actually calculate all of your metrics based on test steps, and then Keptn just takes the Dynatrace metrics into consideration here.
B
That's correct, yeah, through my request attributes and everything like that. But I am also using the Dynatrace API to inject data into StormRunner, so I'm doing it both ways; that way I can see Dynatrace metrics while I'm running StormRunner. But ninety percent of the time I use Dynatrace for pretty much everything.
A
Very good, yeah. Awesome. And then the projects you have in Keptn, the SPS one and the others, these are different applications?
B
Yep, these are just all my different applications that I'm testing. Like I said, I have about 80 of them, and I've only been able to work with about three of them so far, trying to get Keptn going. But yeah, these are all different applications.
B
These are all the different applications that I'm working with, yep, exactly. So I have one here; I built this dashboard right before I went on vacation, so I haven't really done an evaluation on this one. I've done one eval.
B
I did run a script this morning, so let's just go in here and trigger an eval on this guy, because I think it's done. What time did I start this? At eight, so universal time it'd be like 13:08... let's go back in here... eight, what, 8:35, so 13:35. Let's see what happens; let's just trigger an evaluation and see how quickly we can do this. So here's my trigger evaluation; I've got to change my project.
B
I've got to change it to my project name, Vision. My stage is going to be qualitygate, and we're going to change the service name to this search one.
B
Search, today at 13:35, and then let's just label it KUG for the user group again. Okay, there, my evaluation is done. I'll just refresh this, and here it is, right here. Now we got a 94 score; our memory has come down a little bit, so this time our memory is not a warning and not a fail. It was probably just using more memory on that last eval, and since I ran this one this morning at eight o'clock, there were probably not a lot of people on, so it came down.
B
We got a warning score of 2.78 instead of a failing score of zero, so now we're at a pass instead of a warning result. Quick and easy; I love it. It's amazing. Like I said, once you've built your dashboards and they're robust and have all your KPIs, man, the sky's the limit. Your eval times are going to go down. How long did that eval take me, 30 seconds, 45 seconds? Instead of an hour or two hours.
B
If something goes wrong, of course you're really going to have to dig in and start looking. But if everything comes up like this, I can just send this to the project team and say: we got a passing score of 94, we're ready to move on to the next stage.
A
If you think about it, the magic behind this is that you as a performance engineer build these dashboards. You know your apps, you know your metrics, because you know how your app should behave. Once you have built them, you can take this and shift it even further left into your pipeline, so that every time your developers commit, the build gets triggered, that test is executed, and they automatically get feedback. And hopefully, in 99% of the cases, everything comes back green, which is great, because you just saved so much time otherwise spent analyzing all these results. And in the one percent where it fails, you know you have a regression, and then you dig into the details by looking at the dashboard or whatever you do. Exactly.
B
And shifting left is going to be a big part of our process, especially in our cloud migration. I think the more we can shift left, the better our quality is going to be overall.
A
Yeah. I want to put one more thought in your brain, though. While this is great for performance analysis and shifting left, also think about this: once this version gets deployed into production, you can use the same approach to do production deployment validation. Because I assume, Mike, you have teams that deploy into prod and then sit there and validate: did the production deployment go well, yes or no? And I would assume they look at similar metrics.
A
Exactly. Good. Mike, can you flip quickly back to the slides to close it up?
B
We went over some best practices: how we analyze test runs, and how we use Keptn to check our build configurations. And Andi gave us a good idea: maybe we should build a dashboard with all of our configuration settings, run that first to make sure everything is validated, and then kick off our script. That was a good lesson learned there.
B
Quickly, yeah, there's one other thing; let's see if we can get to the slide here, hold on a second. I think the next slide we're going to show is how we're trying to make improvements to the layout of Keptn and things like that. There we go. So yeah, I'll let you talk about this a little bit if you want.
A
So the great thing about having people like Mike in the Keptn community is that we get a lot of feedback on how we can make people's lives even easier. I think one piece of feedback you gave is: hey, you would like to see a quick overview not only of your last test run but also a quick historical overview, instead of always having to go into the heat map. This is something the team is already addressing.
A
This is a mock-up from Johannes, who is becoming the leading PM on the Keptn open source project, so things like this are coming. Also, what you pointed out several times, and we all experienced it: some of the annoying refreshes will be gone. They are already gone in the Keptn 0.9 release, and 0.9.1, I think, was just released last night. That will also make it to the Cloud Automation instances; Cloud Automation is the managed offering we have at Dynatrace. So soon the refreshes will go away for you as well.
A
I initially wrote a lot of the integration from Keptn to Dynatrace, the dashboard parsing, and when I built it about a year ago I did my best to get to a quick result, but I'm definitely sure I didn't write the best code. A new team has now taken over this integration; they've refactored everything and improved a lot of the quality aspects.
A
So the new Data Explorer will be fully supported very soon. Today you have the pass and warning thresholds as text on a tile, but you will then also be able to use the nice color-coded thresholds in the chart itself to define them. More things are coming, and that's also thanks to you and a lot of people behind the scenes who are working on this.
A
With that, I wanted to say thanks, Mike. And I think Mark Tomlinson is on the call as well.
A
Yeah, I hope so too. But with this, I want to say thanks again. All of this has been recorded; we'll do some post-processing and then put it up on the YouTube channel. That may happen next week, because I'm relying here a little bit on Jürgen, who is doing all the magic work behind the scenes, and this week he's enjoying a little time off: he's building a house, it's getting into the final stages, and he needs to make sure the house gets built correctly.