Description
Following up on Automated SRE-Driven Performance Engineering, we will now focus on integrating automated performance tests and analysis into your continuous delivery pipeline. Keptn provides automated SLI/SLO-based quality gates as well as automated performance test execution. The tight integration with Dynatrace and the open APIs make it easy to bring this capability into your CI/CD tooling.
In this Performance Clinic, Andreas Grabner will show you how to Shift-Left Performance into your Jenkins pipelines using Keptn, JMeter and Dynatrace.
I'm Andy Grabner, self-proclaimed DevOps and ACE activist (ACE stands for Autonomous Cloud Enablement). You can reach me; I'm sure you'll figure out how. If you know my name, you probably know my email as well, and Twitter and LinkedIn work too. This is part of my Performance Clinic series. Every Performance Clinic is posted to YouTube, so you can watch the recordings, and you can also get the slides on Dynatrace University at university.dynatrace.com.
There you can not only see the video but also get the slides, because a lot of people keep asking: how do I get the slides? Just go to university.dynatrace.com, click on, I think, on-demand webinars, and that's where you see the list of every Performance Clinic, including a link to the slides.
Now, today's topic is shifting performance left into a Jenkins pipeline. However, while I'm using Jenkins today, everything I show you is also possible with any type of pipeline tool that you may use.
Jenkins is just, I think, very prominent out in the world, so I show it to you with Jenkins, but everything I do today you can also do with any other CI/CD tool or release automation tool that you're using.
Let me jump in. First of all, as a reminder: last week, or maybe it was actually two weeks ago, I did kind of a preparation for today's webinar. We talked about how we can leverage Dynatrace for SLI-based performance engineering.
For those that have watched it, you'll remember; in case you didn't, go to the blog linked on the bottom or to the Performance Clinic recordings. What we covered is how you can set up Dynatrace so that it monitors your hosts and infrastructure, and how you define SLIs, your service level indicators: service- and process-level specific SLIs, SLIs based on things like HTTP status (always great, especially for web applications and any type of API-driven system), and also SLIs for your throughput, load distribution and response time per test.
Last time we definitely focused on the case where you have a testing tool driving load. I showed you how you can augment these test scripts so that the tests pass the test name and the test steps on to Dynatrace, so that Dynatrace can give you metrics split by test step, plus some additional ones.
So this is what we did about two weeks ago; I walked through the whole process, and it's great to have these dashboards. As a reminder, there's a blog linked on the bottom, and in case you missed it and want to watch the recording, it's on YouTube: the bit.ly short link for the OneAgent tutorials gets you to the whole thing, or, as I mentioned earlier, Dynatrace University.
What we want to do today is take the lessons learned: we built dashboards and put metrics, SLIs, on them. As great as these dashboards are (because we are visual, we want to see things), today we want to convert these dashboards into an SLI file. An SLI file is something that our open source project Keptn can consume and then translate into queries to the Dynatrace API.
So today I'm going to show you how you can take a dashboard, or take metrics, and convert them into an SLI document, and then how you can also specify SLOs: what are your criteria, what's your objective when you're analyzing a timeframe, a test? Because again, what we want to achieve is this: whatever CI/CD tool you're using (I'm using Jenkins today, but this will work with any type of tool, and sorry if I didn't add all the logos; there are a lot of CI/CD tools out there, but I put a lot of them on the slide), these tools are deploying your builds, and they may or may not run tests. Then, at the end of the build pipeline, what happens?
A
Yes,
we
can
look
at
the
dashboards
that
we've
created
last
week,
but
instead
of
manually
evaluating
a
build
and
looking
at
build
one
built
to
build
three
instead
of
manually
kind
of
comparing
and
figuring
out
how
our
individual
builds
do
in,
which
takes
I
think
a
lot
of
time.
I
just
put
a
rough
estimate
between
30
and
60
minutes.
You
know,
obviously,
depending
on
how
large
the
test
is
and
how
savvy
you
are
with
the
performance
analysis,
but
this
state.
These are manual steps that a lot of people are still doing, but we want to automate that, and this is where Keptn comes in. Keptn can automate the evaluation of SLIs, these metrics, against SLOs, your objectives. The whole performance evaluation becomes part of the delivery pipeline, fully automated and brought down to a fraction of the time it may take you right now. That's what I want to show you today. All right, I actually have two major use cases.
If you have questions, put them into the question feature of GoToWebinar. I will try to keep an eye on the panel in case questions come in, to answer them right away; otherwise we'll keep them for the Q&A section at the end. So let me focus on how we can automate build approvals, because the challenge I've seen, as I said earlier: you have a build pipeline where you deploy and where you run some tests.
Typically you then have the approval stage, and it's often a manual approval stage, especially when it comes to the variety of tests that you're executing. Now, I know that a lot of you have fortunately invested in functional testing as part of the build. But if a functional test fails, the question is: what does this really mean? Is it good enough or not? And if it actually succeeds, is that good enough to promote to the next stage? Functional testing alone is often not enough.
That's why I know a lot of you have invested in including performance tests as part of your pipeline. There's a variety of tools out there; I just put a couple on the bottom: JMeter, Neoload, Gatling, LoadRunner, Silk. The challenge, though: it's great that we have these tools, and individually these tools do, I think, a good job, depending on the tool, of analyzing the test results and figuring out what changed.
What's the difference between builds? It's still a manual thing: you need to compare one build with the other. A lot of tools out there have great dashboarding technology for comparison, don't get me wrong, but it's still a manual process, and they're only analyzing the data that the load testing tool itself sees.
The next step, and I know this is why most of you are here, is that most of you have invested in including monitoring in your delivery pipeline: not just for production monitoring, but for everything that happens before production. But the feedback that we've received, whether you use Dynatrace, Prometheus or any other monitoring tool: there's so much data now, and how do I know that this data belongs to my test and not to some other test?
This is why I believe: if you are in a situation where you have invested in automated testing, that's great; now it's time to invest in automating the analysis of these tests, so we can cut down the time it takes to manually approve a build. So how did we solve this problem? I just highlighted that it takes too much time, so how can we automate it? I want to first start with how our solution was inspired.
It was inspired by Google's SRE practice, so I want to give Google credit. Google was not the first to come up with all of this, but they at least made it very prominent and very public. So Google talks about SLIs. SLIs are service level indicators; basically, an SLI is a metric, something that you can measure, which we can then use as the basis of an evaluation. For instance: the error rate of our login transactions.
When we deploy a build, run a test, and the tests hit the login page, then we want to know: what's the error rate? That's an SLI, and you should have multiple of those for the individual metrics that are important to you. Then SLOs, service level objectives. This is kind of a binding target of what you expect the system to do: how should the system behave in a period of time? We often hear about SLOs in production, where we say: within a 30-day period, certain limits must hold.
For example: we do not allow more than a two percent failure rate on login. That's your SLO, your objective. We can also take this to the testing stage, where the period is obviously not 30 days; the period becomes the timeframe of your test, maybe five minutes, maybe 10 minutes, maybe an hour or two. Essentially we can use the same concept of defining objectives: what are the objectives that you want to reach? And then, to complete the picture:
A
Sla
is
this
is
clearly
something
for
for
production
monitoring.
This
is
where
you
are
taking
the
SLO
concept
and
bring
it
up
to
the
business
level
and
say:
if
we're
missing
certain
objectives,
then
we
have
a
business
agreement
violation
and
so,
for
instance,
if
you
are
agreeing
to
your
business
stakeholders
that
your
system
is
up
and
running,
let's
say:
logins
must
be
reliable
and
fast
within
a
30
day
window
and
there's
to
be
up
and
running
and
fast
for
99.9
percent
of
the
time.
Then this is an SLA, and if you breach it you may even have legal implications. But this is just to give you an overview. For those of you that are new to SRE, there's a great YouTube video out there from Google that explains this, and I also borrowed this line from their slides, which I really like: SLIs drive SLOs, which in the end inform SLAs. Now, the great thing is:
We can use these concepts in pre-prod as well, as part of our delivery pipeline, and this is where my next inspiration comes in: Thomas, who is leading our performance engineering team at Dynatrace. For years he has been looking at key metrics in our performance test environment. Every day, when we deployed a build, he said: I'm looking at different metrics, and based on that set of metrics I come up with a performance signature, kind of a verdict. Are we good, are we not good?
A
Are
we
meeting
our
expectations
yet
suno?
Now
what
he
really
did
in
the
backend
fully-automated
after
redeployment
after
certain
load
was
was,
was
put
on
the
system.
He
then
through
just
some
custom
coding
that
he
did
he
reached
out
to
dynaTrace
because
we
monitored
dynaTrace
with
dynaTrace.
So
he
looked
at
a
list
of
metrics.
These
are
the
SL
eyes
in
his
case,
memory,
errors
and
so
on
and
so
forth.
So
we
had
here's
like
a
list
of
a
hundred
and
fifty
different
metrics.
He
looks
at
every
build
so
his
SL
eyes.
He also has SLOs: he specified either thresholds that he knew we're not allowed to exceed, certain throughput thresholds or the memory consumption that we expect, or comparisons against the previous build to do regression detection. And if you have these SLIs and their SLOs and then calculate the overall picture, that is his performance signature: a consolidation, an aggregation, of multiple SLIs to come up with a single answer.
A
Is
this
build
good
and
good
enough,
or
is
it
not
good
enough
and
that's
what
he
bought
me?
Also,
we
got
inspired
from
Thomas.
So
how
does
this
now
work
with
captain
so
with
captain
quality
gates?
So
Cap'n
is
an
open
source
project
and
it
can
do
many
many
things,
but
really
see
it
in
a
way
that
it
is
a
layer
and
automation
layer
on
top
of
your
monitoring
in
this
case,
hopefully
dynaTrace
right.
If you have Dynatrace, Dynatrace provides a lot of APIs, but Keptn is an additional layer, and one of the capabilities it provides is a quality gate capability: you specify your SLIs, the metrics that are important for you, in an SLI file, and you then specify your SLOs, your objectives, with each individual SLI listed.
A
Your
criteria,
including
a
total
score
on
what
your
total
achievement
should
be,
and
then,
if
you
say
captain,
please
do
my
evaluation
for
a
particular
timeframe
on
a
particular
part
of
my
infrastructure
on
a
particular
service
on
a
particular
app
and
here's
my
Salina
slow.
Then
cap
automates
the
process
that
you
used
to
do
manually.
It
reaches
out
to
the
different
tools,
in
our
case
most
of
the
time,
obviously
dynaTrace
so
reaching
out
through
the
metrics
API.
Dynatrace serves these queries; Keptn comes back with the values, compares them against the SLOs, and scores every single metric. Based on the total score it then knows: are we doing a good job, or is this a bad build? Depending on the scoring it says: yes, we are green, or, unfortunately, we are red. And again, because Keptn is an automation layer on top of Dynatrace:
A
You
can
trigger
this
from
any
type
of
system,
whether
it
is
your
your
common
line,
whether
it
is
through
an
API
call
whether
it
is
from
your
CI
CD
tool,
you
can
do
it
from
a
chat
pod.
You
can
do
it
from
anywhere,
so
hopefully
this
gets
it
across
I
always
use
another
example
for
to
show
you
how
this
works
build
over
build,
and
thanks
for
questions
that
are
coming
in
so
yes,
I
will
I
will
I
will
answer
this
in
a
second
and
I.
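To make the "trigger from anywhere" point concrete, here is a minimal sketch of the kind of start-evaluation event a CI tool could POST to the Keptn API. The event type and field names follow my reading of the Keptn API of that era, and the project, service, stage and endpoint values are hypothetical placeholders, so verify everything against your own Keptn installation.

```shell
# Sketch only: compose a start-evaluation event such as a CI tool might send.
# All names (project, service, stage, endpoint) are placeholders.
PAYLOAD=$(cat <<'EOF'
{
  "type": "sh.keptn.event.start-evaluation",
  "data": {
    "project": "perfclinic",
    "service": "evilservice",
    "stage": "qualitygate",
    "start": "2020-05-01T10:00:00Z",
    "end": "2020-05-01T10:05:00Z"
  }
}
EOF
)
echo "$PAYLOAD"
# To actually send it (requires a running Keptn API):
# curl -X POST "$KEPTN_API_URL/v1/event" \
#      -H "x-token: $KEPTN_API_TOKEN" \
#      -H "Content-Type: application/json" \
#      -d "$PAYLOAD"
```

The Keptn CLI offers the same trigger as a command (`keptn send event start-evaluation ...`), which is essentially a wrapper around this API call.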
Actually, let me answer one of the questions right now. Look at this example: we have SLIs. An SLI could be the response time of a service or the overall failure rate of a service, but because Dynatrace can give you the data, it could also be the response time of a particular transaction or of a particular test step. It could also be the number of database calls.
Other SLIs could be the number of service calls or the time spent on CPU. So every metric that Dynatrace can give you, either on a service level or, to answer your question, on a transaction level, where transaction means either an API endpoint or a test transaction. This is the stuff I showed last week in my SLI-based performance engineering tutorial. So you specify your SLIs, and then you specify your SLOs, meaning: what are your objectives?
Here you can actually combine hard-coded thresholds, where you know you're not allowed to go over a certain value (let's say 100 milliseconds response time is green, or 250 milliseconds response time of the service is yellow), with relative values. Relative means comparing against a previous build, or multiple previous builds, or a particular build in the past.
A
Maybe
your
last
release,
and
so
you
specify
that
audio's
is
loz
on
every
metric
and
you
also
specify
an
overall
score
because
we
look
at
every
metric,
we
rate
and
weight
every
metric,
and
then
we
come
up
with
a
score
between
0
and
100.
So
for
my
example:
if
you've
built
number
one-
and
you
run
your
test-
and
you
can
say,
captain
start
evaluation
for
my
project
for
this
service
and
here's
the
time
frame,
then
captain
goes
off
which
we
saw
the
metrics
rates
them
and
then
in
this
case,
looks
all
green.
A
It's
a
hundred
percent
bill.
Two
comes
along
next
build
here.
We
now
CEO,
two
metrics
are
actually
in
yellow
and
if
we
do
the
math,
it's
75
percent,
now
to
answer
one
question
because
it
often
comes
up,
it
is
the
you
can
also
weight,
metrics
or
SL
eyes,
so
you
can
have
certain
SL
eyes
having
a
higher
weight
and
you
can
even
define
key
SL
eyes.
So
if
a
certain
metric
or
SL
eyes
is
it's
very
key
to
you
and
if
that
fails,
you
can
also
see
the
whole
build
field.
A
So
we
have
built
this
into
our
algorithms
now
build
3
comes
along
to
my
example.
In
this
case,
the
number
of
database
calls
jumps
from
3
to
6,
but
here
you
can
see
in
order
to
consider
the
green.
We
only
allow
a
serial
percent
increase
to
the
previous
builds
and
if
it's
plus
one
from
the
previous
builds
its
warning,
but
it
actually
increased
by
by
three,
which
means
it's
completely
red,
which
means
we
get
penalized
for
it
and
and
then
build
four
comes
along.
Everything
is
green
against
we're
back
to
100.
A
This
should
just
show
you
how
this
works
by
looking
at
different
metrics
the
SL
eyes,
as
we
call
them.
I
really
try
to
also
start
the
terminology
here,
even
though,
in
the
end
it's
just
metrics
and
comparing
them
automatically
against
thresholds
or
against
previous
builds
and
I.
Think
that's
important
good.
Now
this
looks
nice
in
PowerPoint
tables,
but
it
also
looks
nice
in
the
captain's
bridge
which
is
our
UI.
So
you
will
see
it
later
on
every
time,
I
ask
captain
to
run
an
evaluation.
A
It
will
automatically
give
me
an
overview
of
every
single
SLI
of
every
single
evaluation,
I
ran
and
on
top
I,
get
to
score
and
I
get
the
actual
result
on
kind
of
aggregated
it
up
to
green,
yellow
and
red
and
I
also
get
the
option
to
say:
I
want
to
see
this
data
over
time,
and
now
the
important
thing
here
is
also
to
understand.
This
is
data
that
kept
Knicks
tracks
from
them
from
dynaTrace
and
stores
in
its
own
data
store.
We
are
extracting
one
metric
value
for
the
time
range
and
then
on
the
bottom.
On the bottom you can chart it from build to build, from evaluation to evaluation, and on the top the heat map condenses everything into something that makes it clear for us: good or no good. So how does this change the picture? With the old Jenkins pipeline on the top, we have a manual approval taking 30 to 60 minutes. Now look at what we can do with Jenkins and Dynatrace on the bottom.
I also always add Prometheus to the picture, because remember, Keptn is an open source project, so you could actually go to different data sources and do the same thing, or obviously Dynatrace can also ingest Prometheus metrics; whatever you want. But the goal here is your pipeline: if you're executing tests, the first thing you want to do is run your tests, and that part is outside of the scope of Keptn.
All right, so how does this work? I have talked for about 20 minutes; now we get into the fun part. Let me show you how this works in case you've never set up Keptn. I know some of you by name, so I know that some of you have tried it, and thanks for all the feedback on making Keptn even easier to install; there are now new ways to install it. So the first thing: what you need in order to install Keptn.
Remember, Keptn is kind of an automation layer on top of Dynatrace or whatever your monitoring tool of choice is. What you need for Keptn is either a Kubernetes cluster, or OpenShift, or a Linux machine where you can run things like MicroK8s or K3s; K3s is the variant that I want to show you and demo today. That's the prerequisite. You also need, in our case, a Dynatrace tenant, meaning you have Dynatrace somewhere.
That Dynatrace tenant is monitoring your environment: the environment that you run tests against, that your CI deploys into, and so on. Now, in order to install Keptn there are two options. You can install it, as I said, on OpenShift or on Kubernetes: you download the Keptn CLI and run keptn install, and this installs Keptn. Keptn itself has a lot of different components; it's very powerful and can do a lot of things, but you don't have to know everything.
What you have to know is this: after you install it, you get an API endpoint, so you can automate Keptn, and you get the Bridge, which is the UI. That's what's important. When you set it up, there's only one additional thing you need to do: run keptn configure monitoring dynatrace, so that Keptn is connected to Dynatrace through its API; you just need to provide the Dynatrace tenant URL and an API token. So that's the installation on Kubernetes or OpenShift.
What I'm going to show you live, from scratch, is how to install Keptn in about 80 seconds on a Linux box by using a new project that was just recently released. It's called Keptn on K3s; really cool stuff. Once Keptn is installed, we can work with it.
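For orientation, the setup narrated here boils down to something like the following. The script location and flags are my assumptions based on the keptn-sandbox "keptn-on-k3s" project the talk refers to; check that repository for the current invocation before running anything, and note the installer line is deliberately commented out.

```shell
# Sketch of the demo setup (not verbatim from the talk). Tenant and token
# values are placeholders; the installer call may differ by version.
export DT_TENANT="abc12345.live.dynatrace.com"   # your Dynatrace tenant (placeholder)
export DT_API_TOKEN="dt0c01.EXAMPLE"             # API token with metrics-read scope (placeholder)

# curl -fsSL https://raw.githubusercontent.com/keptn-sandbox/keptn-on-k3s/master/install-keptn-on-k3s.sh \
#   | bash -s -- --with-dynatrace --with-jmeter

echo "Dynatrace tenant configured: $DT_TENANT"
```

The two environment variables correspond to what the demo sets before running the one-liner; the optional flags mirror the Dynatrace and JMeter support mentioned in the narration.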
Now, Keptn has a concept of projects, services and stages, and I just highlighted here what normally happens when you are interacting with Keptn. In the demo I show you today:
Everything is fully automated thanks to the Jenkins integration we have, because we built a Jenkins library that automates all of that: it automates the creation of a project, it automates the creation of a so-called service definition, and it adds resources like the SLIs, the SLOs, your test files, whatever you have. It automates all of that for you. So, step number one: let me show you this live.
Let me go over; we're ready, here we go. I'm running on EC2, I think a medium-size EC2 instance with Amazon Linux, and the only thing I need to run (I know this is hard to read for you, but I will post it later) is a single-line command that downloads an install script.
What the install script is doing: it is now installing K3s on this Linux box, a lightweight single-binary Kubernetes, and on top of that it automatically installs Keptn. It also automatically installs support for Dynatrace; you can actually see it here in the output: enable Dynatrace support. I have set DT_TENANT and DT_API_TOKEN as environment variables; that's the way of connecting Keptn with Dynatrace. What else? I also enabled JMeter support, because one of the use cases I want to show you is that Keptn cannot only execute the quality gate.
Keptn can also execute tests for you, whether that is kicking off a JMeter test or kicking off a Neoload test (another test tool we have already integrated); I just enabled that support. If I look at the time: the whole thing should take about 80 seconds, and you can see it's now installing all the different pieces.
While this is running, let me quickly go back to my questions; I see a couple coming in. Hopefully I answered your question earlier about evaluating on a transaction level instead of a service level: yes, that's possible, and if the service contains multiple APIs, yes, we can do all that.
There is a long question (with a lot of empty spaces) on observability; let's tackle that one later on. Another question is: why is the PaaS token for Dynatrace required? The PaaS token is only required if you also want Dynatrace to monitor the Kubernetes cluster where you have Keptn installed. In my case I didn't specify it; I only set the API token and the tenant, no PaaS token.
In this case I am not interested in Dynatrace monitoring Keptn for me. If you're using Keptn end to end, where Keptn also deploys an application into a Kubernetes cluster, then Keptn has the ability to automatically roll out the OneAgent for you, and for that we need the PaaS token. Okay, so here we go. Please do me a favor, those of you watching live: as you can see, I'm temporarily exposing some security details, like my tokens and my passwords.
So please do me a favor and don't mess with my demo by typing these in. Let me show you what actually just happened. First, let's go to the Bridge. After Keptn was installed (and it really only took maybe eighty seconds to a minute), it spit out my Bridge URL, and you can log in with the Bridge username and password; I'll copy that, and here we go, signing in.
Here's my Keptn installation. Right now I have no projects; that's fine. What else do I get? I get my API endpoint, this one here, and in order to get to the Swagger UI I can append /swagger. Here we go: you can see that Keptn really is fully set up. I can get to both the Bridge, the UI (once we set up a project later on we'll see how that looks), and the API.
We discussed this the last time, in the last webinar: tagging is very important. Once Dynatrace is installed it sees a lot of things, and by extracting metadata, or by passing metadata to your deployed processes (what the environment is, which project it belongs to, what type of service is frontend or backend), Dynatrace can then do automated tagging.
In this case, for example, a process belongs to the test environment and to the test project, and then Keptn can reach out specifically and get metrics for exactly the processes that you just deployed. The application that we're using is my not-so-famous, or infamous, sample app; it's a Node app. If we deploy that app, it will automatically be monitored, and thanks to the metadata that we have on the process level, we can also tag services. All right.
So here I have my two service tags, perfservice and evilservice; these are the tags I'm going to use later on in my different projects. All right, it seems I had a short freeze on my end; hopefully the network connection has been re-established now. That cost us, I don't know, two minutes.
So, what I've explained (thank you): you need an environment that is instrumented with a OneAgent, with proper tagging on the process and service level, and if you run tests, make sure you have enabled the test integration so we can also get metrics on a test or test-step level. Okay. In my case, let me show you what I have in my environment.
From last week we built a couple of services, and there are actually two that we are going to test later on. One is this service here. Now, it doesn't matter how I deployed the service; what I have here is one tag called evilservice, because later on I will set up a Keptn project where I say: this Keptn project links to Dynatrace data, and the services that I'm interested in are tagged with this particular name.
So this is one of them, and the other one is this one here, tagged with perfservice. Just remember these; the important thing is that you need to make sure you have tags on your services. All right, so we got the app instrumented. The next thing: I told you we built these dashboards last time.
If you have built your own dashboards, whatever you put on them, now it's time to take those dashboards and create SLIs. What an SLI file really is: a list of metrics, with a logical name on the left and the actual Dynatrace query on the right. Let me show you what I have prepared for my example; it's not this one.
It is, I think, this one here, exactly. (By the way, all of these SLI files are on GitHub in the Jenkins tutorial.) So here is the file called sli-basic: I have five metrics, I give them a logical name, and, as you can see, on the right side is just a Dynatrace API metric selector. This one is the throughput.
The throughput query means: the total request count for an entity that has a particular tag on it and is of type SERVICE. I'm basically telling Keptn: I want to query a service metric called request count total, and I want to get it from the entity that has a tag with a particular name. This is a placeholder; I could also hard-code evilservice in here, or whatever tag you have. You can also combine multiple tags. You can do whatever you want.
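Based on that description, an sli.yaml along these lines maps the logical names to Metrics API v2 selectors with a tag placeholder. The concrete metric keys and selector strings below are illustrative, not copied from the tutorial; compare them with the SLI files in the Jenkins tutorial repository.

```yaml
# Sketch of an SLI file as described in the talk: logical name on the left,
# Dynatrace Metrics API v2 query on the right. Selector details are
# assumptions, not verbatim from the tutorial.
spec_version: "1.0"
indicators:
  throughput:        "metricSelector=builtin:service.requestCount.total:merge(\"dt.entity.service\"):sum&entitySelector=tag($SERVICE),type(SERVICE)"
  error_rate:        "metricSelector=builtin:service.errors.total.rate:merge(\"dt.entity.service\"):avg&entitySelector=tag($SERVICE),type(SERVICE)"
  response_time_p50: "metricSelector=builtin:service.response.time:merge(\"dt.entity.service\"):percentile(50)&entitySelector=tag($SERVICE),type(SERVICE)"
  response_time_p90: "metricSelector=builtin:service.response.time:merge(\"dt.entity.service\"):percentile(90)&entitySelector=tag($SERVICE),type(SERVICE)"
  response_time_p95: "metricSelector=builtin:service.response.time:merge(\"dt.entity.service\"):percentile(95)&entitySelector=tag($SERVICE),type(SERVICE)"
```

The $SERVICE token is the placeholder discussed here: Keptn substitutes it at evaluation time, so the same file works for any tagged service.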
Keptn has a concept of project, stage and service, and when I set up a Keptn project I can call my service evilservice, for instance, or perfservice or testservice or whatever, and Keptn will then substitute the placeholder. By working with placeholders here, I can reuse my SLIs much more easily. That's why. So this is my sli-basic: throughput, errors, response time. Let me also show you the other one that I'm going to use later; it adds test-level metrics.
I have the same metrics, but I added a couple of additional ones: the response time of individual test steps, such as invoke and echo. And again, to your question: these are so-called calculated service metrics that give me, for instance, the response time of a particular test step, in this case the test step invoke. As you can see here on the right, it's the same entity selector. Give me the metrics: these are the same metrics that I put on the dashboards last week.
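The test-step indicators just described could be added to the same indicators list. The calc: metric key and the dimension filter below are assumptions about how a calculated service metric for test steps would be queried; use the metric key and dimension name you actually defined when creating the calculated metric in Dynatrace.

```yaml
# Hypothetical additions for test-step metrics (calculated service metrics).
# "calc:service.teststepresponsetime" and the "Test Step" dimension are placeholders.
indicators:
  # ...the basic throughput/error/response-time indicators, plus:
  response_time_test_invoke: "metricSelector=calc:service.teststepresponsetime:filter(eq(\"Test Step\",\"invoke\")):avg&entitySelector=tag($SERVICE),type(SERVICE)"
  response_time_test_echo:   "metricSelector=calc:service.teststepresponsetime:filter(eq(\"Test Step\",\"echo\")):avg&entitySelector=tag($SERVICE),type(SERVICE)"
```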
Remember those? These are the load distribution, the throughput, the test-step response time, the failure rate; all of these metrics I can put in there. Now, if you want to know more about that query language (it's basically the query language used by the Metrics API), go to the Dynatrace Metrics API v2 documentation.
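For reference, a raw Metrics API v2 query like the ones behind these SLIs can be composed as below. The endpoint path and parameter names (metricSelector, entitySelector, from/to) come from the Dynatrace Metrics API v2; the tenant and tag values are placeholders, and the actual curl call is commented out because it needs a real tenant and token.

```shell
# Build a Dynatrace Metrics API v2 query (placeholder tenant/tag values).
DT_TENANT="abc12345.live.dynatrace.com"
METRIC='builtin:service.response.time:percentile(95)'
ENTITY='type(SERVICE),tag(evilservice)'
URL="https://${DT_TENANT}/api/v2/metrics/query"

echo "GET ${URL}?metricSelector=${METRIC}&entitySelector=${ENTITY}&from=now-5m"
# Actually issuing it requires an API token:
# curl --get "$URL" \
#   --data-urlencode "metricSelector=$METRIC" \
#   --data-urlencode "entitySelector=$ENTITY" \
#   --data-urlencode "from=now-5m" \
#   -H "Authorization: Api-Token $DT_API_TOKEN"
```

This is exactly the kind of request Keptn issues on your behalf for each indicator, with the evaluation timeframe filled in for from/to.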
The resolution is automatically the time frame of the test, the evaluation time frame, so we don't need to worry about that. Then there is the entity selector: this is where you say which entities you mean, and you're very flexible; you can use the whole capability of the API. In most cases it will be something like: what's the type of metric you want, and from which entity do you want it? And please also make sure you use these placeholders.
And here we go: this is how a basic SLO file looks. I can specify which of my SLIs I am interested in. I have my response_time_p95 and I can specify pass and warning criteria. I can use an absolute value: whenever Keptn retrieves the response time of the service, P95, the 95th percentile, it passes if it is faster than 600 milliseconds and if it's not more than 10% above the value of the reference build.
A
That
I
want
to
compare
it
with.
So
there's
different
comparison
options.
You
can
pay
a
single
run
to
multiple
runs
and
you
can
also
specify
the
aggregation
function,
how
we
actually
calculate
the
value
you
want
to
compare
against.
In
case
you
compare
to
multiple
builds
and
you
can
specify
here
these
allies
and
the
criteria
on
every
single
metric.
Now
by
default,
every
metric,
every
SLI
has
a
weight
of
1,
and
you
can
see
here.
You
can
also
specify
different
weights.
I
could
say,
error
rate
is
so
important.
A
that it gets a weight of 5, so it is five times more important than the other metrics. And why is this important? Because at the end we look at every single metric, and then Keptn comes up with a total score between 0 and 100, and you can specify what your total objective score has to be. And so here is my basic SLO file with five SLIs, and then I have the other one that contains a little bit more. These are all the additional ones: the number of service calls, or the response time of a particular test case.
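To make the scoring idea concrete, here is my own sketch of how a weighted total score over per-SLI results can be computed. It mirrors the concept described above, not Keptn's exact algorithm:

```python
def total_score(results):
    """results: list of (status, weight) pairs, where status is
    'pass', 'warning' or 'fail'.

    Each SLI contributes its full weight on pass, half of it on
    warning, and nothing on fail; the sum is normalized to 0-100.
    """
    points = {"pass": 1.0, "warning": 0.5, "fail": 0.0}
    total_weight = sum(weight for _, weight in results)
    achieved = sum(points[status] * weight for status, weight in results)
    return round(100 * achieved / total_weight)

# Four passing SLIs with weight 1 plus one failing SLI with weight 5:
# the single heavy failure drags the score down to 44.
print(total_score([("pass", 1)] * 4 + [("fail", 5)]))  # → 44
```

With every weight at 1, the same single failure would only cost 20 points, which is exactly why the weight matters.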
A
All right, so this is what we have here. Now we are ready to say: hey Keptn, instead of me manually looking at these metrics in a dashboard, as beautiful as these dashboards may look, I want you to do this for me. And now let's go into the demo. This is again based on the Jenkins tutorial; you can do all of this yourself. In this tutorial you will find my sample Jenkins pipelines. Now, before we look at what runs, let me walk you through this.
A
The first one is just kicking off an evaluation from my Jenkins pipeline. It is a sample Jenkins pipeline that I built, and what you have to specify here: Keptn always has a project and a stage, but the nice thing is that our Jenkins-Keptn integration will automatically make sure, in case no matching project is configured in Keptn right now, that one is automatically set up for the quality-gate use case. That means it creates a project with a single stage, and then it also tells Keptn: hey,
A
my pipeline is interested in a particular service of a particular application. So you give it a logical name, and you see here: evalservice. This is the placeholder that will then be used in the query; in the query it will be replaced with the value defined in the SLI document. Then, additionally, what I put in here is monitoring.
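Conceptually, the placeholder handling is plain string substitution of the evaluation context into the SLI query. A small illustrative sketch (the function and the context values are mine, not Keptn's actual code):

```python
def expand_query(template, context):
    """Replace $-placeholders in an SLI query template with values
    from the evaluation context (project, stage, service, ...)."""
    query = template
    for key, value in context.items():
        query = query.replace(f"${key}", value)
    return query

template = ("metricSelector=builtin:service.response.time:percentile(95)"
            "&entitySelector=type(SERVICE),tag($SERVICE)")
print(expand_query(template, {"PROJECT": "gqproject",
                              "STAGE": "qualitystage",
                              "SERVICE": "evalservice"}))
```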
A
I can say: Keptn, please evaluate a particular time frame, because that is what it is really all about. Keptn should then just do what Jenkins asks it to do, and we should hopefully see some runs and, at the end, the heatmap. All right, let me go over to my Jenkins, and I see questions are coming in; I will make sure that we have enough time for them. So the first thing I need to do, if you are using my Jenkins integration: there is a Jenkins shared library.
A
Let me know if you can still hear me. Okay, that's good, thank you, perfect. So this is my Jenkins, and the first thing you need to do, if you use my library that integrates Jenkins with Keptn, is obviously tell Jenkins how to talk to Keptn. Remember, Keptn has a beautiful UI, but right now there is no project available, and there is a beautiful API, but we need to tell Jenkins how to communicate with it. Right now I have implemented this through global environment variables.
A
As a next step we also want to do this through Jenkins credentials. All right, how do I know the token, the bridge and the endpoint? Remember, this is the information I received here. The API URL, this one here, is the endpoint; the bridge is this URL here. Why do I need the bridge? Because my Jenkins-to-Keptn integration will create direct links to individual runs and pipeline runs. And then the token is this one here. Perfect, and then I save it, and hopefully you didn't take note of it;
A
you know, please don't hack me now while I'm running these things. So now, what do I have? I want to show you this one first: the Keptn quality-gate evaluation. As you can see, I ran things in the past. So what I can do is: assume you have deployed your app, you have executed some tests, and it is monitored by Dynatrace. Remember: evalservice. So let me go back to that application, to that service with this tag, to validate that. In my case the tag is evalservice.
A
So this is the one; there is some constant load on it, so it actually works. Instead of me going in here and doing all the magic of looking at metrics behind the scenes, I can say: Keptn, I want you to use Dynatrace and the basic SLI document that I showed you earlier, and then I just give it a start and an end time, which in this case I pass in directly.

A
Hopefully this works as expected. If I click on retry, there we go: I see that my Jenkins-to-Keptn integration has automatically created the project called gqproject, it has also created a stage called qualitystage, and it has created a so-called service entry called evalservice. And we see that the Keptn quality-gate evaluation was executed. So what do I see here? Basically what Jenkins actually triggered: I see it was Jenkins build number 36, and the job name was the Keptn quality-gates job. That's also nice:
A
whenever you call Keptn, you can also pass additional context information about who the caller is. That's nice. And right now it basically says: start evaluation. It is now retrieving the SLIs; I think this UI refreshes every couple of seconds. Here we go: the SLIs are retrieved, and here is the quality-gate result. We get an overall score of warning. Why? What is warning?
A
We got 80 out of 100, so it's a warning, and we see all the individual values here. Now, why is our response time at the 50th percentile white, and not green, red or yellow? Because you don't have to specify an objective for every metric, and if I go back to my basic SLO file: for the response time at the 50th percentile I didn't specify pass or warning criteria. In this case Keptn will still retrieve that value for me, but it doesn't penalize or grade me on it. Okay, so hopefully that makes sense.

A
It doesn't matter who executes those tests, as long as your load tests have been adapted so that the load testing tool sends the HTTP header to Dynatrace; Dynatrace can then calculate metrics based on the individual test names. Currently I am just showing you the use case where you say: Keptn, evaluate that timeframe, look at these metrics, and then tell me the result. Okay.

A
Well, I think, whoever brings Keptn in: as with any software you install, somebody takes ownership, right? If somebody who installed it and brought it into a project leaves the company, then first of all, hopefully that person has told people that they installed that software. So yes, I would say you manage Keptn like any other piece of software in your organization. Dynatrace dashboards? Yes, if you look at Dynatrace dashboards,
A
you can give them an owner; dashboards, once created, have an owner. So if somebody leaves, the admins can obviously still get access to them. So you should feel good there. All right, the second quality gate went fine. Now here I want to give a shout-out to a colleague from Intuit who gave me a little tip, because earlier I kind of just opened up that bridge URL and clicked through it. But there is something else that is really cool, and I have extended my sample accordingly.
A
So in this case, here is my Keptn quality-gate result, and now, because I am using the HTML report plugin, you can click on this "Keptn results in bridge" link and get directly from Jenkins to the bridge. I think that's pretty cool. So hopefully you liked it, and as you can see here, we now have two consecutive builds with the same result.
A
Then it adds all of your documents, the SLI and the SLO files, and that's it. When it is time to actually do the evaluation, it basically says: Keptn, please start the evaluation for this particular timeframe. And what Keptn really does underneath the hood, as I explained earlier: it takes your individual queries, the placeholders are replaced, and the timeframe is automatically added, whatever timeframe you give it. Then the results are pulled back; so, for instance, only pull back
A
the metrics, in this case from the service that has this particular tag on it. And as this can take a couple of seconds, up to a minute or two, depending on what time frame you are evaluating, there is another capability in my Jenkins library called "wait for evaluation done". So you just wait, and it waits until Keptn says: hey, I'm actually done. This result can then be used to either fail or succeed the pipeline, or, if it is a warning, it goes into the orange range. All right.
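That waiting step is essentially a polling loop. Here is a minimal sketch of the idea in Python; fetch_status stands in for the real Keptn API call, which the shared library wraps for you:

```python
import time

def wait_for_evaluation_done(fetch_status, poll_seconds=5, timeout_seconds=300):
    """Poll until the evaluation result is available, then return it.

    fetch_status() should return None while Keptn is still evaluating,
    and the final result ('pass', 'warning' or 'fail') once it is done.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        result = fetch_status()
        if result is not None:
            return result
        time.sleep(poll_seconds)
    raise TimeoutError("Keptn evaluation did not finish in time")

# Stubbed status function: the evaluation is 'done' on the third poll.
responses = iter([None, None, "warning"])
print(wait_for_evaluation_done(lambda: next(responses), poll_seconds=0))
```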
A
In this case it is my evalservice, the service that was tagged. What I also see at the bottom here: Keptn is not only pulling data from Dynatrace, it is also pushing events to Dynatrace. So every time you run an evaluation, it also pushes this information to Dynatrace. That's pretty sweet too, right? What I see here is the entity, the evaluation timeframe, a link to the Keptn bridge, the number of SLIs being evaluated, the project, the score, and also any additional information that you have passed in from your pipeline.
A
In this case the build number, the Jenkins job name and the job URL. It's completely automated; I didn't have to do anything else. This is because I also installed the bi-directional Dynatrace integration when I set up Keptn earlier. Good, all right. I will skip this one, I believe, because it is just the second project, where I simply run some load from Jenkins against my environment.
A
So this is really just: if you are executing a test as part of your Jenkins pipeline, I am showing here how you would wrap Keptn around the whole thing. That means: take a timestamp before your test runs, one at the end, and then do the evaluation. I am skipping this because I want to show you a third one that I believe is more interesting, but for completeness, here is the way this will work.
A
That means, if you have a Jenkins pipeline that is triggering any type of test (and I think, Dimitri, you asked about this earlier), this will all work no matter who executes the test. But in your pipeline you would still, at the beginning, call the Keptn init step to make sure that Keptn is properly configured.
A
Then, in the part of your pipeline where you run your tests, whether you call JMeter, NeoLoad, or whatever you do with LoadRunner, there is a function in my Jenkins library called "mark evaluation start time", because remember, later we always need a start and a stop time. So there is a function that you can call before you execute the tests. Then you execute your tests, and at the end, when you are done with them, you call "send start evaluation event":
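Conceptually, those two calls just bracket the test run with timestamps, so that Keptn can evaluate exactly the timeframe the tests covered. A rough Python sketch of the pattern (run_tests and start_evaluation are placeholders, not real library functions):

```python
import time

def run_with_quality_gate(run_tests, start_evaluation):
    """Record a start timestamp, run the tests, then hand the exact
    test timeframe to the evaluation call."""
    start = time.time()                  # "mark evaluation start time"
    run_tests()
    end = time.time()
    return start_evaluation(start, end)  # "send start evaluation event"

# Stub demo: the 'evaluation' just echoes the timeframe it was given.
result = run_with_quality_gate(
    run_tests=lambda: time.sleep(0.01),
    start_evaluation=lambda s, e: {"timeframe": (s, e)})
print(result["timeframe"][0] <= result["timeframe"][1])  # → True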
A
Oh please, Keptn, start your evaluation. This function will automatically take the timeframe you marked at the beginning of your test as the start and the current time as the end, and then it will do the same magic as before, including sending the results off to Dynatrace as part of an event. So this is the way you would do it if you already execute tests in your pipeline. Now, the last one; I think this is for people
A
The reason I want to highlight this is that we got a lot of positive feedback on it: it is for when you don't yet have tests being executed in your pipeline. Now, why would you not do it? Well, because it's not that easy, right? If you want to include performance test execution in your pipeline, there are a lot of questions you typically get asked: where do these tests run, what are the different workloads, where do we send the metrics, how do we analyze the results?
A
There's
a
lot
of
things
you
have
to
consider
now,
it's
all
possible
and
all
some
of
you
have
probably
built
automated
performance
testing.
Already
some
of
you
struggle.
Some
of
you
have
found
some
of
the
do-it-yourself
approaches,
so
it
is
obviously
doable,
but
in
case
you
don't
want
to
go
through
building
the
whole
thing
yourself.
You
can
also
use
captain
for
it.
So
how
does
this
work?
If
you
have
a
pipeline
where
you
deploy
an
app
and
then
you
want
tests
to
be
executed,
you
can
actually
let
captain
also
execute
the
tests.
A
Captain
is
an
event-driven
system,
and
one
of
the
integrations
we
have
is
with
testing
tools
as
jmeter
integration
available.
There's
a
new
load,
integration
available
and
I
know
some
people
already
working
on
some
other
integrations
essentially
can
easily
trigger
any
type
of
testing
tool
from
captain,
but
basically
captain
takes
care
of
the
orchestration.
So
how
does
this
work?
And
actually
you
know
what,
while
before
it
will
take
a
couple
of
minutes?
So
let
me
kick
off
that
test
quickly
or
that
pipeline
and
then
go
back
to
the
slides
to
explain
what
it's
running.
A
I
call
this
captain
performance
as
a
service.
So
this
is
my
pipeline
bill,
but
parameters
similar
con
to
this
earlier
project
stage.
We
see
here
now
my
service
is
called
pervasive
service,
again
I'm
using
it
for
taking
I'm
now
also
using
the
different
basic,
where
I'm,
using
all
the
proof
test.
Is
it
the
SLA
document
with
more
metrics
and
the
way
we've
implemented
this
right?
Now
you
can
give
captain
a
chi
meter
script,
but
then
you
can
define
different
workloads
and
I'm
using
performance,
10
and
I'm,
showing
you
the
script
later
on.
A
It
basically
just
executes
a
very
fast
10
user,
a
little
test
and
I
basically
say:
please
run
it
against
this.
This
is
the
endpoint.
This
is
the
app.
This
is
the
URL
that
it
will
be
passed
to
jmeter
running
this
particular
workload.
Configuration
and
it's
monitored
by
dynaTrace
through
these
tags,
so
I'll
do
build
all
right
and
now
captain
does
its
its
thing.
So
what
is
captain
really
doing
behind
the
scenes?
Let
me
go
back
to
the
slides
and
basically
it
triggers
the
testing
tool.
A
The integration I have installed is, in my case, JMeter, and Keptn also gives it the workload configuration and everything else. Then these tools do their job, and after that Keptn automatically knows that the next step after executing a test is evaluating the SLIs against the SLOs, so this happens automatically at the end of the test: it pulls the data, validates it against the SLOs, and then returns the result back to your pipeline. Okay, and so this is just an additional step.
A
You
don't
have
to
do
it,
but
in
case
you
are
looking
for
an
Orchestrator
that
can
orchestrate
a
test
execution
cap
can
do
it
for
you
as
well,
and
that's
what
I
want
to
show
you
here
again.
This
is
the
pipeline
that
I
just
ran.
Don't
need
to
explain
anything
else,
we'll
see
it
running,
it
should
run
about
two
minutes
or
so,
and
then
we
should
also
get
a
heat
map
good
all
right.
So,
let's
see
this
runs
as
I
said
about
two
minutes.
How
much
are
we
into?
We
are
into
it
a
little
bit.
A
Let
me
just
highlight
a
couple
of
more
things
in
the
slides.
I've
also
explained
all
the
details
on
how
my
individual
Jenkins
pipelines
look
like
how
they
work.
Most
importantly,
for
you
to
understand,
is
if
you
are
interested
in
this
particular
use
case
of
performance
testing
with
captain
fruity
meter,
then
you
have
the
ability
to
upload
your
Chi
meter
scripts
to
captain,
but
additionally
upload
a
so-called
jmeter
confi
mo
file,
where
you
can
specify
your
different
workloads.
A
We call them test strategies here, and as you can see, I have specified those in my Git repo, in jmeter.conf.yaml. This is basically the original file, and I think I selected the performance test, "performance_10"; actually, that is 50 virtual users, 10 loops of my script, and a little think time. And then this is the script that is going to be executed. Remember, that script gets called with a lot of parameters, like the URL, and it also receives any Keptn context: the Keptn stage, project and service.
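As a sketch of what such a workload file can look like (the field names here are illustrative and may not match the jmeter-service schema exactly):

```yaml
---
spec_version: "0.1.0"
workloads:
  - teststrategy: performance_10
    script: jmeter/load.jmx
    vuser: 50            # 50 virtual users
    loopcount: 10        # 10 loops of the script
    thinktime: 250       # think time between requests, in ms
    acceptederrorrate: 1.0
  - teststrategy: functional
    script: jmeter/basiccheck.jmx
    vuser: 1
    loopcount: 1
```

The pipeline then only names a test strategy, and the JMeter integration looks up the matching workload.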
A
It
even
receives
your
your
Jenkins
job
information.
If
you
want
to
so
all
the
metadata
kept
in
here,
it
will
also
be
passed
over
to
Tucci
meter
in
case
you
need
it
good
and
let's
see,
as
this
is
running
before
I
opened
up
for
Q&A.
If
you
want
to
try
this
out,
the
integration
is
really
simple
and
straightforward.
Once
you've
kept
me
installed,
captain
can
now
be
installed
within
80
seconds.
A
For
instance,
we
just
got
a
contribution
from
one
of
our
partners.
He
already
built
a
in
Asia
DevOps
extension.
We
also
have
a
kid
lab
extension
from
one
of
our
customers.
So
that
means,
if
you
are,
if
you
run
kid
lab
or
Asia
DevOps,
you
can
already
do
this
without
even
worrying
about
how
to
call
the
Capon
API.
So
that's
all
they're
cool
good.
Let's
have
a
quick
look.
If
the
tests
actually
finished
it
finished
and
we
can
again
see
kept
the
results
in
British,
it
opens
it
up.
A
Let's
click
on
that
link,
Riley
created
a
new
project,
Pervis
a
service
project.
Rather
this
is
new
now
because
my
integration
made
sure
and
it
actually,
if
I
shouldn't,
wouldn't
have
clicked,
it
would
have
stayed
here
it
automatically
jumped
to
my
hitman,
but
now
the
heap
map
has
many
more
entries
because
I'm
used
the
second
at
the
more
extended
version
of
my
a
slice
and
therefore
captain
has
has
analyzed
more
a
slice.
A
You
know
plan
for
that
for
the
library,
as
I
mentioned,
support
for
credentials
instead
of
the
environment
variables,
better
hip-hop
visualization
in
in
Jenkins
itself,
even
though
that
current
integration
with
the
the
plug,
the
HTML
repo
plug-in,
is
already
really
cool,
just
one
click
and
any
care
to
the
heat
map
and
yeah.
What
captain
can
also
do?
A
A
Now it's time for Q&A, and while I'm taking the questions, it would be great if we get some support from you folks out there: star our projects on GitHub, join the Slack channel, look at the tutorials, and give us feedback. All right, let's see. I know it's the top of the hour, so I see some people are dropping off, but if you have questions: thanks for the nice feedback, Dimitri; hopefully I answered your questions, at least the ones that I saw so far.
A
So
now
does
anyone
on
the
observability
stand,
but
should
captain
SLO
always
be
defined
to
be
lower
than
the
company
SLA
or
equal
to
place
in
order
to
add
consistent
data
accuracy
of
data?
So
I
think
this
is
obviously
mm.
You
know
a
discussion
that
you
need
to
have,
but
in
general
I
believe
s.
Loz
should
be
stricter
than
your
essays.
Why?
A
Because
you
want
to
make
sure
that
you
get
an
early
warning
system
in
case
your
SLO
is
already
fail
and
you
should
not
be
your
business
that
is
being
impacted
because
you're
already
on
the
thresholds,
I
think
s
ellos
always
need
to
be
derived
from
your
essays.
If
you
have
an
SLA
of
a
particular
response
time
of
a
particular
capability
of
your
application,
then
I
suggest
at
least
write
that
your
SLO
is
a
little
stricter
because
you
always
want
to
be
better
and
not
worse,
off,
I
think
it's
it's
exactly!
It's
like
a
buffer
yeah!
A
Exactly,
let's
see
what
else
did
I
miss
here.
There's
a
couple
of
questions
here,
this
one.
Is
it
really?
Okay,
that's
the
same
question,
but
it
just
looked
a
little
strange
here.
The
UI.
A
You
it's
this
one
here.
The
pesto
can
I
think
explain
that
all
right.
So
if
there
is
a,
let
me
know
if
there's
more
questions
feel
free
to
put
them
into
the
question
feature,
while
I'm
waiting
for
more
questions
for
those
of
you
that
are
interested
in
and
I
know,
seeing
some
of
these
champions
pipelines
very
straightforward
right.
A
These
are
just
Jenkins
pipelines
that
are
that
I've
put
in
here
and
in
order
to
include
the
captain
integration,
it's
just
a
shared
library
and
then
I
just
called
my
equipment
in
it
with
project
name,
service,
name
and
so
on
and
so
forth.
Then
your
your
your
files,
the
supporting
files
for
Iran,
have
to
be
added
to
kept
them,
so
you
send
it
to
captain
kept
next
leaders
version
control
on
itself.
Here's
it
get
in
there,
but
you
see
here:
I,
upload,
the
dynaTrace,
config
and
sli,
and
so
on
and
so
forth.
A
Then
I
trigger
my
tests.
How
do
I?
How
do
I
tell
captain
that
they
should
execute
tests?
There's
a
special
event
called
st.
deployment
finished
event.
So
I
basically
that
deer
here
is.
You
have
a
tool,
let's
say
jenkins,
that
has
deployed
an
app
and
you
know
the
endpoint.
So
now
you
say:
hey
captain.
I
want
to
make
you
aware
of
my
deployment
that
I've
finished
it.
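As an illustrative sketch, the deployment-finished notification essentially carries the deployment URL and the desired test strategy. The field names below are assumptions based on the description here, not the exact Keptn event schema:

```python
import json

# Hypothetical payload; the real event has more envelope fields
# (Keptn context, source, time, ...).
event = {
    "type": "sh.keptn.events.deployment-finished",
    "data": {
        "project": "perfserviceproject",
        "stage": "qualitystage",
        "service": "perfservice",
        "deploymentURI": "http://myapp.example.com",  # endpoint the tests hit
        "teststrategy": "performance_10",             # workload to execute
    },
}
print(json.dumps(event, indent=2))
```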
A
Here's
the
deployment
URL,
that's
the
endpoint
and
I
want
you
to
then
execute
tests
as
well
before
you
do
the
evaluation
of
the
quality
gates
and
you
can
also
pass
the
test
strategy
and
the
test
strategy
is
exactly
remember.
My
workload
configurations,
they're
Qi,
media
integration
is
to
keep
abilities
and
take
this
test
strategy,
and
then
it
take
this
particular
workload
and
then
execute
it
and
that's
it
and
then,
at
the
end,
I
just
say:
captain
wait
for
evaluation
done,
because
captain
will
execute
the
tests.
Will
then
wait
for
a
test
to
be
executed.
A
A
will then do the evaluation, and so at the end you want to know: is everything good, yes or no? So yeah, that's what it is. And something else should also happen: if I go back to Dynatrace and now go to perfservice, here is the application I ran my tests against. Here's the nice thing: I now also have more events from Keptn. It actually sent the deployment information, because we told Keptn there was a deployment, so it even tells me who notified Keptn about the deployment.

A
Right, so I think that's it from my end; there are no more questions that I can see, and I am, as so often, over time. The whole thing was recorded, which means you will get the recording on YouTube and on Dynatrace University, and watch out for more of these performance clinics. Okay, so thank you, have a good day, stay safe and healthy, and talk to you soon. Bye-bye.