Description
Want to level-up your Jenkins Pipelines with Keptn? In this webinar we show you how to "Automate Build Approvals through SLI/SLO based Quality Gates" and how to "Enable a Performance-Driven Culture through Performance as a Self-Service".
https://github.com/keptn-sandbox/jenkins-tutorial
Learn more: https://keptn.sh
Get started with tutorials: https://tutorials.keptn.sh
Hey, welcome everyone to a Keptn community webinar. I think it's the first time we've given it that name — we always called them community meetings. As you've probably heard if you follow us, we're splitting them up: there are the developer-focused meetings that happen on a regular basis, every week on Thursdays, which are really the engineering team talking to developers — they show a lot of the stuff they are planning and working on in individual sprints. And then we also have these Keptn community webinars, where we focus on end-user use cases. We had people in the past who talked about integrating Keptn with GitLab, and we had a similar session last time as well. Today, I want to talk about leveling up your Jenkins-based delivery with Keptn, so I have invested the last couple of weeks in figuring this out.
We can save a lot of time. Now, whether you agree with me on the 80% or not — that's obviously debatable — but from what I've seen, there's a lot of time wasted in manual build approvals. The second thing is around enabling a performance-driven culture. This is the whole idea of performance as a self-service, or continuous performance as a self-service — there are different terms for it.
But the idea is: how can we give developers and engineers an easy, automated, self-service way to get performance feedback on a new artifact they have built? The third use case is about automating monitoring configuration as part of the delivery. I think we all agree that, as monitoring becomes more prominent these days, and as we are delivering more artifacts through our pipelines, these two systems need to be connected much better. There are ways the delivery pipeline can tell the monitoring tool when a deployment happened, who deployed it, and which features are involved — and then the monitoring tool can also tell the pipeline: "Hey, this is a bad deployment, so you may want to stop that build, because it's not good at all; the metrics are bad." The fourth thing is something I think most of us will experience soon.
Those of us who have invested in pipelines over the last couple of years probably started with, you know, one team building pipelines; the pipelines grew, then they were copy-pasted from one team to another, to another application. What we have seen, both internally and also externally with the customers we work with, is that a lot of people have monolithic pipelines that do pretty much everything. There are a lot of hard-coded things in how they integrate with tools, it's copy-pasted all over the place, it's very hard to maintain, and it's also very hard to build something new into these pipelines. So this is where we talk about breaking up the monolithic pipelines and using a choreography engine — a delivery choreography engine, in this case Keptn — that can then choreograph the individual pieces that Jenkins can still execute. Overall, we break up the monolith, make it more event-driven, and connect it through the choreography engine.
So what I want to focus on today — because I only have an hour, so I cannot cover everything — is the first two use cases in detail, and then a little bit of the third one; the last one we will do in another webinar. All right, so let's start with the first one. First use case: automate build approvals through SLI- and SLO-based quality gates.
What's the challenge? As I highlighted earlier, if you look at the Jenkins pipeline between testing and maybe the next stage, which is staging, a lot of time is wasted — or at least spent — in manual approvals. Why? Because whoever needs to make the decision needs to look at the test trends. And, you know: is this a problem because we now have high failure rates? Yes or no? I don't know. Well then, let's have a look at the performance results.

I know we have great plugins that pull, let's say, JMeter results into Jenkins, but is one build really better than the other? It's really hard to tell, so I need to dig deeper. Fortunately, a lot of people are now also using APM products and monitoring tools, whether it's Prometheus or Dynatrace. But then we often see that there's a lot of data, it's unstructured, and it's also hard to know which data actually comes from the test in my pipeline run versus maybe somebody else who is just doing some random testing. So this is why a lot of manual time is needed to really figure out: is this a good build or a bad build?
The solution to this problem is the way Keptn implements it, and the way we also have a very tight integration with testing tools, where we feed context data to the underlying monitoring tool. The way your pipelines will change, if you follow our best practices, is this: when you run the tests from your pipeline — whether you're triggering a JMeter test, a Gatling or NeoLoad test, or a Selenium test — the first goal is that you actually add a tag to these requests, so that the monitoring tool underneath, whether it's Prometheus, Dynatrace, or another APM tool, understands that this data now comes in from this particular test, from this particular test transaction.
Then, instead of having to look at dashboards and dig through data, we follow the approach of SLI- and SLO-based validations, where you specify your SLIs and your SLOs. Keptn reaches out to the underlying monitoring tools, pulls out this information for the timeframe of the test, validates it, and then gives you the result: how am I doing with my key metrics? Are they good or bad? Have they changed from one build to the other? Then it reports that result back to the pipeline, and this can be fully automated. That's why I'm a strong believer that we can make these pipelines much more efficient. All right — a quick primer, a quick intro, into SLIs and SLOs, in case you're not familiar with them.
SLIs, also called service level indicators, and SLOs, service level objectives: we didn't come up with them. Google has been spearheading all of this around their SRE practice, their Site Reliability Engineering best practices. But this is the way we've implemented it in Keptn — because I know there are other tools out there as well, and a lot of people are trying to implement it themselves.
We've also implemented it in Keptn as part of our quality gate capability, where Keptn can dig into different data sources to pull data out. Essentially, you define your SLIs — your service level indicators — in a YAML file that you give Keptn. You specify which indicators, which metrics, are important for you, and then, depending on the tool — for instance, here with Dynatrace — we have a Dynatrace query that can be used and is accepted by the metrics API that Dynatrace provides; for Prometheus it looks similar.
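A minimal SLI file for the Dynatrace provider might look like the following sketch. The query syntax here is from my reading of the dynatrace-sli-service docs of that era (newer versions use a `metricSelector`/`entitySelector` form), and the `evalservice` tag and indicator names are illustrative:

```yaml
# sli.yaml -- hypothetical SLI definitions for the dynatrace-sli-service
---
spec_version: '1.0'
indicators:
  # throughput: total request count of all services tagged "evalservice"
  throughput: "builtin:service.requestCount.total:merge(0):count?scope=tag(evalservice)"
  # average failure count for the same services
  error_rate: "builtin:service.errors.total.count:merge(0):avg?scope=tag(evalservice)"
  # median response time
  response_time_p50: "builtin:service.response.time:merge(0):percentile(50)?scope=tag(evalservice)"
```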
It would also look similar for NeoLoad — they have their own query language — but the point is: you say which metrics are important, and then you specify your SLOs. What are your objectives if you analyze a particular timeframe of a test? Looking at the individual metrics, your indicators: what is a pass criterion and what is a warning criterion? Here we allow you to combine static and dynamic thresholds, meaning you can actually compare with previous builds, or we can enforce static thresholds.
How does this work when you enforce a Keptn quality gate? You say: "Keptn, please evaluate, let's say, 30 minutes for a particular service that I want you to manage; here are my SLIs, here are my SLOs." Then Keptn goes off and reaches out to the data source — could be Dynatrace, Prometheus, NeoLoad, or any other tool. These tools return the values, and then Keptn scores these SLIs: for every metric we basically validate, are we in the passing range, in the warning range, or do we violate our objective for that SLI?
We do this for every metric, and then overall, when we look at all the metrics and do the validation for each one, we know: hey, seven out of eight are good. That means we achieved 87.5 percent, and that's great, because we can also specify a total score acceptance rate. Or, in the case of a bad build: if only four out of eight are good, then only 50 percent of the possible points are achieved, so that might be a bad build.

To make it a little more visual and graphic — if you like tables, I believe they're a great explainer — SLIs are really about which indicators, which metrics, are important for you. This could be the response time of a service; it could be its failure rate; it could be the response time of a particular transaction, or the number of database calls for a particular transaction.
It could also be memory usage, it could be network traffic — it could be anything from any layer of your stack. Then you specify your SLOs. This is where you can define your static and dynamic thresholds — static means statically enforced, dynamic means comparing with previous builds — and then you specify an overall goal. What do I consider a good build? If I achieve 90 percent success, it's good; if I'm between 75 and 90, it's a warning; everything below always fails.
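An SLO file mirroring those thresholds could look roughly like this. This is a sketch from my reading of the Keptn SLO format of that time — spec version, comparison settings, and criteria values are illustrative, so check them against the Keptn docs:

```yaml
# slo.yaml -- hypothetical SLO file with static and dynamic thresholds
---
spec_version: '0.1.0'
comparison:
  compare_with: "single_result"        # dynamic thresholds compare against a previous run
  include_result_with_score: "pass"
  number_of_comparison_results: 1
  aggregate_function: avg
objectives:
  - sli: response_time_p50
    pass:
      - criteria:
          - "<=800"                    # static: at most 800 ms
      - criteria:
          - "<=+10%"                   # dynamic: at most 10% slower than the compared run
    warning:
      - criteria:
          - "<=1000"
    weight: 1
total_score:
  pass: "90%"                          # >= 90% of points: good build
  warning: "75%"                       # 75-90%: warning; everything below fails
```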
So now, the way you can use Keptn: you say, "Keptn, do an evaluation for a particular project, for a particular timeframe." Let's say I'm doing this for build one. The metrics come in, they're all green, that's great — I get a hundred percent rating. Build two comes along; you do another run, and two of the metrics are in the warning state, which means overall I get 75 percent — still warrants a warning. Build number three comes along and has a bad violation on the database calls, because it just increased from three to six database calls for the login transaction. That's no good! So now we are down to 62.5 percent, which means we let the build fail. Build four comes along, everything is green again, and this is good. So this is how, from a high-level perspective, SLIs and SLOs work in Keptn.
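The scoring walkthrough above (a four-SLI example, where a warning earns half points) can be sketched like this. This is my own simplification for illustration, not Keptn's actual implementation:

```python
def score_build(sli_results):
    """Score a build: each SLI earns 1 point for pass, 0.5 for warning, 0 for fail.

    sli_results: list of "pass" / "warning" / "fail" strings, one per SLI.
    Returns the achieved percentage of possible points.
    """
    points = {"pass": 1.0, "warning": 0.5, "fail": 0.0}
    achieved = sum(points[r] for r in sli_results)
    return 100.0 * achieved / len(sli_results)


def verdict(score, pass_threshold=90.0, warning_threshold=75.0):
    """Map a total score to the overall quality-gate result."""
    if score >= pass_threshold:
        return "pass"
    if score >= warning_threshold:
        return "warning"
    return "fail"


# build 1: all SLIs pass -> 100%
print(score_build(["pass", "pass", "pass", "pass"]))
# build 2: two passes, two warnings -> 75% -> overall "warning"
print(score_build(["pass", "pass", "warning", "warning"]))
# build 3: two passes, one warning, one fail -> 62.5% -> overall "fail"
s = score_build(["pass", "pass", "warning", "fail"])
print(s, verdict(s))
```

With this model the "seven out of eight are good" case from earlier also works out to 87.5 percent.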
Not only does it look nice in this kind of Excel-style table — we also have a UI built into Keptn. We have a heat map, which you will see later in my demo, that basically shows the same table I just showed you: the heat map visualizes every single SLI result for every single build. These are the columns, and on top is the score. That is the aggregate for an individual build, which immediately tells you: green build, yellow build, or red build. And if it's yellow or red, you can look down to see which metrics actually failed. On the bottom is also a screenshot of the chart visualization: you can chart the individual metrics, the actual values, so you can also see trends over time.
All right. So today I want to focus on Keptn with Jenkins and Dynatrace as the monitoring tool. If you're using Prometheus or other tools, there are also great tutorials on tutorials.keptn.sh for Prometheus. If you have other monitoring tools, then I'm sure you can — you know, you're invited to — write an SLI provider for that monitoring tool. But today I want to focus on Dynatrace, give you some insights on how you would actually set up Dynatrace so that you get the most out of this integration we've built, and then integrate it with Jenkins. So, what I've done: I've installed Keptn. In order to install Keptn, you either need Kubernetes,
or you need a Linux machine — preferably Ubuntu Linux — because we have a pretty cool script from one of our colleagues, Sergio, who completely automated the installation on MicroK8s. What you also need is a Dynatrace tenant, and from there an API token and a PaaS token. Now, this can be a Dynatrace tenant that is already monitoring an existing environment.
When we talk about Keptn, we often talk about Keptn doing the deployment, and then you only monitor the stuff that Keptn deploys. Today, I also want to focus on the fact that you can use Keptn, the quality gates, and all the stuff I'll show you, by looking at data from environments that are completely outside of what Keptn has deployed. So you can really deploy Keptn as an additional service and then get your SLIs and SLOs evaluated from any type of environment that you monitor with Dynatrace — as you can see from all the logos on the bottom, it doesn't matter; it will work with anything that you monitor.
So how does this work? By the way, there are also great tutorials out there for this. You install Keptn — either the full version or the quality-gates-only version. Keptn will then install its components into your Kubernetes cluster or MicroK8s deployment, notably for evaluation and testing, and it also has Git inside. Git is part of Keptn, because every configuration element we give Keptn later on will be stored in its internal Keptn Git repository.
Then, in order to enable Dynatrace support, you first need to create a secret for Dynatrace with the token and the endpoint, so that Keptn can access it later on. Then you install the so-called dynatrace-service, and you also install the so-called dynatrace-sli-service — it's two kubectl apply commands that you run, and it's all documented pretty well, so it's very easy and straightforward. Then you say `keptn configure monitoring dynatrace` — that's a Keptn CLI command, which will automatically do a lot of things.
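Those setup steps boil down to a handful of commands, roughly like the following. This is a sketch from my reading of the docs at the time — secret key names, manifest files, and the tenant URL are placeholders, so check them against the current dynatrace-service documentation:

```
# Secret with the Dynatrace tenant and API token (values are placeholders)
kubectl -n keptn create secret generic dynatrace \
  --from-literal="DT_TENANT=abc12345.live.dynatrace.com" \
  --from-literal="DT_API_TOKEN=<your-api-token>"

# Install the two Dynatrace integration services (exact manifest URLs/versions vary)
kubectl apply -f dynatrace-service.yaml
kubectl apply -f dynatrace-sli-service.yaml

# Tell Keptn to use Dynatrace as the monitoring tool
keptn configure monitoring dynatrace
```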
It will enable, first of all, the integration between Dynatrace and Keptn; it will configure tagging rules; it will create a dashboard for you. It does a lot of cool things — it basically configures your monitoring tool fully automatically. If you want to install all of this on Ubuntu, as I said, check out Keptn in a Box; this is the script that Sergio wrote and showed off last week.
All right, one last thing — that was just the installation. Normally, if you use Keptn like this and you want to interact with it — you want to work with Keptn, set up quality gates and events, and do evaluations — you would need to create a Keptn project, for instance a "perf" project. What this actually does: it creates a repo within Keptn's internal Git repository, and it also creates branches for the individual stages.
You would then need to create a service, which again provides the structure in Git, and then you would need to add resources — you basically upload files that Keptn, and all the tools it's talking to, can access. Now, the good thing is, I'll show you today how all of this can be fully automated without you having to do anything manually, because I wrote a Jenkins shared library that does all of this for you and manages your projects; the only thing you have to do is point it at your files and run with it.
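For reference, the manual steps the shared library automates correspond to Keptn CLI commands roughly like these (project, stage, and service names are examples; flags may differ between Keptn versions):

```
# Create a Keptn project (backed by a shipyard file listing the stages)
keptn create project perfproject --shipyard=shipyard.yaml

# Create a service inside that project
keptn create service evalservice --project=perfproject

# Upload the SLI and SLO files as resources for that stage/service
keptn add-resource --project=perfproject --stage=performance \
  --service=evalservice --resource=slo.yaml --resourceUri=slo.yaml
```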
All right, that was the first thing: installing Keptn. The second thing — and this is very important with any type of monitoring tool, not only true for Dynatrace — is that you need to think about tagging. Later on, when we let Keptn query our monitoring tool for certain parts of our infrastructure — a metric from a certain host, a process, a service, or an app —
we need to tell Dynatrace, in this case, which particular service we mean, and the way this works is through tags. The reason I highlight this is that tags are very, very important, and the way Keptn talks to Dynatrace is also through tag-based filtering. For instance, it could be a very simple tag: you just put a tag on your service in the UI that says "evalservice" — "eval" for evaluation. Then I can later say: I want to query data from services that have this tag on them.
If you deploy an app into Kubernetes with Keptn, it sets four tags: project, service, stage, and deployment. But what I'm showing you today will obviously also work if you don't use Keptn for deployment — if you just manually tag your services, you can then pull in the data.
Okay, what's the next thing we need to do? Remember, one of the problems I highlighted is that when we execute tests right now and we look at the APM data, there's often no connection between them — I don't know which data comes from my testing tool.
So in Dynatrace, we have the ability to let the testing tool tag its requests. What you see on the left is my JMeter script that I'm going to use later on, and I have different test steps: homepage, version, echo, and invoke. What we in Dynatrace allow your test tools to do is to add an additional HTTP header on every request, giving it name-value pairs. TSN would stand for "test step name"; there can be multiple pairs, so you can also add a script name.
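As a small illustration, a single tagged request from a shell-based load script could look like this. The `x-dynatrace-test` header and the TSN/LSN keys are the convention described here; the URL and the values are placeholders:

```shell
# Build the x-dynatrace-test header carrying test context as name=value pairs
TSN="homepage"               # test step name
LSN="simple-load-test"       # load script name
HEADER="x-dynatrace-test: TSN=${TSN};LSN=${LSN}"

echo "would run: curl -s -H \"${HEADER}\" http://my-service/homepage"
# Uncomment against a live endpoint:
# curl -s -H "${HEADER}" "http://my-service/homepage"
```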
You can add a virtual user ID — you can do whatever you want. But the idea is: once you add context to your requests, you can use a really cool new feature in Dynatrace, new as of about November, called calculated service metrics. You can now instruct Dynatrace to not only give you response time, failure rate, and maybe throughput of the service you're testing, but to actually split them by your test step name.
Here's one example — and I'll come to more examples later — but we can now create metrics like request time by test name, number of database calls by test step, number of service calls, CPU time, wait time. We can do a lot of things and get metrics that are specific to individual tests; we get more granular. Why do we want to do this?
Well, because once you start doing this, you can first of all build your dashboards. I think we still want to start with dashboards to get an idea, a feeling, for which metrics are important — what would I normally look at during a test? In my case here, I can create a dashboard, and at the top I have my tags. This is why tagging is important, right: you want to build a dashboard, and maybe you want to look at a particular app or a particular service, and tags
allow you to filter — it's the same filtering mechanism we use later on from the Keptn side. What I can now put on these dashboards are especially these calculated metrics, and I can split them up, again, by my test steps: my invoke, my echo, my version, and my homepage. These are my four test steps, and I now have a dashboard I can look at. It makes much more sense when I analyze it — but remember what I said earlier: dashboards are great, but they're still a manual step.
So now, if we have a dashboard that contains data that is relevant for you — that tells you whether this is a good build or a bad build — we can take this data and convert it into our SLI files. Remember, my SLI file is really a list of metrics, and in the Dynatrace context we specify a metric query against the metrics API. This metric query specifies what type of metric you want, and I add an entity selector.
These are the tags. And if I have dimensional metrics — metrics that are split by, let's say, test name — I can also say: I only want this metric for the test step invoke, or for version, for echo, for homepage. So you can really define individual metrics for individual test steps — really cool — and then filter it on,
let's say, your dev environment, your staging environment, your production environment, your frontend or your backend component — and you can mix in infrastructure metrics as well. Once you have basically written down your metrics, you can also specify your SLOs, your objectives. Maybe you have an objective of 100 milliseconds response time; then you can put that in — even if you don't yet really know what the best objectives are.
That's still fine — you'll see the results in a heatmap. And obviously all of this can be retrieved via the API, which is why we can integrate it into Jenkins and then let the pipeline fail. At the bottom you also see the visualization again, like I had earlier, with the metrics over the timeframe. So this is what we can achieve. All right — my first demo, because I obviously want to show you how this works in real life.
I have baked this into a Jenkins pipeline. By the way, what I actually built is a Jenkins tutorial with currently three use cases (and another three use cases to come), and I want to show you these three live today. The first one is a very simple one, where I built a Jenkins pipeline that just triggers an evaluation. I assume I have some system with some load on it, and I can say: Keptn, please
take a particular timeframe, pull out the metrics, and tell me what the result is. The way this will look: I call my pipeline, it runs, and then it gives me the result — I will just kick it off quickly. All right, so let's do this. Okay — by the way, all of this is from this tutorial, so if you want to do this yourself, just check out the tutorial; I have all the descriptions and the Jenkins pipelines in there as well.
So the first one is called "keptn quality gates evaluation", and I say "build with parameters". Remember, Keptn has the notion of a project, stage, and service, so I still need to provide these three values, but my pipeline then automatically creates a Keptn project in case it doesn't exist, automatically creates it with that stage in case it doesn't exist, and automatically creates the service. Then it automatically configures my monitoring tool and automatically uploads a set of SLIs, to make it easier.
I just basically specify those, then I can give it a timeframe, and I can say how long to wait for the result, because it takes just a second or two. Now, what's important here is "evalservice": my SLIs that are specified here actually reference that name. So it's calling Dynatrace and saying: give me the metrics that are specified in here, from services in Dynatrace that are tagged with "evalservice". So let me make sure I kick this off now. Let me go to Dynatrace — here's my dashboard
that I showed you earlier, and remember the tagging — this is really cool. Now I can say: show me the services that are tagged with "evalservice". Here we go — I have one application. It can run anywhere; I don't care where it runs. The most important thing in my case is that I have an evaluation service with the "evalservice" tag on it. Okay, so now, what does my Keptn — my Jenkins — pipeline actually do? Actually, let me go back to the slide, because I believe I have it highlighted.
What I've built is a "keptn init" stage where I'm really just using my Keptn Jenkins library. It's an init call, and I give it the project name, the stage name, the service name, and the monitoring tool. What it does is simply make a call to Keptn to create the project with that project name, and it also automatically creates a shipyard file with that individual stage in it.
The cool thing here: if you have your own Jenkins pipelines, and you have your source code repo with the source code of your app and maybe your tests, then the only thing you have to add is an SLI and an SLO file. Every time your pipeline runs, you just upload them to Keptn, and then Keptn has them internally — it doesn't need to access your Git — and that's it.
Once Keptn has everything, my next stage makes a single call to my Keptn Jenkins library; it's called sendStartEvaluationEvent. You can pass in either a start and an end timestamp, or numbers in minutes, where I basically say "evaluate the last 600 seconds" and so on — it calculates the timestamps automatically. Basically, what happens now is that Keptn will take every individual SLI that I've specified and uploaded.
It will then look at the SLIs and transform them — because, in order not to hard-code them for a particular app, and so that you can reuse your SLIs, you can also specify placeholders. For instance, I assume request count is something you want to have all the time. The only thing I'm doing here is saying: I want the request count — that's kind of the throughput metric — for a service that has a tag on it
that is exactly the same name as my Keptn project. That is why this gets translated into this particular query, and the timestamps — the from and the to — are applied. Then this query is executed against the Dynatrace metrics API — not only for one SLI, but for every single SLI that I have in there. Then, in my Jenkins pipeline, I can also say "wait for completion"; this is another function in my Keptn Jenkins library.
It will basically wait until Keptn is truly done with the evaluation, because that can take about a minute or so. Once the results come back, it reports them back, and you can also say whether or not you want to let the pipeline fail. And because I have the Dynatrace integration turned on, Keptn will also send the result back to Dynatrace. So, let me show you all of this.
Let me first — because I completely forgot about this — where is my... there we go. The first thing I want to show you are these SLIs. Remember, this is the basic SLI file — with my example, you can just run it and it comes with these. So here is my basic SLI file, and these are the metrics, these SLI definitions. So this is this one here. Basically, what happened behind the scenes: Keptn was just executing all of these queries, but it was replacing the placeholders and adding the timestamps at the end. That's it.
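That placeholder substitution is simple enough to sketch. This is my own illustration of the idea, not the actual dynatrace-sli-service code, and the query template and tag names are examples:

```python
def resolve_sli_query(template, context):
    """Replace $PROJECT/$STAGE/$SERVICE-style placeholders in an SLI query
    template with values from the current Keptn evaluation context."""
    query = template
    for key, value in context.items():
        query = query.replace("$" + key, value)
    return query


template = ("builtin:service.requestCount.total:merge(0):sum"
            "?scope=tag(keptn_project:$PROJECT),tag(keptn_stage:$STAGE)")
context = {"PROJECT": "perfproject", "STAGE": "performance"}

# The resolved query is what actually gets sent to the metrics API,
# with the from/to timestamps of the test appended as well.
print(resolve_sli_query(template, context))
```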
So if I go back to this one here — exactly — and if I zoom in again a little bit, what I see at the bottom is an automatic event that was sent from Keptn to Dynatrace, because I have the Dynatrace integration turned on: hey, there was a quality gate evaluated, and it failed with 60 out of 100 potential points. Plus, I get a whole lot of information — for instance, I get a link back to the Keptn's Bridge. If I click it, I get to the Keptn's Bridge with a deep link — this is also possible — and I land directly on the quality gate overview. Here I see — it seems my build is always failing, but that's okay — the automatic evaluation.
We also support labels now, so every time I call Keptn from Jenkins, I'm passing things like the build ID, the job name, and the job URL, and it's all interconnected — that's pretty cool. You can click on all of these previous builds here and see all the actual values; you can also look at the charts and chart them over time. Okay, that's pretty sweet. Now, let me quickly go back to the slides. So this is
the first pipeline — a standalone pipeline that just kicks off your SLI evaluation. I think you will probably use it primarily when you start building your SLIs, just to validate: is my SLI correct? Will I get back real results? And you need to run it against a system that has at least some load, so that metrics come back.
Okay, now the second example. I have a second pipeline where I built into my Jenkins pipeline what I would call a poor man's load testing tool — it's just a loop, like a while loop, that executes a couple of curl commands. It allows you to call this pipeline and give it a URL, and this curl command also adds the HTTP headers I told you about earlier, so on the Dynatrace side
I can actually differentiate between the individual test steps, and so on and so forth. When I run it, it will look like this, and then I'll get the results. So let me run this one — there we are — this one: not "keptn quality gates", but the "simple load test with keptn quality gates". So again, this is where a simple test runs. Let me run this: here I basically say, hey, a different project — "test with quality gates" — and I give it a stage.
I give it a URL, so my simple load testing tool will just execute a couple of different URLs — /echo, /version, and so on — and I can also give it comma-, semicolon-, or colon-separated name-value pairs with the URL and then the step name. So "version" will actually later show up in Dynatrace, and so on and so forth. I can set the load test time — it should run for three minutes.
I have a think time in here, and then at the end it waits — it's three minutes until Keptn returns the result. So let's do this, let's hit build. Right, so now it runs. While we're waiting, you can see here what the Keptn Jenkins library also does: it allows you to archive artifacts. I'm automatically archiving the SLIs and SLOs, and I also have the Keptn context and the result.
The raw results that come back from an evaluation are also stored, because we have heard requests from people who want to have these results here in Jenkins — so that's all possible as well. So now, what's happening: it's kicking off my test, and after the test it enforces the quality gate. It's really as simple as that, and I actually have an animation that shows this. So what is happening now? I have a pipeline.
If you have a Jenkins pipeline where you already run tests, then I encourage you: add a "keptn init" in front of it. So basically, initialize the library, give it a project name, and it does all the initialization, just as I explained earlier. Now, the new thing here: in order to figure out how long the load test takes — the exact timeframe we later want to use for the evaluation — I've implemented a method in my Jenkins library called markEvaluationStartTime.
It really just takes the timestamp at this point, and then you trigger your tests, whatever the tests are doing — in my case, it's just a curl script doing stuff. Once the test is done, you just call sendStartEvaluationEvent, just as a little earlier. So basically you say: please start my evaluation.
You don't have to give it any parameters, because it will take the start timestamp from that method call, and if the end timestamp is omitted or not passed, it will just assume, by default, that it is now — the current timestamp. Then it does the transformation again; in this case it goes to a service that is tagged with "testservice" and pulls in the metrics. Then you can wait for "evaluation done", so that the Jenkins pipeline waits until the results are here. Once the results are here, I can let the pipeline fail, and I again get the event in Dynatrace — so that's pretty sweet.
A couple of troubleshooting things I have noticed — the very common problems I've run into. If you get the first error message, "no evaluation performed by lighthouse, because no SLI provider configured for project", this means you have probably not installed
A
the Dynatrace SLI provider, or you have not called "keptn configure monitoring dynatrace" for that project. So make sure that you have installed not only the dynatrace-service but also the dynatrace-sli-service. It's two services right now; we're trying to automate that so you just have a single install, but this is what it is.
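A sketch of the corresponding setup commands. The apply URLs are placeholders (the real manifests depend on the installed service versions); the configure command is the one the error message above points to:

```shell
# Both services must be installed (two installs at the time of this webinar):
#   - dynatrace-service      (deployment events, problem handling)
#   - dynatrace-sli-service  (pulls SLI metrics for the quality gate)
kubectl apply -f https://raw.githubusercontent.com/keptn-contrib/dynatrace-service/<version>/deploy/service.yaml
kubectl apply -f https://raw.githubusercontent.com/keptn-contrib/dynatrace-sli-service/<version>/deploy/service.yaml

# Tell Keptn which SLI provider to use for the project:
keptn configure monitoring dynatrace --project=myproject
```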
My Jenkins pipeline automatically takes care of the second one, because it's automatically configuring Dynatrace monitoring. The second error: Dynatrace metrics return zero results, even though I'm expecting something.
A
So this means you either have no data, or you have a wrong label, you have not tagged it correctly. Basically, you have specified a query that returns no data, because it was an invalid query, or you just picked a timeframe where there is no data. But that's what it is.
The last one: the Dynatrace API returned an error status code, for instance because the query involves several thousand metric lines.
A
This is where you also made a mistake in the query: either you have not specified the tags correctly, so too many services come back, or you have basically said "I want to have all the data of the timeframe". But what we are expecting is to boil it down to a single metric, right? When we evaluate a timeframe, we want to get a single value for that timeframe, and this is why you want to leverage the merge options.
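To get a single value per timeframe, the query has to merge away the entity dimension, for example in an SLI file like the following. This is a sketch: the exact selector syntax differs between dynatrace-sli-service versions, and the indicator name, metric key, and tag values are placeholders:

```yaml
# dynatrace/sli.yaml (sketch)
spec_version: '1.0'
indicators:
  # merge(...) collapses the service dimension so one value comes back
  # for the whole timeframe instead of one series per service entity
  response_time_p95: "metricSelector=builtin:service.response.time:merge(0):percentile(95)&entitySelector=type(SERVICE),tag(keptn_project:myproject)"
```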
A
So there's a lot of great documentation out there on the Metrics API and how this works, and also some tips and tricks on calculated service metrics, because I believe this is the power that Dynatrace brings to the table: not only do we give you response time, failure rate and stuff like this, but we can give you metrics split by, let's say, the test name that you've seen earlier, right? This is great. You can watch my YouTube video that is called "Dynatrace calculated service metrics and Metrics API",
A
where I talk about it. There are some best practices: give the metrics good names. And when you define such a metric, it is always calculated for a particular filter condition. You cannot say "calculate these metrics for everything in my Dynatrace environment", because maybe you have thousands of hosts and tens of thousands of microservices. So that's why you have to specify a condition. My best practice: use a condition, so start small and then expand.
A
If you use Keptn for continuous delivery, remember: Keptn always tags services with keptn_project, keptn_service, keptn_stage and keptn_deployment. So if you want to do this as well, you could for instance say: create this metric for everything that has the keptn_project tag on it. Or come up with your own tag and say: every service that has a tag on it that is called "for-evaluation" should be the one. So this is a great capability. Now, the other thing, tips and tricks:
A
it's always good, when you create a metric, to test the metric out before you give it to Keptn. So put it on a dashboard and see if the values actually come in; use the API Explorer in Dynatrace and see if the Metrics API really returns exactly the values that you're expecting, okay, and then put it over. Good. I also want to highlight one more thing before I go into the next use case.
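Testing a metric outside Keptn can be as simple as calling the Metrics API directly (the same query the API Explorer issues). Tenant host, token variable, and the calculated-metric key below are placeholders:

```shell
# Query the calculated service metric for the last two hours
# and check by eye that the values match what the dashboard shows
curl -s -H "Authorization: Api-Token $DT_API_TOKEN" \
  "https://$DT_TENANT/api/v2/metrics/query?metricSelector=calc:service.teststepresponsetime:merge(0):avg&from=now-2h"
```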
So I'm using the dynatrace-service and the dynatrace-sli-service, but I actually have a new version.
A
I developed some enhancements that were requested by some of you, the users out there. It is already available in these two feature branches, and it will soon be merged into master, so it will be available for everybody anyway. But what these things do is they allow custom tagging rules, because by default the dynatrace-service only pushes events to services that have exactly the four tags on it, the keptn_project, keptn_service and so on. But you can see here on the bottom left:
A
it now allows you to have multiple Dynatrace tenant configurations per Keptn project. So if you have a Keptn project with maybe staging and production, and you have two Dynatrace environments, you can actually specify multiple API tokens for that. So that's also possible, and for the SLI service as well.
A
So let me double-check quickly. This test finished, and what I want to quickly show you is the console output. So there's a lot of output that I do here; I know, I think better more than less. But there's also a link to the bridge. So once I basically tell Keptn "hey, start the evaluation", I get the link here, and then I can also click it from here. And I'm sure I'll find a better way to make this link more visible in Jenkins itself.
A
But it's the same thing, linked to this particular build screen. That's all great. All right, next use case: performance-driven culture, providing performance as a self-service. So, what I've seen is that a lot of people like the idea of performance as a self-service; however (I quickly need to have a sip here), there are a lot of questions, right? If you build this yourself: where do these tests run? How much hardware is needed? How do we stream the metrics? I mean, there are a lot of great do-it-yourself guides out there.
A
So what we have done with Keptn is: we already have available a Kubernetes environment that we run, basically an expandable, on-demand environment. And as we have from the beginning implemented a test-driven approach to our delivery, where we have integrated tools like JMeter or NeoLoad, we said: well, we can actually give you a service where you use Keptn not only for the evaluation, but also for test execution and evaluation. So that means, if you have a Jenkins pipeline that already deploys, that deploys your app into a certain environment,
A
what you can then do is actually trigger Keptn and say: hey, I have deployed my app on this URL, now go off and execute the tests for me. So I can then give Keptn my test scripts; I can upload them to the repository. It can be JMeter, it can be NeoLoad; these are the two tools that are currently supported. Keptn then executes the tests for me in this environment, in this infrastructure that we already have, and then we do the evaluation. It happens fully automatically once the tests are done.
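Triggering this from a pipeline boils down to sending a deployment-finished event that carries the deployed URL. A sketch using the Keptn CLI of that era; the event type and payload fields vary by Keptn version, and the project, stage, service, and URL values are placeholders:

```shell
# deployment-finished.json (sketch) tells Keptn where the app lives
cat > deployment-finished.json <<'EOF'
{
  "type": "sh.keptn.events.deployment-finished",
  "source": "jenkins",
  "data": {
    "project": "perfservice",
    "stage": "performance",
    "service": "myservice",
    "testStrategy": "performance_10",
    "deploymentURIPublic": "http://myapp.example.com"
  }
}
EOF

# Keptn picks up the workflow from here: run the tests, then evaluate
keptn send event --file=deployment-finished.json
```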
A
The evaluation kicks in, pulls the metrics for the timeframe of the test, does the SLO validation, and at the end returns the result, and here we go, right? So this is automated, and that means you don't have to necessarily build your own testing infrastructure; have a look at what Keptn can offer. So demo number three that I have for you now is another pipeline, and, you know, I'll actually show you it live, which is much better, but this is how it looks, and this is the result.
A
So let me do this for you. My last pipeline is what I call the Keptn performance-as-a-service. So I assume I have an application deployed anywhere and I have a URL to it, and now I have my pipeline again: keptn_project, keptn_stage, keptn_service. This perf-as-a-service will again be my tag, so when my SLIs are defined, they have to have that tag.
A
So let me just double-check which service it's going to be... so, take a service, and as you can see here, if you're familiar with Dynatrace, this can be anything, any application: Java, .NET, whatever it is. This is going to be my service. So now the cool thing is: I have it running, and now my pipeline allows me to say, okay, what type of test do you want to run? Do you want to run the performance test: performance_10, performance_50, performance_100, or a long one? So here, the JMeter service:
A
there's an extended version now, which we have just promoted into keptn-contrib, that allows you to upload a JMeter script, but not only a script: you can also upload multiple workload configurations, and I'll show you in a second how this looks. Let me just execute the performance_10 again, pick which SLI should be evaluated, and then also the URL. And so, if I click on "Build Now", what should happen? Keptn will automatically trigger the test that sits behind this workload
A
configuration against this URL (because I could obviously run it against different versions of that app), will run the test, and then it will wait a total of 60 minutes until Keptn comes back with "hey, tests are good and the evaluation is here", and it will evaluate these SLIs. So, click on Build. All right. So let me show you, before I go back to the slides to explain visually what happens:
A
let me show you two things. First I want to show you the SLIs that are called perf-test. So you see here: additionally to these five basic metrics that I always have, I have additional indicators for my individual test cases: invoke, conversion, homepage. So: give me the response time of my test case that is called invoke; give me the response time of the test case echo; give me the number of service calls (so I'm testing a microservice app).
A
Give me the number of service calls this app is making to the next layer. I have my queries here, written as Dynatrace queries using these new calculated service metrics, filtering on the test case and so on. Oh, the test already finished, and I also completely forgot: you saw my Slack notification pop up, because I've also integrated Keptn with Slack, so I even get notifications as things are done. And the rest is all straightforward. So these are my metrics; I have my JMeter script here, and I
A
have a new JMeter script, and I have a jmeter.conf.yaml. So this is now the new cool thing where you can basically specify different test strategies: you specify which script should be executed, and then you can pass in a list of parameters to the script. So I'm using it to control the number of virtual users,
A
how often they loop through, the think time, and also the accepted error rate. So I can actually specify that the test should immediately be aborted in case I'm exceeding a certain error rate, because why would I even continue if the system is completely broken? And this is exactly what I just executed within my pipeline here. So let me go through it. Basically, what happens in my pipeline: I do a Keptn init.
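The workload configuration described above lives next to the test scripts. A sketch of such a jmeter.conf.yaml for the extended JMeter service; the field names follow that service's conventions at the time but may have changed, and the strategy names, script path, and values are placeholders:

```yaml
# jmeter/jmeter.conf.yaml (sketch)
spec_version: '0.1.0'
workloads:
  - teststrategy: performance_10
    script: jmeter/load.jmx          # which script to execute
    vuser: 10                        # number of virtual users
    loopcount: 500                   # how often each user loops
    thinktime: 250                   # think time between requests
    acceptederrorrate: 0.05          # abort early above 5% errors
  - teststrategy: performance_100
    script: jmeter/load.jmx
    vuser: 100
    loopcount: 500
    thinktime: 250
    acceptederrorrate: 0.05
```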
A
I give Keptn the URL, so we are sending a so-called deployment-finished event with that URL, and what Keptn will do with this: it knows there's an app deployed, so it picks up the workflow with "hey, the first thing is we need to execute a test". In my case, I have JMeter as the testing tool; you may want to use NeoLoad as well, right? It doesn't matter which tool. I'm using JMeter.
A
It runs the tests, and it will also automatically send the test-start and test-stop events to Dynatrace, plus a deployment event notification. So this is also a best practice with our customers: every time you deploy, maybe a new version of an app, and every time you start or stop a test, send this information to Dynatrace. Once the test is done, the SLI magic happens. This is just a short version of it, but you get the idea now: it pulls the metrics, it calculates the results.
A
It also tells Dynatrace about the results, and it either succeeds or lets the pipeline fail. So I think that's pretty cool. All right, the additional service that I installed here is the extended JMeter version, which you can already try out in case you have installed the standard version. All right, I am almost done, and I know there are questions that came in, so we're almost there. Let me just double-check: so this thing ran, and it failed. Why? Well, the question is: why did it fail? Because I assume the evaluation failed, right?
A
So let me go into my bridge again for this particular build (and again, I will make this link easier). Let's see... and exactly: it failed, because we violated a lot of different metrics, right? The response time was through the roof, and these are all red. Definitely not good. If you look at the charts, these metrics have definitely seen better days. Yeah, there's something wrong here. Again, there's the link back to the information about which build actually executed it in Jenkins. Plus, on the Dynatrace,
A
excuse me, on the Dynatrace side: if I refresh that screen, I also see, hey, we started that performance_10 test. Well, first we get the deployment event, right? This is the first thing, including the link to the application. Then the test started, then the test finished, and then the quality gate was evaluated. So the information is in all tools, fully automatically, and that's the thing that I love about this event-driven approach of Keptn: you can use individual pieces of it. Good.
A
To conclude: if you are interested in trying this out with your own Jenkins, to level it up, check out the Keptn Jenkins shared library; here's the link to it. It's really easy to use. And also check out the tutorial that I just did and used here for my demo; give us feedback as well, but it should be really, really straightforward. Now, the last use case, which I will cover in another webinar, is breaking the monolithic pipeline.
A
As I said, while I'm not going to talk about this today, I still want to show you that you can already try it yourself, because there's a Jenkins service in Keptn that can actually call individual Jenkins pipelines to do different things. So I'll leave this here just as a teaser, if you want to try it out; I'm definitely having another webinar on this. But this is how you can configure it:
A
how you can map and link a Keptn event, let's say for deploy, for test, for evaluate, for promotion, for notification, to individual Jenkins pipelines, so that you can use individual Jenkins pipelines for individual tasks. And now I want to open it up for Q&A, but first I want to ask you for a favor: have a look at our GitHub repos and do us a favor.
A
The way you install Keptn is through "keptn install"; that's just a command, a CLI command, that will then actually deploy an installation job in Kubernetes and then basically installs Keptn from within Kubernetes. I think we already have a feature request for an operator as well. In general, for Keptn itself, if you look at the architecture and what we're doing with shipyards, we are...
A
The way this works is you have to write your own query language, or rather you have to write your own queries, so these SLIs are specific to the monitoring tool underneath, right? That's also why you have individual SLI files in case you have different monitoring tools. So that means, if you look at the Prometheus examples, the SLIs are all PromQL queries; for Dynatrace,
A
these are the things we're thinking of. Awesome. All right, let me see; I want to actually make sure that... I want to show you... yeah, I think this was the last thing I did: I ran the performance-as-a-service. Okay, so this was this one. What I should actually be able to see, if I go back to Dynatrace and if I go to my tag
A
perf-as-a-service (just to make sure that I really got it all covered): oh yeah, here we go. So it was not only the quality gate; yeah, I mean, I showed this earlier. So this is the great thing: it's all automatically forwarded. And for those of you that are on the line (I see people like Bernd and Bill and Brian, all these guys): until now, Keptn could only send these events to Dynatrace in case you had all of these tags from Keptn on it.
A
But thanks to the new extension of the dynatrace-service, you can just upload a so-called dynatrace.conf file, and then, if this file exists, it will use these rules to send events. You can have one tag rule, or there can be multiple. If this doesn't exist, it goes back to the default way, where it assumes you have the Keptn tags on it. But this gives you full flexibility over everything.
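Such a tagging-rule file might look like the following. This is a sketch of the dynatrace-service configuration described above; field names follow that service's conventions, but the tag key and value are placeholders:

```yaml
# dynatrace/dynatrace.conf.yaml (sketch)
spec_version: '0.1.0'
attachRules:
  tagRule:
    - meTypes:
        - SERVICE            # could also be HOST, PROCESS_GROUP_INSTANCE, APPLICATION
      tags:
        - context: CONTEXTLESS
          key: environment   # a custom tag instead of the four keptn_* tags
          value: staging
```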
A
Also, these placeholders are not only limited to $SERVICE or $PROJECT and $STAGE; you can also do $LABEL.build_number, or anything that makes sense, because if you're talking with Keptn, if you're sending Keptn an event like "get the evaluation going" or "run a test", you can also give custom labels to it. So remember what I had in my environment: with my Keptn events I had "build: 7" and the job name, so I can add labels.
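Custom labels can be attached when triggering an evaluation from the CLI, for example like this. The flag names follow the Keptn CLI of that time and the label values are placeholders:

```shell
# Pass build metadata as labels; the Dynatrace integration can then
# resolve placeholders like $LABEL.environment in its tagging rules
keptn send event start-evaluation \
  --project=myproject --stage=performance --service=myservice \
  --timeframe=5m \
  --labels=buildId=7,jobName=perfservice-pipeline,environment=staging
```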
A
Maybe one label will be the environment that you're testing against. If you know from Jenkins in which environment you're deploying, you may just take this and add it as a label to Keptn. Then you can configure your Dynatrace integration and say: hey, you know what, Dynatrace, you are just posting your events to those services that have this particular environment label on it. And it also no longer constrains you to just services: you can also push these events to a host, a process group instance, and an application.
A
Thanks for the feedback. So this one will now run for a little over an hour, and the reason why I like it is because it fills up my dashboard. So this dashboard that I have here (which I promised a lot of folks already that I will also automatically create) gets its tiles filled: the top requests split by response code, and then, here, all of my test metrics. What I also did: I added links from the dashboard to the diagnostic views.
A
So in case you look at the dashboard because something fails, and you say "hey, now I would like to go into the multidimensional analysis view": click here, and that's a link now to my multidimensional analysis view in Dynatrace that automatically filters exactly on my test cases. So that makes it easier. That's why these dashboards with links are extremely powerful.
A
Okay, good. Then I wish you all the best. We have a holiday tomorrow here in Austria, May 1st, which means I'll try to follow the guidance from my wife to not work tomorrow; otherwise I'm not sure what she will do with me. But in case you have a holiday, do enjoy. In case you don't have a holiday, then hopefully you'll enjoy the weekend soon. Thanks, Chris, for the Slack message that I just received from you, and all the best. See you next time, bye.