Description
Sachin showed us how Schlumberger integrates Keptn Quality Gates into its Azure DevOps pipelines to validate more than 100 performance SLOs.
Learn more: https://keptn.sh
Get started with tutorials: https://tutorials.keptn.sh
Join us in Slack: https://slack.keptn.sh
Star us on GitHub: https://github.com/keptn/keptn
Follow us on Twitter: https://twitter.com/keptnProject
A
Yeah, so I'm really honored to have you, because we've been working together for quite a while. You are working in a very, let's say, regulated environment where security is a big concern. That's also why, for the people that are joining, we're going to show how you're using Keptn right now. You came up with the title: A Practitioner's Guide to Automated Performance Evaluation.
Please, folks that are online, use the chance to ask questions in the Q&A feature. For folks that are watching this later on, reach out to us. You have our names; I'm sure you'll find us either on LinkedIn, or just reach us on Slack. You'll find Sachin on the Keptn Slack, and you'll find me there too, so reach out in case you have any additional questions that you couldn't ask, maybe because you couldn't join live. Having said that, this is a Keptn users group, and it's from users to users. Sachin, please take it away and walk us through it; I'll try to keep it a little interactive.
B
Sure, thanks Andreas. I'll start with an introduction about myself: I'm a consultant in the app modernization and DevOps area at Atos, working for one of Atos's largest oil and gas services clients. I'm based in Pune, India.
B
The customers for Keptn are the internal business application teams of the client. What we set out to achieve here is automated performance evaluation for these applications: maintaining a history of the performance of builds over a period of time, and making the performance evaluation part of the CI/CD pipeline so that it runs in an automated fashion. That is the problem statement we set out to address. On to the next slide.
B
This is the technology stack we have within the client. For performance testing we have HP LoadRunner and Apache JMeter; we also use Apache Bench for some of the applications. For the CI/CD pipeline we use the Azure DevOps platform, so the application build and release is automated using Azure DevOps.
B
We use Dynatrace monitoring for the applications, and for the performance evaluation we use Keptn Quality Gates. There are other features in Keptn, but this is the one we are using: Keptn Quality Gates.
B
We have both a self-managed and a SaaS instance of Keptn: the self-managed Keptn instance is on version 0.7.2, and the SaaS version is on 0.8.2.
B
We started with Keptn last year, with help from Andreas and other colleagues from the Keptn team, beginning with, I think, version 0.6.2. Initially we tried deploying it on Minikube and other Kubernetes distributions, but the 0.7.2 version that we finally run on premise is on K3s.
B
We recently started exploring the SaaS instance of Keptn, which is also known as Dynatrace Cloud Automation SaaS, and we have one application deployed there; it's still under evaluation for us. So that is how the technology stack looks.
A
Hey, since the first couple of questions came in, can I quickly jump in? You mentioned Apache Bench. Can you quickly give me an overview of how you're using it? Is it just to burst some load against, say, a root URL, or how does this work?
B
Yeah, so we use the Apache Bench tool to load test the application. The command is called ab, and it has options to pass things like the number of requests.
B
We have a YAML script that we use within Azure DevOps: within the CD pipeline we run that YAML script, and it stresses the application with a number of requests using Apache Bench. Mostly, though, we use HP LoadRunner; there is a LoadRunner (LRE) extension that is configured within the Azure DevOps pipeline. Apache JMeter we scripted in a YAML pipeline as well.
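As a rough illustration of the Apache Bench step described here, a shell task along these lines could sit in the CD pipeline. The URL, request count, and concurrency below are placeholders, not the client's actual values, and the command is only printed so the sketch stands on its own.

```shell
#!/bin/sh
# Hypothetical load-burst step using Apache Bench (ab).
# TARGET_URL, REQUESTS and CONCURRENCY are illustrative placeholders.
TARGET_URL="https://app.example.com/"
REQUESTS=1000        # -n: total number of requests to issue
CONCURRENCY=50       # -c: number of requests to run concurrently
AB_CMD="ab -n ${REQUESTS} -c ${CONCURRENCY} ${TARGET_URL}"
echo "Would run: ${AB_CMD}"
# Uncomment to actually generate load (requires apache2-utils):
# ${AB_CMD}
```

Run for real, ab prints requests per second and response-time percentiles, and the traffic shows up as request metrics on the monitored service in Dynatrace.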
B
We stress the application by executing this performance test, which generates a lot of metrics within Dynatrace, and then we kick off the Keptn evaluation using the Keptn quality gate stage, which is part of the Azure DevOps pipeline. So the Keptn quality gate, or rather the triggering of that evaluation, is integrated into the Azure DevOps pipeline.
A
And one more thing, because you just mentioned LoadRunner again, a question came in: are you using LoadRunner in the cloud, or a LoadRunner installation that you manage and run yourselves?
B
Yeah. So, moving on, this is the Dynatrace dashboard. You have two methods: you can script SLI and SLO YAML files, and those files are then used by Keptn for the evaluation, or you can configure a Dynatrace dashboard with your SLIs and SLOs. These are the tiles within that dashboard, and you define what your SLI is. In the example shown on the screen, the SLI we used is gac_rt.
B
GAC is the name of the application, and gac_rt is an SLI for response time. We have pass and warning thresholds here: 1500 milliseconds is the pass threshold and 5000 milliseconds is the warning threshold. And, as you can see, this is defined for the dashboard at a cumulative level: ninety percent is a pass and seventy percent is a warning.
B
If you define this and use the Dynatrace dashboard ID within Keptn, then Keptn will query that specific dashboard, fetch the metrics data using the Dynatrace Metrics API, evaluate it, and tell you whether the SLO evaluation is a pass, a warning, or a failure.
B
We initially used SLI and SLO YAML files, but later found that it's easier to do with the Dynatrace dashboard, so we started using the dashboard.
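For anyone staying with the file-based approach Sachin mentions, the thresholds from the dashboard example (1500 ms pass, 5000 ms warning, 90%/70% total score) would translate into a Keptn slo.yaml roughly like the one written below. This is a sketch against the Keptn 0.7/0.8 SLO file format; verify the field names against your Keptn version, and note that gac_rt would also need a matching entry in the SLI file or Dynatrace dashboard.

```shell
#!/bin/sh
# Sketch: write the SLO file that Keptn's quality gate would evaluate.
# Values mirror the talk (1500 ms pass, 5000 ms warning, 90%/70% score);
# field names follow the Keptn 0.7/0.8 SLO spec and should be verified.
cat > slo.yaml <<'EOF'
spec_version: "1.0"
comparison:
  compare_with: "single_result"
  aggregate_function: avg
objectives:
  - sli: gac_rt              # response-time SLI from the talk
    pass:
      - criteria:
          - "<=1500"         # pass at or below 1500 ms
    warning:
      - criteria:
          - "<=5000"         # warn up to 5000 ms, fail above
total_score:
  pass: "90%"
  warning: "70%"
EOF
echo "wrote slo.yaml"
```

The file would be attached to the Keptn service (for example with keptn add-resource) instead of pointing Keptn at a dashboard ID.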
B
Obviously, when you start with Keptn, you need to tag your application services with the Keptn tags: keptn_project, keptn_service, and keptn_stage. Once you tag the application services, you can evaluate not only service-level metrics but also transaction-level metrics. If you want transaction-level metrics, like the one here, you need to define a calculated service metric within Dynatrace, and a calculated service metric can be defined at the transaction level.
B
I just wanted to show you how our Azure DevOps pipeline looks. As I was saying, we trigger the Keptn evaluation from the Azure DevOps pipeline, and this is our pipeline for one of the applications. We do the deployment of the application, whatever its components are; here we have Angular and API services. Subsequent to that, we kick off the performance test, which stresses the application and generates the metrics within Dynatrace.
B
After that, you kick off the Keptn quality gate, which is the evaluation of your performance using the Keptn API. This is typically where we call the Keptn APIs. Before we start the performance test we record the start time, and at the end the end time, and we pass those times to the Keptn API call.
B
If you look at this particular stage, the Keptn quality gate, we use shell scripting: the shell script has the statements to call the Keptn API.
A
Hey Sachin, two things on this one. First of all, the reason why you're using a shell script here and not the existing Azure DevOps extension (we talked about this in preparation) is that in the regulated environment you are in, it takes a long time to get external tools approved. That's why you actually showed how easy it is to integrate Keptn with just a simple shell script that makes the call to the Keptn API.
B
Yeah, correct. We looked at the Keptn quality gate Azure DevOps extension, but we have security controls in place: the extension needs to be validated and authorized for use within the organization. We thought it better to start with shell scripting while we complete the authorization process for using the extension within the organization.
A
Yeah, and you were kind enough to share the shell script with me. It's basically a script that first triggers the evaluation and then waits until the evaluation is done, because Keptn is an event-driven system: it may take a second, it may take ten seconds until all the data is there. I'm working right now with Jürgen and some others, like Rob and other folks, who have also written their own bash scripts and helper files.
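For readers who want to build something similar, a trigger-and-wait call could look roughly like the sketch below. The Keptn API base URL, token, and project/stage/service names are placeholders, and the two endpoint paths (the control-plane evaluation trigger and the datastore event query) are assumptions based on the Keptn 0.8 API, so check your version's API docs. The script only prints the curl commands so it stands on its own.

```shell
#!/bin/sh
# Sketch of a trigger-and-wait quality-gate call (not the actual script
# from the talk). KEPTN_API, KEPTN_TOKEN and the names below are
# placeholders; endpoint paths follow the Keptn 0.8 API and should be
# verified against your installation.
KEPTN_API="https://keptn.example.com/api"
KEPTN_TOKEN="REDACTED"
PROJECT="gac"
STAGE="quality-gate"
SERVICE="gac-service"
START="2021-06-15T10:00:00Z"   # recorded before the performance test
END="2021-06-15T10:15:00Z"     # recorded after the performance test

PAYLOAD=$(printf '{"start": "%s", "end": "%s"}' "$START" "$END")
TRIGGER_URL="${KEPTN_API}/controlPlane/v1/project/${PROJECT}/stage/${STAGE}/service/${SERVICE}/evaluation"

# 1) Trigger the evaluation; the response carries a keptnContext ID.
echo "curl -s -X POST -H 'x-token: <token>' -d '${PAYLOAD}' ${TRIGGER_URL}"

# 2) Poll until the evaluation.finished event for that context appears,
#    then read .data.evaluation.result (pass / warning / fail).
POLL_URL="${KEPTN_API}/mongodb-datastore/event?keptnContext=<context>&type=sh.keptn.event.evaluation.finished"
echo "curl -s -H 'x-token: <token>' '${POLL_URL}'"
```

The wait loop Andreas describes would simply repeat step 2 with a sleep until the finished event shows up or a timeout is reached.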
A
We'll put these in a dedicated Keptn Sandbox project, so all of the great stuff the community is building around enabling new integrations will be in one central spot, and we'll share yours there as well. That's one thing I wanted to say. And Sachin, we have one more question that just came in, from Brian. He says: Hi Sachin, do you also use a combination of Azure gates and Keptn, or is it just Keptn quality gates?
B
We use just the Keptn quality gates; we don't use Azure gates. You mean approval gates? Is that the question?
B
Yeah, we do have approval gates. Once you run a Keptn quality gate, you come to know the quality of your build, and then you can decide to deploy to the next stage, probably to production. For some of the applications we have that automated, and for some of the other applications there is a manual approval that needs to happen before it goes to production. So yes, that's correct.
B
Yes, other gates are used. All right, moving on to the Keptn Bridge. This is from the SaaS instance, the 0.8.2 version of Keptn. What you see is the GAC service; this is the name of the service for the GAC application. On the right side you can see the evaluation heatmap that was generated within the Keptn Bridge, and on the left side are the gac_rt SLIs.
B
We have a lot of SLIs for this particular application, where we have split the service by dimension, one for each of the transactions. What you don't see in this screenshot is the cumulative score at the bottom, and whether the evaluation is a pass or a warning. All that information is available, and you immediately get to know whether your build is a good build or whether it's not doing well in terms of performance.
B
So there's a lot of good stuff here. If you look at the sequences, you get to know what time the evaluation started and when it completed; you see all of that within the Sequences menu, and you can have multiple projects. This one is for an application that we used to evaluate the SaaS instance of Keptn.
B
I just took a screenshot from there and wanted to show you that.
B
In terms of the number of SLIs, it varies from application to application. For this particular GAC application it's almost 100 SLIs, and that's all decided by the application team together with our testing team, which we call the Test CoE.
B
It's good to explain a little bit of the process side: how we onboard an application and who gets involved in the onboarding process. We have a couple of teams working together on the application performance testing and the Keptn performance evaluation.
B
We have a Testing Center of Excellence, and resources from that team get involved.
B
They do the performance testing, and they talk to the application teams to understand which metrics are important and need to be tested. They come up with the scripts and configure the performance testing within the Azure DevOps pipeline. Then we set up a call with the Testing Center of Excellence as well as the Dynatrace monitoring team, the internal team working at the client site, and we come together and agree on the Dynatrace configuration.
B
We decide on the tagging, configure the Dynatrace dashboard, and tag the application services with the Keptn tags. Then there is the Keptn configuration side.
B
There we have a couple of commands that we run using the CLI: create the project, create the service, create the stage, create the Dynatrace secret, and apply the configuration.
B
We do all of that on the Keptn side, and I think that whole process takes about one hour. So for us, the time it takes to onboard a new application onto Keptn is approximately an hour.
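The CLI steps Sachin lists might look roughly as follows. Project, stage, and service names are placeholders, the flags follow the Keptn 0.7/0.8 CLI and should be double-checked, and the commands are echoed rather than executed so the sketch runs without a Keptn installation.

```shell
#!/bin/sh
# Sketch of per-application onboarding commands (placeholders throughout).
# Commands are echoed; change run() to run() { "$@"; } to execute for real.
run() { echo "$@"; }

# Stages (e.g. quality-gate) are defined in shipyard.yaml rather than
# created by a separate command.
run keptn create project gac --shipyard=shipyard.yaml
run keptn create service gac-service --project=gac
run keptn add-resource --project=gac --stage=quality-gate \
    --service=gac-service --resource=slo.yaml --resourceUri=slo.yaml
# Dynatrace credentials as a secret, then enable monitoring:
run kubectl -n keptn create secret generic dynatrace \
    --from-literal=DT_TENANT='<tenant-id>' --from-literal=DT_API_TOKEN='<api-token>'
run keptn configure monitoring dynatrace --project=gac
```

With the Dynatrace tagging already in place, a handful of commands like these is consistent with the roughly one-hour onboarding Sachin describes.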
A
Hey, there are two questions that are, I think, very similar, so let me just read the first one: assume you have two services that belong to an application and they depend on each other. Do you then put everything into one Keptn project and have SLIs and SLOs that cover the whole thing, or would you define a service in Keptn for each individual service that belongs to the application?
B
What we do in Dynatrace is work out which logic to use to tag the services: maybe a host group, or, if they are already running on a Kubernetes cluster, a namespace. So we use some mechanism like that to tag the services for the application. But yes, we create one Keptn project per application, then do the tagging and create the Dynatrace dashboard.
B
So here, if you look at the GAC service, that's from the Keptn side; from the application side, you could have multiple of those services here.
A
What we're currently doing, and this was one of the feature requests that came in, is to tag SLIs and then also have a filter option. Even though you may evaluate many different SLIs across different services, or even application-layer and infrastructure-layer SLIs, they will have tags in the future. So you can say: yes, I want to run the whole evaluation, but now in the Bridge I want to focus only on, for instance, the front-end service and the back-end service, or, say, the application layer and the infrastructure layer.
B
At this client we have onboarded five applications so far, and, as I was saying, it takes one hour per application to onboard. We have a couple of teams who work on the performance testing, the Keptn evaluation, and the monitoring, and they all come together, have a call, and onboard the application.
A
I have another question that just came in before I say thank you, and hopefully more questions will come. Coming back to the question we had earlier, where you said you have one project per application: what do you do if you have a situation where multiple applications, or Keptn projects, depend on each other? Or are all the applications completely standalone and isolated?
B
Honestly, I haven't come across applications that depend on another application. We have all independent applications as such, and we created one Keptn project and one Keptn service per application.
A
Okay, perfect, thanks for the clarification. Hey Sachin, thank you so much. I know it's been a long journey, and thank you so much for agreeing to do this talk. I know you're working in a very regulated environment, and it was still great that you could show all this data. I know we would have loved to show some live demos, but, as I said, we need to be careful with what we do and make sure that we are not violating any protocols here.
A
And hopefully, well, you have been very active in the community, and I would encourage everyone that is currently listening in and has additional questions: now is still your opportunity to put a question into the Q&A. Afterwards, please join the Keptn Slack channel; you can go to slack.keptn.sh in case you're not yet on the Slack. You can post questions to the channels and we can ping Sachin in case he doesn't see it, or just message him directly.
B
Thanks, thanks Andreas, for your help all through our evaluation and our Keptn journey, I would say.
A
Thank you so much, and I also want to say thanks to your organization, because, while we're not allowed to give the name, I know they are helping us in our CNCF process to reach incubation status. We just launched that process today, and I'm very grateful for that as well, and for the collaboration.
A
Yeah, all right. The audience also says thanks, Sachin, for your clarification. And with this I want to say, I guess, good night to India; enjoy the football, the Euro Cup, for the Europeans; and in the US, I guess you're probably ready for lunch or breakfast, depending on whether you're on the East or the West Coast.