From YouTube: Release Readiness Validation With Keptn for Austrian Online Banking Software - Keptn User Group
Description
Marco and Andreas work at Raiffeisen Software, which provides banking software for many Austrian financial institutions. In this session they show us how Keptn is used to automate the validation of key SLOs as part of their release process.
Meetup link: https://community.cncf.io/events/details/cncf-keptn-community-presents-release-readiness-validation-with-keptn-for-austrian-online-banking-software/
Learn more: https://keptn.sh
Get started with tutorials: https://tutorials.keptn.sh
Join us in Slack: https://slack.keptn.sh
Star us on Github: https://github.com/keptn/keptn
Follow us on Twitter: https://twitter.com/keptnProject
Sign up to our newsletter: https://bit.ly/KeptnNews
A: Servus! So that's at least the first thing you learn in case you are non-Austrian or non-German speaking: Servus is the way we greet each other, and the nice thing with Servus is that it works for hello and goodbye, and for anything in between as well. Hey Marco.
A: I did mention it earlier, but I want to mention it again: Andreas Lind was supposed to co-present with you. Unfortunately, he cannot make it. I'm sure we'll have you back on the user group again at a later time. Now, remember, everybody that is listening: Keptn user groups are really from users, for users. That means we really want to learn how users are adopting Keptn in various use cases. And Marco, maybe about yourself: can you quickly give an introduction on what exactly you do, beyond being a release automation engineer, just some background on you?
A: And a live demo, because we really want to see what you have been doing. For those of you who are newcomers to the Keptn project, there are a lot of links you can see on the page: the website, follow us on Twitter, star us on GitHub, and we also have a Slack workspace at slack.keptn.sh. We also have a channel on the CNCF Slack workspace, so you can find us there as well; most activity, however, is on our dedicated Keptn Slack. Marco, if you can do me a favor, maybe let's switch to the next slide, because I am kind of using you today as my remote automation tool in terms of slides. Marco, I think it was a little over a year ago when we got on a call and you showed me what you had been doing with Keptn. I said this is a great story, and I then took the time to write a little blog post about how you are using Keptn to automate the release readiness for Raiffeisen Software and the kind of banking software that you guys are building there.
A: But what is interesting for me is that it's been a while now since we wrote the blog, and things have changed. I think back then you were probably using Keptn 0.8-something, or even older. Was it 0-point-something? I don't remember which version you started with.
A: Yeah, but you were one of the early adopters, and I think some bruises came out of that as well, but with this we make Keptn stronger. What I would like to see now, Marco, is where Keptn is today, and then also to walk a little bit through lessons learned and some best practices. We'll go back and forth, but I would like you now to give me a little update on what has happened since we published the blog.
B: Okay, good. First I'll give you a short overview of our continuous test tool stack. As you can see, we are using and providing tools for monitoring, test automation and continuous testing.
B: Each environment has a test project, in this case drp-test and smart-test. The test projects are for checking new metrics, or for checking whether the evaluation still works after a Keptn upgrade, and so on.
A: That's actually already a great best practice, right? Because every time you're playing around and extending your SLIs and SLOs, or, as you said, when you're upgrading, you first want to validate that everything is still valid. This could also be interesting as feedback: creating a test project is one option, but I would probably want to give this back to the product team, to the Keptn team, and see if we could have something like dry runs, test runs.
B: That would be nice, so we do not end up with a bad history from these test runs, yeah.
B: Then we do the Keptn initialize (more on that later), execute the load test, send and trigger the Keptn events, and check our log volumes in Splunk.
A: And maybe, Marco, on this one, just for people that are not aware of it: the big point here, and I will highlight this later on as well, is that Christian is actively working on it and continuously improving it. In case you are an existing Keptn user of that library, make sure to check out the latest pre-release that is out there.
B: Yeah, okay, good. And we also add the resources (the SLO file, the SLI file and the Dynatrace config) again before starting the load test.
A: Cool. So, to recap, here in the pipeline you do the keptn init, then you take a timestamp at the start of the load test, you run the test, you take another timestamp, and then you trigger a Keptn evaluation sequence, basically saying: hey Keptn, I'm done with my load test, I know the start and the end of the time frame, now do your evaluation. And depending on that result, you also set the Jenkins build result based on whatever Keptn says: either green, yellow or red.
B: Yeah, that's correct.
A: Perfect!
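The green/yellow/red mapping described in this recap can be sketched as follows; this is a minimal illustration in Python, not the actual code of the Keptn Jenkins shared library:

```python
# Minimal sketch (not the actual Jenkins library code): translate Keptn's
# pass/warning/fail evaluation result into a Jenkins build result.
def jenkins_result(keptn_result: str) -> str:
    mapping = {
        "pass": "SUCCESS",      # green
        "warning": "UNSTABLE",  # yellow
        "fail": "FAILURE",      # red
    }
    # Fail safe on anything unexpected.
    return mapping.get(keptn_result, "FAILURE")

print(jenkins_result("warning"))  # UNSTABLE
```

The point of the mapping is that a "warning" evaluation still lets the pipeline continue, but marks the build so a human takes a look.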
B
Yeah,
you
can
see
the
status
here
in
the
builds.
Let's
see,
how
does
it
look
like
in
the
captain
bridge?
This
is
our
evaluation.
We
can
open
the
evaluation
board
and.
A: You also have... can you explain what is on the bottom, on the x-axis?
B
These
are
the
ids
from
the
jenkins
builds.
This
is
with
this
id,
I
know
okay,
this
was
jenkin,
builds
number
450,
and
I
have
also
an
link
to
the
jenkins
job
back.
A: Here, and this is also done automatically by the Jenkins library. Folks, in case you're interested in how this works, it's rather straightforward: when you're triggering a Keptn sequence, you can pass so-called labels, and you can see the labels up there. One of the labels that is a bit magic is the buildId label (lowercase b, uppercase I), and the buildId will then be used as the x-axis description by default. If it's not passed in, it's just a timestamp, but you can use something like the job execution ID or whatever else you have. If you integrate this not with Jenkins but with some other tools, and you have, let's say, a build ID or build number, then this is typically the thing to use, and then the job URL. You can not only use labels with a name and a value: if the value is a link, it will also be rendered as a link, so you can click on it and get back to that external tool.
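As a sketch, the labels described above might look like this; the URL and build number are made up for illustration:

```python
# Hypothetical labels attached when triggering a Keptn evaluation sequence.
# "buildId" drives the Bridge's x-axis; a URL-valued label becomes a link.
labels = {
    "buildId": "450",
    "jobUrl": "https://jenkins.example.com/job/loadtest/450/",
}

def x_axis_label(labels: dict, fallback_timestamp: str) -> str:
    # The Bridge falls back to the evaluation timestamp when no buildId is set.
    return labels.get("buildId", fallback_timestamp)

print(x_axis_label(labels, "2022-03-01T10:00:00Z"))  # 450
```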
B: We can look at the failed and the warning SLIs and see: okay, in this case the app signature documents result has failed. I can open the comparison. The relative threshold is okay, it's 1.9 percent slower, that's okay, but our absolute threshold has failed. In this case this is a known issue in this environment, and for me, as the load tester, I can say: okay, noted, we should repair that, but it's not a big problem for the release.
A: Can I ask a quick question here? The SLI name here is rt app signature pushtan documents. Is this basically the response time, I guess, from a particular test step?
B: Yeah, that's the response time of a request from the app services, the signature UI services: the pushTAN documents request.
A: Okay. And then, to explain for people, because you mentioned it nicely: with Keptn we allow you to do two types of comparisons. You can either say a metric should always be, let's say, faster or better than some fixed value, like in your case faster than 500 milliseconds, or you can do a relative comparison, and relative means it is compared with the previous good builds. So with this, it's a nice way of saying: I want to detect regressions, so I should not get more than 20 percent slower than the previous build, but I also want to make sure I'm never crossing a certain absolute threshold. And that's really important, because a lot of people are not aware of this combination that we provide.
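The combination of the two comparison types can be sketched like this; illustrative logic only, not Keptn's actual implementation, and the 20 percent and 600 ms numbers are examples:

```python
# Sketch: a value passes only if it beats BOTH the relative criterion
# (vs. a baseline from previous good builds) and the absolute criterion.
def passes(value_ms: float, baseline_ms: float,
           max_relative_increase: float = 0.20,
           absolute_limit_ms: float = 600.0) -> bool:
    relative_ok = value_ms <= baseline_ms * (1 + max_relative_increase)
    absolute_ok = value_ms < absolute_limit_ms
    return relative_ok and absolute_ok

# Only ~1.9% slower than the baseline, but over the absolute limit: fail,
# like the app-signature SLI in the demo.
print(passes(value_ms=652.0, baseline_ms=640.0))  # False
```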
B
Yeah,
the
the
relative
threshold
compares
with
the
last,
I
think,
with
the
last
three
successful
yeah
successful
evaluations
yeah.
This
is
how
our
captain
evaluation
looks
like
it's
a
big
report,
but
some
some
month
or
some
years
ago.
I
I'm
looking
for
every
single
request
manually.
Okay,
this
is
okay.
Is
it
good?
Is
it
bad?
Yeah
captain
would
be
very
helpful.
Yeah.
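In Keptn's slo.yaml, the comparison against the last three successful evaluations and a combined pass criterion might be expressed roughly like this (a sketch; the indicator name and numbers are illustrative, not the actual Raiffeisen configuration):

```yaml
comparison:
  compare_with: "several_results"
  number_of_comparison_results: 3     # the "last three successful evaluations"
  include_result_with_score: "pass"   # only compare against passed runs
  aggregate_function: "avg"
objectives:
  - sli: "rt_app_signature_pushtan_documents"
    pass:
      - criteria:
          - "<=+20%"   # relative: at most 20% slower than the comparison builds
          - "<600"     # absolute: never slower than 600 ms
    warning:
      - criteria:
          - "<=+50%"
total_score:
  pass: "90%"
  warning: "75%"
```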
A
So
captain
is
automating
for
you,
the
retrieval
of
the
values.
In
your
case
they
come
from
dynatrace,
but
on
the
dynamic
side,
you
have
the
context
from
your
test
steps
because
you
have
the
integration
from
jmeter
to
dyna,
trace
passing
over
context.
This
is
why
you
have
all
of
this
response
time
per
test
step.
I
also
see
pg
cpu.
These
are,
I
guess,
cpu
from
processes
right.
A: But that's great! So basically you run a load test and you want to make sure that, during the period of the load test, you never get into a situation where your application needs to open, and keep open, a lot of database connections for too long. And this is great, because when you run a new test you can see whether the database connection behavior has changed, because your application holds onto a connection for too long before releasing it back to the connection pool.
A: Awesome. And yeah, sometimes PowerPoint online is a little tricky, I have noticed that myself. I think if you want to start from a particular slide... here we go. Yeah, almost there. Hey, quick question.
A: Cool. We could even think about, you know, what's the future: I guess with Keptn we could even automate that at the end, make a call to Confluence, because Confluence pages can also be edited through the Confluence API. I'm pretty sure we could figure that out.
B: Awesome. I think it's very important to create a report in a wiki, Confluence, whatever, because even management does not look at Keptn or Jenkins; they want to see a report. If we could create that report automatically, that would be nice, yeah.
A
We
will
come
back
and
do
another
user
group
once
you
have
managed
it,
because
I
think
that's
a
great
use
case
that
we
can
publish
the
data
to
external
data
sources
and
I
assume
a
lot
of
organizations
are
using
confluence
products
like
it
like
like
confluence.
So
we
should
definitely
add
this
to
the
list.
I'm
not
sure
rob
is
actually
here
as
well.
I
see
rob
you're
listening.
This
could
be
another
cool
use
case.
A
Partnership
with
with
the
legion.
B: We do not... we do not spend much time on this.
A: Yeah, and I saw in your case, and this could also be some great feedback for us on the Keptn side, that most of your SLIs had a relative threshold of twenty percent and fifty percent. That seems to be your default, does it?
B
On
the
response
time
matrix,
the
default
is
10
and
20
percent,
but
but
on
the
technical
matrix
like
cpu
usage
or
suspension
time
is,
would
be
20
and
50
percent.
A
Okay,
because
I'm
wondering
I
know
some
of
the
captain
developers
are
also
on
the
line,
if
we
can
also,
if
you
have
any
stories
for
future,
to
come
up
with
some
defaults
for
certain
categories
of
slice
and
slos.
That
would
be
nice.
A: Okay. Would it then make sense for us to also think about being able to comment in Keptn, maybe even on a result, and sometimes also to overrule the final result, like a human overruling it? You could say: hey, you know, it failed, but the reason it failed is that these two metrics are worse than they should be, and we can ignore them.
A: I want to make sure everybody is still listening in; I see some people like Moritz and Christian are still there. A little result overwriting, and commenting on it: yeah, perfect. And thanks to Christian and Moritz and everybody else for already providing some links, to the Atlassian API (that was from Rob), and to the ability to use either the webhook service to call an API, or even the job executor to call more complex things. So there's definitely enough opportunity to automate exactly what is on the screen here, Marco.
A: In your case, you are using Dynatrace as your observability platform. This is why, as you can see at the top, you specify a query like rt app after sending the auth trigger: that's the name of the SLI, and on the right side the metric selector, which is the Dynatrace query. If you used another data source (for instance, we have a Prometheus service, I know that right now somebody is working on a Datadog service, and I saw that there's a number of other SLI providers, like a New Relic service and a Neoload service), you would still have the name of an SLI and a query that is specific to the tool, and that name is what you can then reference in the slo.yaml. So we have a nice separation of concerns between how do I get the data and what do I do with this data. The latter is what you specify in the slo.yaml, where you have pass and warning criteria, just as you've shown earlier; that is the way you specify it.
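For the Dynatrace case, such an SLI definition might look roughly like this sli.yaml sketch (the indicator name and the metric selector are illustrative, not the actual queries from the demo):

```yaml
spec_version: "1.0"
indicators:
  # SLI name on the left; tool-specific Dynatrace query on the right.
  rt_app_signature_pushtan_documents: "metricSelector=builtin:service.response.time:percentile(90)&entitySelector=type(SERVICE),tag(app-signature)"
```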
A: And Marco, I think in your case you have all of these files, the SLI files, in your code repository, right? And then you are uploading them to Keptn, and with uploading I mean you're using the Keptn add-resource function in Jenkins, to just say: hey Keptn, by the way...
B: Yeah, yeah, from the pipeline.
A: That's perfect, awesome. So that's kind of one example of how they're defined. I think there's another slide that we wanted to highlight.
A: So, the garbage collection metrics: the first one was a response time metric, and this one is garbage collection. I think, Marco, from your perspective as a performance engineer, these metrics further down towards the infrastructure side are extremely important, especially as behavior may change during the load test, correct?
B: Yeah, that's correct.
B
We're
working
near
to
our
data
to
our
data
center
colleagues
and
yeah.
This
is
very
helpful.
A: I guess the answer is clearly yes: grouping them together, right? I mean, you're grouping them right now through the name abbreviations, but I know we already discussed this with Dimitri: providing groups where you can say this SLI belongs to, let's say, front end, back end, or infrastructure, application, or, like in your case, latency, availability, throughput. We wanted to...
A: Groups, yeah, exactly. And then there are one or two more things that I think we wanted to show in the slides, but this is one. And, oh Marco, this is another item on the list that we have. You have a lot of metrics, right, and in your case right now you are specifying very specific metrics, to query exactly one specific metric from Dynatrace.
A: We call that an entity, and when you're deploying a new version, as you can see on the first screen (app service 181, or pfp finance status 185), this also means that in your observability platform you need to adjust the query to really get this particular deployed version. And what we have is: when you're triggering a Keptn sequence, you can add labels to that sequence.
A: You have the option to just create a labels array, and then you can pass the labels on to functions like sendEvaluationEvent.
A: The last thing we had: next steps and outlook. I mean, Marco, we've shown this earlier; I think it was actually Andreas with whom we did some remote implementing last week. Because of the upgrade you did to Keptn 0.12, some of the Jenkins functionality was affected: you were still using the old Jenkins library, and we had to implement the configure monitoring ourselves. But thanks to Christian, who is, I think, still on the call: he just updated and provided a new Jenkins library.
A
I
think
it's
still
currently
in
in
the
pre-release,
but
if
everybody
every
one
of
you
who
is
upgrading
to
a
later
captain
version
like
0.12,
I
think,
is
when
they,
when
the
change
is
relevant
for
for
especially
with
the
dyna
trace
integration,
please
make
sure
you're
using
the
latest
captain
jenkins
library,
there's
some
breaking
there's
one
breaking
change.
The
kept
init
function
no
longer
has
the
monitoring
parameter,
but
it
has
an
additional
function,
called
captain
configure
monitoring
which
you
need
to
call
at
the
end.
A: "Integration with Prometheus alert rules could be very useful as well." Yes, the Prometheus service actually takes the slo.yaml and automatically creates alerting rules at the time you call keptn configure monitoring. So keptn configure monitoring, if you have Prometheus and have installed the Prometheus service, will send the SLO definition to the Prometheus service, and that will then create your alert rules.
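The generated rules are ordinary Prometheus alerting rules; a hand-written sketch of what such a rule can look like (the metric name and thresholds are made up, not what the prometheus-service actually emits):

```yaml
groups:
  - name: keptn-slo-alerts
    rules:
      - alert: response_time_p90_slo_breach
        # Fires when the p90 response time crosses the 600 ms SLO threshold.
        expr: histogram_quantile(0.9, rate(http_request_duration_seconds_bucket[5m])) > 0.6
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "p90 response time above the 600 ms SLO threshold"
```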
A: Because you are sharing the screen you may not be able to see it, but there are more great links in the chat. Marco, is there anything else that you wanted to show, anything else you wanted to mention, especially for people that might be new to Keptn?
A: Just happy, yeah, I'm happy. Thank you, thank you, exactly. So, folks: if you want to follow up on the Keptn project, or if you want to follow up with Marco, or with Andreas, who could not be with us today, here are the names and all the relevant links. I want to keep it open for another two minutes to see if there are any additional questions coming in, so feel free to let us know.
A
And
yeah,
marco,
I
know
this
was
thank
you
for
that.
I
know
it's
always
challenging
to
to
do
presentations
like
this,
especially
in
a
setting
like
this.
When
you
don't
see
people,
even
though
we
got
used
to
presenting,
but
it's
also
not
our
first
language,
and
I
know
you
are
in
your
job
you're,
not
using
english
as
regularly
as
some
of
us
are
doing
so.
This
is
why,
thank
you
so
much.
Thank
you.
A: It should not be the last one, and we have a plan; at least we talked about it. There's an event coming up on open source in finance, so I would love to present there with you, and this is just the first start. All right, let's see, the chat just keeps scrolling, there is even more. Someone has been posting more about JFrog Pipelines: a lot of great examples of labels, of how to specify labels in the different tools. Labels are really powerful, yeah. So I took down...
A: Then the grouping of SLOs, that is a big one, and the other one is the defaults: default pass and warning criteria for certain SLO categories, or maybe rather groups. That would be nice, because you have a lot of the same defaults, and then you overwrite them, but it's mostly a lot of the same defaults. Yeah, good.
A
Hey
and
mike
kobush
is
also
on
that's
also
great
mike.
He
was
presenting
the
previous
user
group
and
also
it
performed
last
week,
and
he
also
likes
the
idea
of
overriding
slo
results,
because
he's
similar
to
you
mike
is
executing
load
tests,
a
lot
using
load
runner
and
then
using
the
quality
gates
to
analyze
the
results,
and
I
think,
if
you
have
flaky
environments
or
maybe
flaky
tests,
you
just
want
to
have
the
ability
to
say
hey.
Actually,
this
should
be
considered
good.