From YouTube: Dynatrace Cloud Automation Applied with Mike Kobush
Description
Mike Kobush, Performance Engineer at NAIC, walks us through his Dynatrace Cloud Automation setup and shows us:
- how he automates validation of his performance testing results
- how he created SLO-based dashboards for each business feature
- tips & tricks on defining SLIs & SLOs
A
Welcome everyone to a special episode of my Performance Clinics: SLO-driven automation with Cloud Automation by Dynatrace. I have Mike Kobush with me. He has been a long-term Dynatrace user and performance engineer, and one of our early adopters of Cloud Automation, which uses Keptn technology underneath. Mike, thank you so much for being with me today and showing us how you are using Cloud Automation, the lessons you've learned, and how you integrate it with your performance engineering work. So, show us: what do you have?
B
So what I've done here is, in Keptn, I've set up projects for each of my applications. We have a default Dynatrace project, and then I just have a couple of other projects for applications that I'm currently working on. Within these projects I've set up services that correspond to each of the test scripts that I run for each of these applications, and for each of these services I've also built dashboards in Dynatrace.
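For readers who want to mirror this layout, here is a minimal sketch of creating one Keptn project per application and one service per test script via the Keptn CLI. The project, service, and file names are illustrative, and the exact flags can vary between Keptn / Cloud Automation versions.

```python
import subprocess

# Hypothetical names: one Keptn project per application, one service per test script.
PROJECT = "claims-app"                           # illustrative application name
SERVICES = ["login", "case-search", "summary"]   # illustrative test-script names

def run(cmd):
    """Run a Keptn CLI command and fail loudly if it errors."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create the project from a shipyard file (the stage/sequence definition).
run(["keptn", "create", "project", PROJECT, "--shipyard=./shipyard.yaml"])

# Create one service per test script so each script gets its own quality gate.
for service in SERVICES:
    run(["keptn", "create", "service", service, f"--project={PROJECT}"])
```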
B
This is how Keptn does all the calculations: it looks at these dashboards, it looks at all the metrics on the dashboard, and it does its calculations based on SLIs, SLOs, weights, and things like that. So what I'll do here is just show you one of my dashboards that I've started building out; it's starting to get a little bit more robust.
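As a rough illustration of the dashboard-driven approach: Keptn's Dynatrace integration can derive SLI names, pass/warn criteria, and weights from markup appended to dashboard tile titles. The sketch below assumes a `sli=...;pass=...;warning=...;weight=...` style title and simply shows how such a title decomposes; the exact markup and parsing rules depend on your dynatrace-service / Cloud Automation version.

```python
# Assumed example tile title in a "key=value;..." style; not an authoritative format.
def parse_tile_title(title: str) -> dict:
    """Split a dashboard tile title into its display name and SLI/SLO parts."""
    parts = [p.strip() for p in title.split(";")]
    result = {"display_name": parts[0]}
    for part in parts[1:]:
        key, _, value = part.partition("=")
        result[key] = value
    return result

if __name__ == "__main__":
    tile = "Case Search;sli=rt_case_search;pass=<=550;warning=<=850;weight=5"
    print(parse_tile_title(tile))
    # {'display_name': 'Case Search', 'sli': 'rt_case_search',
    #  'pass': '<=550', 'warning': '<=850', 'weight': '5'}
```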
B
I
had
an
sli
which
encompassed
every
test
step
which
didn't
work
for
me.
I
wanted
to
break
out
each
each
test
step
because
each
test
step
has
its
own
sli
for
response
time.
So
this
is
a
login
step.
Launching
my
application.
This
is
clicking
on
a
on
a
link.
This
is
maybe
searching
for
some
data
and
here's
our
summary.
B
This is a case search, and I'm saying I want this to pass if my response time is less than or equal to 550 milliseconds, but warn me if it's over that and still under 850 milliseconds. And I put a weight of five on this, just because I believe that response times should be more heavily weighted than something like CPU time or memory or error rate, so I put a pretty high weight on this.
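Expressed in Keptn's slo.yaml terms, that tile amounts to roughly the following objective. This is a sketch with an assumed SLI name; the pass/warn thresholds and the weight are the ones described above.

```python
import yaml  # PyYAML

# Roughly the case-search objective: pass at <=550, warn up to 850, weight 5
# relative to the other objectives. The SLI name is illustrative; the unit is
# whatever the underlying response-time metric reports.
objective = {
    "sli": "rt_case_search",
    "pass": [{"criteria": ["<=550"]}],
    "warning": [{"criteria": ["<=850"]}],
    "weight": 5,
    "key_sli": False,
}

print(yaml.safe_dump({"objectives": [objective]}, sort_keys=False))
```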
B
So what I can do is run this test step, then come over here to Keptn and trigger an evaluation for it. Once I've run all my tests, I trigger an evaluation for everything, and then I can come over here to Keptn, look at my services, and get a quick overview right here in this pane of how my tests performed, based on all the calculations from my dashboard.
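A minimal sketch of wiring that trigger into a test pipeline, assuming a recent Keptn CLI that supports `keptn trigger evaluation` (older versions used `keptn send event start-evaluation`); the project, stage, and service names are illustrative and the exact flags and time format may vary by version.

```python
import subprocess
from datetime import datetime, timedelta, timezone

# Illustrative coordinates: one evaluation per test script/service, right after the load test.
PROJECT, STAGE, SERVICE = "claims-app", "performance", "case-search"

# Evaluate the window the test just ran in (here: the last 30 minutes).
end = datetime.now(timezone.utc)
start = end - timedelta(minutes=30)

subprocess.run([
    "keptn", "trigger", "evaluation",
    f"--project={PROJECT}",
    f"--stage={STAGE}",
    f"--service={SERVICE}",
    f"--start={start.strftime('%Y-%m-%dT%H:%M:%S')}",
    f"--end={end.strftime('%Y-%m-%dT%H:%M:%S')}",
    "--labels=buildId=1.2.3",   # optional label to tag the run with the build under test
], check=True)
```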
B
So right now I have response time and I have CPU utilization. What Keptn does when I trigger an evaluation is calculate a score from, I think, one to a hundred, and based on that it'll say: did I pass, did I have warnings, and did I have any errors? It's a really cool, quick way to see how my tests performed against our latest build.
B
Our response times did fairly well, so that's good. Based on the CPU, though, Keptn says: hey, this is a failure. This top one is a little bit more robust; I have a bit more criteria in it, and as you can see, I have warnings in this one. So what Keptn does now is say: okay, if I have passes I'm going to score them, and you can see it's 14.29 here; anything that's a warning gets what looks like half a score.
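The scoring described here can be sketched as follows: each objective contributes its full weight on a pass, roughly half on a warning, and nothing on a failure, with the total normalized to a 0-100 score (Keptn's actual implementation may differ in details).

```python
# Sketch of the quality-gate scoring idea described above.
def quality_gate_score(objectives):
    """objectives: list of (weight, result) where result is 'pass', 'warning' or 'fail'."""
    credit = {"pass": 1.0, "warning": 0.5, "fail": 0.0}
    max_points = sum(weight for weight, _ in objectives)
    points = sum(weight * credit[result] for weight, result in objectives)
    return 100.0 * points / max_points

# Seven equally weighted objectives: a pass is worth 100/7 = 14.29 points,
# a warning roughly half of that, a failure nothing.
results = [(1, "pass")] * 5 + [(1, "warning"), (1, "fail")]
print(round(quality_gate_score(results), 2))  # 78.57
```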
B
We can dig further into it and see a heat map, so we can see how we're doing based on previous runs. It looks like I have this case search that failed in the previous build, and now I'm just getting a warning, which is good. CPU failed in the previous two builds and I've got a warning now, so it looks like things are looking up, and response times are good. If I scroll down, we get the same overview here as we did when we hovered over it in the previous screen.
A
Yeah, pretty cool stuff. Hey Mike, I've got two or three clarifying questions. That means you're running a set of tests, I think using LoadRunner, against your application, and then you have things modeled on the Cloud Automation side, which is, you know, Keptn that we run for you as part of our Cloud Automation solution.

You have modeled a service for every test script, because normally you would sit down after the test run, look at certain metrics that come in from LoadRunner and from Dynatrace, different metrics, and then figure out: is it good or not good? And basically this analysis is now fully automated, because you just create a dashboard, put all of your thresholds into that dashboard as a visual quality gate definition, and then you just kick everything off through automation.
B
Yeah, correct. As you know, and as a lot of performance engineers know, running tests takes a long time, and so does the analysis of all the tests: looking at all the graphs, looking at all the data. There are thousands and thousands of data points that come out of every test run, and looking at those, comparing them, and figuring out whether the test was good, what happened, and what went wrong used to take hours. We can now do that with Keptn in a matter of seconds, and all it takes is triggering an evaluation; Keptn, based on the dashboards, will do all the calculations for us. So the more robust our dashboards are, I think, the more we're going to get out of Keptn. I think it's just going to revolutionize the way we do performance engineering going forward.
A
It's
good
to
hear,
and
maybe
one
other
thing,
because
you
said:
there's
a
lot
of
metrics
and
I
think
in
your
case
you
have
some
clear,
slo,
some
clear
objectives
for
some
of
those
metrics,
but
what
cloud
automation
captain
also
allows
you
to
do
is
to
say
I
want
to
compare
a
metric
from
my
current
test
with
the
previous
test
runs.
So
we
can
also
do
relative
comparisons,
and
I
think
that's
also
a
great
feature.
B
Exactly
yeah,
that's
gonna,
that's
gonna,
be
that's
a
big
part
of
this
yeah.
That's
great
and
I'll.
Get
my
dashboard
up
here.
I'll
show
you
what
he's
talking
about
sorry,
I
I
got
out
of
it,
but
we'll
get
back
into
this.
Real,
quick
and
andy
could
probably
speak
a
little
bit
more
towards
this,
but
we
have
a
banner
up
here
that
says.
Okay,
I
want
to
compare
our
scores
with
our
last
passing
test.
B
So
yeah
we
can
there's
a
there's
a
lot
that
we
can
do
with
captain.
It's
pretty
powerful.
A
Yeah
and
I've
also
seen
people
not
only
with
the
last
passing,
but
maybe
with
you
know,
let's
say
a
baseline
across
the
last
five
passing
tests.
So
you
can
then
also
in
the
individual
tiles
that
you
have
down
there
in
the
charts
where
you
put
a
metric
like
response
time
of
a
let's
say,
search,
you
could
say
my
slo
is,
let's
say
400
milliseconds,
but
I
also
want
to
make
sure
that
I'm
not
getting
slower
compared
to
the
previous
five
good
builds
then
plus
10.
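In Keptn's slo.yaml terms, that kind of comparison could look roughly like the sketch below: a baseline built from the last five passing evaluations, plus an objective that must stay both under an absolute threshold and within 10 percent of that baseline. The SLI name and numbers are illustrative.

```python
import yaml  # PyYAML

# Sketch of a comparison strategy: average the last five passing evaluations and
# require both an absolute bound (<=400) and a relative bound (<=+10% vs. baseline).
slo = {
    "comparison": {
        "compare_with": "several_results",
        "number_of_comparison_results": 5,
        "include_result_with_score": "pass",   # only passing runs form the baseline
        "aggregate_function": "avg",
    },
    "objectives": [{
        "sli": "rt_search",
        "pass": [{"criteria": ["<=400", "<=+10%"]}],  # both criteria must hold to pass
        "weight": 1,
    }],
    "total_score": {"pass": "90%", "warning": "75%"},
}

print(yaml.safe_dump(slo, sort_keys=False))
```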
A
So
you
can
also
do
great
regression
analysis
because,
if
you're
faster
than
400
milliseconds
for
a
long
long
time,
but
you
go
from
200
to
220
to
210
to
250
you're
still
under
under
400,
but
you
can
say
you
want
to
be
notified.
If
the
increase
from
one
test
to
another
is
let's
say
more
than
10,
because
then
it's
an
early
warning
signal,
potentially
or
if
it's
growing
over
a
course
of
tests.
So
that's
great
awesome.
Thank
you!
B
Yeah, I hope to be back, and I hope we have more exciting things coming up, more robust dashboards, and that we really just knock this thing out of the park. That's going to be great.
B
Yeah, I'm ready. I think a lot of people will be excited to see this, and I'm excited to talk about it and share our journey of how we've gotten here.