From YouTube: Keptn Community Meeting - April 16th, 2020
Description
Agenda
New place for our tutorials:
https://tutorials.keptn.sh
Automate Service delivery with Atlassian and Keptn
https://community.atlassian.com/t5/Marketplace-Apps-Integrations/Using-AI-and-automation-to-build-resiliency-into-Bitbucket/ba-p/1343165
Outlook
April 30th: Integrating Jenkins with Keptn
May 14th: Azure DevOps with Keptn
Meetups / Keptn presentations
May 5th - Comtrade Quality Quest Meetup (virtual)
May 5th - Stockholm Google Dev Meetup - Keptn Online Workshop
June 1st - PerfGuild Online
June 25th - Dynatrace User Group Benelux
July 1st - Cloud Native London
Learn more https://keptn.sh
A
Okay! Well, thanks for joining, and also thanks to those who will watch this later on YouTube. Like every week, we use the community meeting to talk about new things the community has built, new use cases, and things that came out of the core development team. Today we have one big presentation on how to automate service delivery with Atlassian and Keptn. It's really great to have these guys here: Ian from Atlassian, hello, welcome, and thanks for joining, and Rob from Dynatrace, who in collaboration with Ian has worked out the integration. They will show it to us later in a live demo, and they also have a couple of slides prepared. But before we kick it off, I want to highlight another item that Jürgen has been working on pretty hard over the last couple of days and weeks: there is a new tutorials page for Keptn. So, what have we heard from a feedback perspective?
A
Obviously, the feedback was about content for tutorials and documentation. What we learned from that, and thanks to your feedback, is that we have separated the tutorials into their own channel: it's now tutorials.keptn.sh, alongside the documentation. While we haven't refactored the documentation yet, you will see that the documentation will also change.
A
So there's a clear separation: the tutorials are where you learn different use cases end to end, whether you want to do a full tour, a quality gate, self-healing, or performance as a self-service on the different platforms. You go there and learn how it works with the sample apps we've prepared for you. And then, if you have specific questions, okay, how does the CLI work? How does creating a project work? How can I adapt this to my own project?
A
That is when you would go to the documentation. But please check out the new tutorials page, and special thanks to Jürgen, because he put a lot of effort in there to get this going. And I have yet to deliver a tutorial, I know; I promised you, you will get one soon on performance as a self-service. I'm just a little bit behind my own schedule, but it will be there soon. Alright, so that was the first big thing. The second thing now is really that I want to hand it over to our guests.
B
Great, so yeah, we're excited to be part of this. We had a short introduction, right: I'm Rob Jahn, I'm a technical partner manager, and I work closely with our strategic partners like Atlassian. So we've been working together to really showcase Bitbucket Pipelines and how we can integrate these SLO-based quality validations into the workflow. Ian, do you want to do one quick introduction of yourself?
B
Great, all right. So we've prepared a couple of slides and then we're going to get into a live demo. This is one of the main use cases within Keptn, and one of the drivers behind it: we're really trying to move towards microservices to deliver software, and part of that is using more modern toolsets, including Bitbucket Pipelines and the cloud.
B
You
know
here
to
show
how
we
can
do
that
and
there's
a
real,
great
synergy
between
the
two
products
here,
where
we
have.
You
know
a
modern,
pipelining
tool,
bitbucket
pipelines,
which
is
you
know
in
the
cloud
able
to
deploy
code.
You
know
you'll
see
later
when
we
demo
it.
These
are
all
kind
of
docker
eyes
steps,
I'm,
really
powerful
and
scalable,
and
then
we
combine
that
with
the
SLO
quality
gates
in
captain
and
that's
where
we're
going
to
demo
this
today
and
just
as
a
quick
overview.
B
What we've implemented and are going to demo: Bitbucket has the ability to package some pipeline tasks into what they call pipes, so we can build these into a Dockerized step. As you can see at the bottom here, when you incorporate this into your bitbucket-pipelines.yml pipeline definition, it's really just calling that out. We've published these as official pipes in the Atlassian directory.
B
You
can
just
search
and
find
these
automatically
and
so
like.
If
I'm
in
my
code,
editor
I
can
search
for
dynaTrace
this
way,
but
you're
not
limited
to
do
an
official
pipe.
So
anyone
can
build
a
pipe.
You
can
just
put
it
as
your
docker
image
and
then
here
on
the
on
the
line
say
43
you
would,
it
would
actually
just
say,
like
docker
blah
blah
blah
where
your
image
is.
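As a sketch of what such a step looks like, here is an illustrative bitbucket-pipelines.yml fragment. The pipe names, versions, and variable names below are placeholders patterned after the repositories mentioned in the talk; check the Bitbucket pipes directory for the real coordinates.

```yaml
# Illustrative only, not the exact demo file.
pipelines:
  default:
    - step:
        name: Notify Dynatrace of the deployment
        script:
          # Official pipe, resolved from the Bitbucket pipes directory
          - pipe: dynatrace/push-event:0.1.0      # placeholder name/version
            variables:
              DT_URL: $DT_URL                     # repository variable
              DT_API_TOKEN: $DT_API_TOKEN         # secured repository variable
    - step:
        name: Custom (unofficial) pipe
        script:
          # Any Docker image you publish can act as a pipe
          - pipe: docker://myaccount/my-pipe:1.0.0
            variables:
              SOME_VAR: "value"
```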
B
But we're making a commitment at Dynatrace to support some core functions, like the push events, which we'll see, as well as the Keptn one, which we're also going to see. There's a link here at the top, so you can search for that. But there are a number of ways of doing integrations. Ian, do you want to add any more on the architecture of Bitbucket and pipes?
C
I think you've covered it pretty well. Bitbucket Pipelines is a cloud-based CI/CD system, and it leans heavily on Docker for helping manage the build environment. The pipes are just a way to encapsulate all of that, and this helps us get Dynatrace-specific tasks into Bitbucket Pipelines.
B
Cool, all right. Just as a refresher for people who may not be familiar with quality gates in Keptn: what really drives this whole thing is a declarative file. Just like we do configuration as code and infrastructure definitions as code, we want validation to be code as well. So Keptn has defined a specification file, and I'll walk through that real quick.
B
As part of setting up the pipeline and the project, we register it within Keptn, and we'll show how that works. Then we have these files. In this case we're looking at the service-level objective file, where we define the types of service-level indicators that we want to measure for our tests. So, walking through it: we're going to simulate, say, a 95th-percentile response time, and our system would be gathering this data, in our case from Dynatrace.
B
Then we build up other metrics, like, say, the SQL statements that are taking place, which could have a similar measurement. One of the neat things about quality gates is that a criterion can be a static threshold, as you can see, as well as a build-to-build comparison. So you could say: over the last X number of runs, as long as I was within 2% of previous runs, I'm good. And then, at the end of this, you have a total score.
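A minimal slo.yaml along the lines described here might look like this sketch, with one static threshold and one build-to-build criterion. The field names follow the Keptn SLO specification of that era; the concrete values are illustrative.

```yaml
spec_version: '0.1.0'
comparison:
  compare_with: "several_results"
  number_of_comparison_results: 3     # "over the last X runs"
  include_result_with_score: "pass"
  aggregate_function: avg
objectives:
  - sli: response_time_p95
    pass:
      - criteria:
          - "<=+2%"    # build-to-build: within 2% of previous runs
          - "<600"     # static threshold
    warning:
      - criteria:
          - "<=800"
    weight: 1
  - sli: error_rate
    pass:
      - criteria:
          - "<=1"
total_score:
  pass: "90%"
  warning: "75%"
```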
B
So
you
have
based
on
the
sum
of
all
these
scores,
then
you're
going
to
get
whether
the
overall
thing
passes
or
fails
and
I
think
that's
where
we
call
it
the
red
light,
green
light,
so
I
think
so,
hopefully,
that
that
explains
kind
of
quickly
that
concept,
so
it's
really
powerful
and
the
way
we
we
also
are
combining
this
with
another.
Just
concept
and
I
think
I'll
show
how
it
all
ties
together.
B
Another
thing
that
we
built
a
pipe
for
is
dynaTrace
information
events,
so
dynaTrace
is
a
way
that
we
can
kind
of
integrate,
dynaTrace
the
platform
into
other
tools
in
the
ecosystem,
whether
it's
a
you
know,
CI
CD,
pipelining
tool.
In
this
example,
we
can-
or
you
know,
infrastructure
code
deployment
changes.
We
can
inform
dynaTrace
for
things
that
took
place
in
the
environment
and
so
within
the
core
product.
This
is
all
in
the
core
part
of
dynaTrace.
B
These
types
of
events
can
show
up
so
now
you
can
have
kind
of
context
for
what
was
happening
fork,
employment
event
and
all
these
things
are
tied
to
the
specific
entities
that
that
you
want
to.
You
know
associate
this
event
for,
but
then
the
power
of
this
is
not
only
can
I
see
that
this
event
took
place
and
it's
easy
to
say:
hey
I
saw
a
performance
problem
at
this
time
in
oh,
there
were
also
was
a
deployment
going
on.
B
It
also
is
informing
the
dynaTrace
AI
engine,
and
so,
as
the
the
the
AI
engine
is
calculating
the
problem
and
what
the
root
cause
is,
it
can
associate
those
events
as
part
of
the
event
history
of
that
timeframe,
so
that
you
know
that
this.
In
this
example,
you
know
this
carts
was
having
a
failure,
but,
oh
by
the
way,
there
was
also
a
configuration
change
that
may
be
related
to
the
root
cause
of
this
problem.
B
So
it
becomes
a
pretty
powerful
kind
of
way
to
kind
of
tie
this
all
together
and
if
you
notice
on
the
bottom
and
I'll
demo
this
that
we
can
put
into
the
event
metadata
hyperlinks.
So
we
can
put
the
link
right
back
to
the
bitbucket
pipeline,
which
is
what
we
built
to
say.
This
is
the
actual
job
that
ran
that
deployment
or
config
change.
That,
then
may
be
the
result
of
the
problem.
I
have
okay,
so
let's
just
kind
of
tie
this
into
just
a
quick
conceptual
flow
and
then
we'll
actually
demo
all
this
stuff.
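An information event with a back-link of the kind described can be pushed with a single REST call. This sketch uses the classic Dynatrace events API (v1); the tag names, token variable, and back-link URL are assumptions for illustration, not the demo's exact values.

```shell
# Sketch: push a CUSTOM_DEPLOYMENT event to Dynatrace.
# DT_URL, DT_API_TOKEN, tags, and the ciBackLink are placeholders.
curl -X POST "$DT_URL/api/v1/events" \
  -H "Authorization: Api-Token $DT_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "eventType": "CUSTOM_DEPLOYMENT",
    "attachRules": {
      "tagRule": [{
        "meTypes": ["SERVICE"],
        "tags": [
          {"context": "CONTEXTLESS", "key": "app", "value": "bitbucket-demo"},
          {"context": "CONTEXTLESS", "key": "environment", "value": "staging"}
        ]
      }]
    },
    "deploymentName": "simplenodeservice",
    "deploymentVersion": "2",
    "source": "Bitbucket Pipelines",
    "ciBackLink": "https://bitbucket.org/<workspace>/<repo>/addon/pipelines/home#!/results/140"
  }'
```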
B
So
the
idea
is
like
a
developer.
Would
you
know
typically
start
with
a
backlog
item,
make
a
JIRA
ticket
log
that
item
and
then
they
would.
You
know,
make
a
feature
branch
to
work
with
that
black
wall
guy
to
make
their
code
change
and
then,
as
part
of
that
code
change
they
could
you
know
back
in
or
refine
the
service
level
objective
files
that
say
you
know
I
want
my
micro
service
to
have
this
or
you
know,
I'm.
Looking
for
a
performance
improvement,
you
know
I
want
to
adjust.
B
My
SLA
is
to
be
tighter,
for
example.
So
then
I
could
push
my
code
and
now
it's
triggering
this
into
the
bitbucket
pipeline
code,
so
automatic
trigger
from
the
push
and
then
typical
flow
than
that.
What
we
built
in
the
demo
is
that
we're
going
to,
in
our
case,
we're
building
a
dock,
arised
micro
service
that
we're
going
to
push
into
a
docker
hub
we're
going
to
deploy
this
into
a
kubernetes
environment.
Dynatrace
is
in
that
environment
automatically
monitoring
that
application
under
test,
then
we're
going
to
push
these
push.
B
Then we push the events I just described: the Dynatrace deployment event, to let it know that a deployment took place. We run a performance test, which collects our data. We push another event to say a performance test took place within this time frame. Then we call the Keptn quality gate for that test's start time, end time, and time frame.
B
How did my service levels look? That is where we can stop or fail the pipeline. And along the way, because Keptn has integrations, and Bitbucket has integrations, to notification systems like Slack and Teams, all of this data (the pipeline running, the pipeline failing, the Keptn quality gates failing or passing) will also get posted to the Slack channel or Teams channel that you've configured. Okay, so with that I'm going to go ahead and demo.
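Put together, the conceptual flow just described maps onto a pipeline roughly like this sketch. It is not the demo's actual file; the pipe coordinates, image names, scripts, and variables are placeholders.

```yaml
# Illustrative end-to-end bitbucket-pipelines.yml for the flow above.
pipelines:
  branches:
    master:
      - step:
          name: Build and push image
          services:
            - docker
          script:
            - docker build -t myaccount/simplenodeservice:$BITBUCKET_BUILD_NUMBER .
            - docker push myaccount/simplenodeservice:$BITBUCKET_BUILD_NUMBER
      - step:
          name: Deploy to Kubernetes (staging)
          deployment: staging
          script:
            - kubectl apply -f k8s/staging/    # assumes kubeconfig is set up
      - step:
          name: Inform Dynatrace and run load test
          script:
            - pipe: dynatrace/push-event:0.1.0       # placeholder coordinates
              variables:
                DT_URL: $DT_URL
                DT_API_TOKEN: $DT_API_TOKEN
            - ./run-load-test.sh "$APP_URL"          # placeholder script
      - step:
          name: Keptn quality gate
          script:
            - pipe: keptn/keptn-quality-gate:0.1.0   # placeholder coordinates
              variables:
                KEPTN_API_URL: $KEPTN_API_URL
                KEPTN_API_TOKEN: $KEPTN_API_TOKEN
                PROJECT: bitbucket-demo
                SERVICE: simplenodeservice
                STAGE: staging
```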
B
Here is what this actually looks like, so you can see that it is real. First of all, all this code is available out on Bitbucket. We'll look at the push event first: the push-event code is under the Dynatrace account, and there's a pretty long README file.
B
It explains how all this works, with examples and things like that. Then, under the Keptn account, we've created the keptn-quality-gate pipe, which also has a pretty long README file. And for our sample app, which is also public, there's a dynatrace-demos account where we actually have this repo, with all these screenshots and a very long description of all these things.
B
As we do the push events and call the quality gate, all of this is described there. I'm not going to get into all of the details today, but there's also a setup README file that covers all of the setup. I'll probably be working with the Keptn team to maybe put this into the tutorials, because a lot of the setup, just to get Keptn installed, is very similar to those two tutorials; then we layer the Bitbucket pipeline on top of it.
B
So let's go ahead and fire off a demo. Let me show the sample app that we have in another tab. In here I have a Kubernetes environment, and I have to give all the credit to Andi for this approach. We have a simple application running in Kubernetes, with two namespaces.
B
This
is
the
name
of
my
name's
namespace
bitbucket
demo
staging
have
another
namespace
bitbucket
demo
production,
and
so
then
these
these
are
deployed
out
in
that
environment,
so
we're
currently
running
version
1,
and
so
just
as
for
the
demo
purposes,
I'm
going
to,
we
have
an
applicant
go
back
to
this.
We
have
a
behavior
built
into
the
app
it's
a
node
application.
All
the
code
is
in
that
repo,
where,
if
I'm
on
version,
1
I
have
normal
behavior
and
then,
if
I
have
version,
2
I
have
problems
and
of
failures.
B
And now, back on the dynatrace-demos repo: if we click into Pipelines, this is the area of Bitbucket where executing pipelines run. You can see it automatically triggered and is now running this "update to version 2" commit. In this scenario I'm not doing a feature branch; I'm working right off master, just for demonstration purposes. But now all of the steps in my pipeline are executing.
B
So
the
first
thing
it's
doing
is
you
know,
grabbing
some
variables
for
the
URL
in
my
environment
and
actually,
while
I
do
that.
I'll
just
show
this
another
feature.
So
in
in
bitbucket,
you
can
define
variables
in
multiple
ways.
One
of
the
ways
you
can
define
something
is
in
a
built-in
type
called
deployments,
and
so
what
I've
done?
In
my
my
case
for
the
deployment
configuration
each
of
these
deployment
areas
can
have
environment,
specific
URLs.
So
in
this
case
my
app
URL
I've
kind
of
configured,
my
environment.
B
So
this
is
my
URL
to
the
the
DEM
development
and
environment
staging.
You
know
everything's
the
same
with
the
staging,
and
so
the
pipeline
will
based
on
the
environment.
It's
running
in
well,
then,
can
pull
these
environment
specific
variables?
You
can
also
have
kind
of
global
variables,
which
are
which
are
called
repository
variables.
B
So
these
are
all
of
the
the
variables
that
are
specific
to
this
repo
and
then
a
red
end
kind
of
dynamically
on
the
fly
and
you
can
see
you
can
make
them
secret,
so
tokens
can
be,
you
know,
masked
and
will
not
show
up
in
logs
and
things
like
that.
So
then,
my
pipeline,
let's
go
back
and
see
how
it's
doing
it's
now
running,
and
so
now
what
it's
done
is
it's.
It's
set
these
variables
and
if
I
click
on
it,
you
can
see
what
it's
actually
doing.
B
So,
in
my
case,
all
it
was
really
doing
was
saying
I'm
in
this
by
staging
environment,
here's
my
app
URL
and
then
as
I'm
building
my
code,
it's
doing
a
series
of
commands,
but
essentially
what
it's
it's
doing
at
the
end
of
the
day
here
is
using
that
image.
Building
it
pushing
it
into
my
my
registry,
which
is
docker
hub
and
then
now
I'm
actually
deploying
this
thing,
and
what
you
can
see
here,
kind
of
while
it's
running
is
certain
certain
commands
in
pipelines,
like
are
all
run
in
docker
steps.
B
Only
just
in
a
second
you'll
see
it
pull
the
image
down
and
just
refresh
it
yeah
sorry,
it
already
went
past
it.
So
when
it's
when
it's
doing
a
particular
step,
it
will
be
pulling
down
the
you
know
the
code
that
it
needs
to
run
this
particular
step,
and
in
this
this
one
I
am
taking
the
kubernetes
file
and
I'm
updating
the
image
that
I'm
going
to
deploy.
B
One second, sometimes I have to refresh it. Yep, there you go. As part of the build setup, you can see it needed to use the Dynatrace pipe; it's downloading it, and then it's actually invoking the Dynatrace Bitbucket push event, and so on. Here, sorry, I apologize, this is where it's actually pulling the Docker image, and then you can see the code output, the kind of debug output, that I put in here.
B
So
as
I'm
executing
the
pipe
I
can
see
the
URL
I'm
hitting
the
post
event,
the
event
properties,
and
then
this
is
the
the
internal
IDs
from
dynaTrace.
So
if
we
look
back
and
dynaTrace,
it
should
be
synced
just
refresh
my
page
here
in
dynaTrace,
so
the
pipeline
is
still
running,
but
what
I
should
see
now
is
there
we
go?
Let
me
zoom
in
just
to
touch
so
you
can
see
so
in
dynaTrace.
B
If
you
haven't
seen
dynaTrace
dynaTrace,
is
you
know
it's
it's
it's
showing
this
service
and
this
on
this
sort
of
dashboard,
where
I've
established
this
service,
it's
a
node.js
application
and
have
associated
a
number
of
tags
with
it
and
we'll
get
in
all
details
of
that.
But
this
is
how,
when
we
use
the
api's
I
can
say
send
an
event
to
this
application
with
this
name
with
this
service
with
this
stage
and
that
in
this
particular
event,
which
I
kind
of
showed,
the
screenshot
of
we'll
now
show
this
here.
B
So
now,
this
deployment
event-
and
you
can
see
here,
I
now
have
we've
built
the
plug-in
where
all
of
the
metadata,
as
we
made
just
a
bit
more,
will
come
in
from
that
specific
job.
So
this
one
we're
running,
which
is
simple:
node
service,
pipeline
job
140.
Let's
live
demo
here,
let's
go
back
and
see
yep,
so
this
is
number
of
140
perfect,
and
so,
if
I
were
to
click
on
this,
it
will
open
up
a
new
tab
and
immediately
jump
to
that
pipeline.
So
that's
where
you
would
know
hey
you
know.
B
Something
was
happening
in
this.
Annotation
event
is
something
that
we're
doing
for
the
load
test.
So
now,
while
the
load
test
is
running,
it's
a
very
short
load
test.
This
also
has
similar
data
to
let
you
see,
you
know
how
that
load
test
ran.
So
let's
go
back
the
pipeline
Mabel,
so
the
load
test
will
take
about
a
minute
should
be
done.
We
refresh
the
screen
and
then
it
moves
on
to
the
quality
gate,
which
is
the
second
thing
we
want
to
demo
so
right.
B
So
here
we
are
calling
the
quality
gate
and
again
it's
downloading
that
docker
file,
captain
quality
gate,
blah
blah
blah
and
then
now
it's
invoking
the
quality
gauge.
So
the
first
thing
the
quality
does
is
sends
an
evaluation
event
to
captain
for
the
time
frame
for
in
these
the
project
service
stage,
those
map
to
the
the
tags
that
I
was
interested
in
I've
also
sent
in
labels
for
the
bitbucket
pipeline,
and
then
it
told
it
where
it's
coming
from.
B
So
if
I
go
to
the
captain
bridge,
which
is
the
UI
for
captain,
you
can
see
I've
already
registered
this
sample
project.
So,
in
my
case,
only
a
bitbucket
demo
and
you
can
see
that
these
start
evaluation
events
are
taking
place
so
in
here
I
built
labels,
for
this
is
build
140.
It
came
from
bitbucket
and
it's
now
starting
to
do
the
evaluation.
If
I
keep
refreshing
freshing,
it
usually
comes
within
about
a
minute
and
up
retrieval
done.
B
They
go
me
refresh
it
again
and
we
should
hopefully
get
our
score
here
on
this
next
refresh
there
we
go
and
I
failed.
Alright,
something
failed.
So
so
my
evaluation
was
done.
You
can
see
what
did
I
violate
violated,
something
this
was
my
error
rate,
which
is
which
is
perfect.
So
since
I
deployed
version
two,
this
application
has
built-in
errors,
and
so
I
forced
this
to
fail,
because
the
app
code
was
having
a
high
error
rate,
and
so
we
could
see
that's
the
behavior
we
actually
wanted.
So
if
I
go
back
to
here
and
refresh.
B
Right
so
this
so
also
in
the
debug
output,
we
can
see
that
after
it
kept
waiting
for
the
results,
the
avail
that
vote
results
became
ready
and
I
could
see.
My
by
my
violation
of
my
response
time
was
was
it
was
fine
or
no
no
violation,
and
then,
if
I
get
down
to
my
error,
if
I
can
scroll
somewhere
near
there
somewhere
anyway,
well
there's
an
error
rate
in
here
right.
B
So
then
I,
my
overall
status
was
a
fail
so
so
built
into
the
pipe
is
if
I
get
a
status
fail,
it
will
actually
stop
the
pipeline,
it
wouldn't
die.
We
have
a
feature
in
the
code
where
you
can
also
say:
I,
don't
want
the
the
the
step
to
fail
and
then
I
want
to
maybe
manually
parse
this
and
that's
sort
of
in
our
in
the
readme.
B
There's
both
those
scenarios,
let
let
the
pipe
determine
whether
you
pass
or
fail
based
on
the
captain
result
or
you
can
just
take
this
whole
result
and
then
parse
it
as
a
secondary
step
to
decide.
You
know
your
own,
maybe
custom
logic
for
determining
whether
you're
past
your
fail
or
not.
So
then,
back
on,
so
then
what
we
prevent
it
now
is
this
rolling
into
production,
so
I'm
gonna
I'm,
going
to
kick
this
off,
we're
probably
run
out
of
time
or
maybe
I.
Can
you
do
this
to
answer
some
questions?
B
In my case, and I'll show you the mechanics of this, I've built a separate repo that stores all of these particular SLO files, and I've registered them into Keptn. As for where they come from: maybe I'm not answering this quickly, so let me step back. In the sample app I have a folder called scripts, and inside there are these Keptn files. These are my setup files that I needed for the Keptn setup.
B
So
the
first
thing
was:
when
you
onboard
a
project
into
captain,
you
need
to
give
it
a
shipyard
file
which,
in
this
case,
is
just
defining
my
three
logical
stages
of
things
and
I
wanted
to
make
this
map.
My
three
kind
of
tiered
environment,
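A shipyard for three logical stages can be as small as this sketch. The format follows the 0.6-era Keptn shipyard; the stage names and deployment strategies are illustrative, not necessarily the demo's own.

```yaml
stages:
  - name: "dev"
    deployment_strategy: "direct"
  - name: "staging"
    deployment_strategy: "direct"
  - name: "production"
    deployment_strategy: "direct"
```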
Then, for each of those environments, I have two files: an SLO file per environment and an SLI file per environment. If we start with the indicators, let's look at staging, since that's the stage we're looking at. This is the file.
B
Its
captain
has
two
different
files.
One
is
the
indicator,
and
these
are
all
the
metrics
that
I
want
to
contain.
All
of
these
are
coming
from
dynaTrace
in
this
example.
So
this
this
sort
of
syntax
maps
to
the
way
you
query
dynaTrace
using
the
metrics
API,
and
you
can
see
here,
I've
customized,
each
of
the
tags
that
work
for
my
my
particular
environment,
so
I've
kind
of
overridden.
B
You
know
captain's
default
behavior,
where
it's
expecting
a
captain
project,
captain
stage,
captain
service,
so
in
my
case
I've
overridden
these
tags
to
say
my
and
my
app
its
application
bitbucket
demo.
My
service
is
called
this.
My
stage
is
called
this
and
then
I've
repeated
that
for
each
of
the
metrics
that
I
want
to
capture
as
the
SSL
is,
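An SLI file with overridden tags might look roughly like this sketch. The query syntax follows the Dynatrace SLI service of that era; the metric selectors and tag values are illustrative.

```yaml
spec_version: '1.0'
indicators:
  response_time_p95: "builtin:service.response.time:merge(0):percentile(95)?scope=tag(app:bitbucket-demo),tag(environment:staging)"
  error_rate: "builtin:service.errors.total.rate:merge(0):avg?scope=tag(app:bitbucket-demo),tag(environment:staging)"
```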
Then there are the SLOs: if I look at the SLO file, this is the file you saw in the PowerPoint slide. So now I'm saying: evaluate this service-level indicator, which I defined in the other file, according to these criteria; here's the error-rate one, and here's the total score. Each of these files lives here, and the way I onboard them, as I have it in the setup file, is with the Keptn CLI; let me scroll down.
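The onboarding commands look roughly like this. CLI syntax varies across Keptn versions; the project and service names follow the demo, and the flags follow the 0.6-era CLI.

```shell
# Create the project from the shipyard definition
keptn create project bitbucket-demo --shipyard=./shipyard.yaml

# Register the service in the project
keptn onboard service simplenodeservice --project=bitbucket-demo

# Add the SLO and SLI files for the staging stage
keptn add-resource --project=bitbucket-demo --stage=staging \
  --service=simplenodeservice --resource=./slo.yaml --resourceUri=slo.yaml
keptn add-resource --project=bitbucket-demo --stage=staging \
  --service=simplenodeservice --resource=./sli.yaml \
  --resourceUri=dynatrace/sli.yaml
```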
B
So
then,
if
I
look
in,
it
will
then
make
branches
that
map
to
my
shipyard
file.
They
look
in
staging
I,
can
now
drill
into
and
see
my
SLI
file.
That's
here
and
then
my
a
solo
file,
that's
here
so
once
they're
registered
in
captain
you
edit
them
with
in
the
downstream
repo
or
you
continue
to
push
updates
into
captain
to
re
re.
Add
those
resource
files,
perfect.
B
While you were talking, I pulled up the Keptn API. This is the technique (we can share the code where we did this), but basically: you can use the CLI to register these files; once they're in a repo, you can edit them directly in the repo and it synchronizes; or you can use the Keptn API to directly add these resources. What I've learned about this POST request: you give it the resource name, like slo.yaml, and then the resource content.
B
You
actually
have
to
send
the
actual
contents
of
the
file
as
a
yui
encoded
string,
so
that
was
kind
of
the
getting
real
technical
here,
but
that's
kind
of
one
way
to
implement
it.
So
you
can,
in
your
pipeline,
read
your
own
repo
call
the
post
event
to
push
push
it.
This
is
another
technique
and
then
I
think
over
time
the
captain
team
is
going
to
be
building
in
the
feature
where
these
files
can
be
directly
synced
with
your
your
the
repo
that
actually
has
all
your
code
in
it.
Ok.
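The direct API variant can be sketched like this. The endpoint path and header follow the Keptn configuration API of that era and may differ per installation; the host, token, and file names are placeholders.

```shell
# Sketch: add slo.yaml as a service resource via the Keptn API.
# The resourceContent must be the base64-encoded file contents.
CONTENT=$(base64 < slo.yaml | tr -d '\n')
curl -X POST \
  "https://$KEPTN_HOST/api/configuration-service/v1/project/bitbucket-demo/stage/staging/service/simplenodeservice/resource" \
  -H "x-token: $KEPTN_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"resources\": [{\"resourceURI\": \"slo.yaml\", \"resourceContent\": \"$CONTENT\"}]}"
```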
B
So
while
this
is
running,
let's
see
this
should
have
maybe
we'll
go
back
to
the
demo
here,
I'm,
sorry!
So
now
it's
it's
gone
through
the
quality
gate
step.
So
in
this
case
remember
it's
version
1
and,
if
I
scroll
down
to
the
results
in
this
case
now
the
quality
gate
past
because
version
1
past
version
2
does
not
past
and
if
I
go
back
to
the
bridge.
If
I
refresh
my
bridge,
hopefully
we'll
see
a
green
light
and
here
as
well
yep
right.
B
Otherwise you'd have to do this in Excel, right? I have a lot of development test runs in here, but you can see, if I click on a cell, like the error rate here, it actually shows me that this is the one that failed, and if I click on the last one you can see this is the one that passed. That's really cool, right?
B
If it fails, you know what you need to fix. But you can see here I'm also pushing a Keptn result event, which is great. And what's kind of cool, this is a new feature of the Keptn Bridge: they've added deep links, so now I can follow that hyperlink and go directly to the Keptn evaluation event for that particular run.
B
So
this
is
sort
of
what
we're
seeing
more
modern
tools
doing
is
we
have
an
ecosystem
of
tools
we
don't
want
to
have
to
you,
know
log
in
and
find
and
search
things
you
just
want
to
do
one
click
and
same
thing
and
it
just
the
last
little
short
demo
like
what
was
happening
is
on
my
slack
channel
all
of
these
things,
while
we,
while
we
were
talking,
we're,
also
being
pushed
out
into
my
into
my
slack
channel.
So
this
last
one
here
is,
you
know
my
past.
B
You
can
see
here
actually
then,
because
I've
hooked
up
the
bit
bucket
but
bucket
also
can
push
things.
So
it
says
my
build
140
failed
I
can
see
the
job
that
I
ran.
This
is
the
failure
event
from
the
previous
run.
So
this
is
really
you
know
hopefully
gives
you
a
taste
of
kind
of.
You
know
really
cool
way
that
we
can
integrate
in
these
different
different.
You
know
platforms,
any
other
was
there
another
question:
Andy
I
may
be
running
out
of
time,
but
just
I.
B
Yeah, I think that's a great question. There are many ways you can go; there are two use cases of Keptn. One is using Keptn to manage my whole continuous delivery process, from the deployment to the test execution to the quality gates to the promotion strategies. But in this case we are choosing to show how we can integrate just the quality gate within the pipeline.
B
So
let
me
just
show
the
code
real,
quick,
just
because
I
think
that
will
show
you,
because
a
lot
of
people
have
an
investment
in
their
existing
tool.
So,
whether
it's
you
know
bamboo
in
this
case
or
or
or
or
bitbucket
in
this
case,
alright
gonna
be
a
bamboo
or
it's
really
just
a
rest
call.
So
that's
what
the
power
of
this
is
is
that
for
this
pipeline
that
I
may
have
an
investment
in
that
does
a
lot
of
other
things
see
I
work.
You
know
security
checks.
B
I can now simply add this one step into my pipeline. In my case, and I'm kind of getting to the answer: I want to choose how I build my code, I want to deploy it in my particular way (this is how we can call the deployment events), I want to run my load tests, and then this is really all you need to do to make the quality-gate call. So it's a really easy way to get introduced to Keptn. And remember:
B
you can insert this one step, and, as Andi has shown in another example, and I may look to do it as well for Bitbucket, there's what we call performance as a self-service. Here I'm directly calling the Keptn evaluation event: I'm just invoking the lighthouse service and saying, evaluate this SLO file and tell me if it passes or fails. But we could also wrap it in a few more steps.
B
Let's
push
a
deployment
event
which
could
then
trigger
a
performance
test.
Invoke
some
of
the
other
services
to
push
these
vents
automatically
perform
the
evaluation
automatically
in
account,
you
can
kind
of
wrapper
multiple,
multiple
things
in
sort
of
a
single.
You
know
single.
You
know
performance
testing
and
combined
with
a
quality
gate,
evaluation
kind
of
in
one
shot
yeah,
but
this
is
also
helpful
like
if
you
just
if
I
just
did
a
deployment
say
to
production,
we're
not
running
a
performance
test.
I
want
to
deploy
my
code,
let
some
real
user
traffic
take
place.
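Calling the quality gate directly, outside any pipe, is just that one REST call. This sketch follows the 0.6-era start-evaluation event; the host, token, timestamps, and labels are placeholders.

```shell
# Sketch: trigger a Keptn quality-gate evaluation for a time window.
curl -X POST "https://$KEPTN_HOST/api/v1/event" \
  -H "x-token: $KEPTN_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "sh.keptn.event.start-evaluation",
    "source": "bitbucket",
    "data": {
      "project": "bitbucket-demo",
      "service": "simplenodeservice",
      "stage": "staging",
      "teststrategy": "manual",
      "start": "2020-04-16T10:00:00.000Z",
      "end": "2020-04-16T10:05:00.000Z",
      "labels": {"buildId": "140"}
    }
  }'
```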
B
I'll just leave this up in case people want to take a screenshot; here are some links. You can learn about Bitbucket Pipelines here, and this is the landing page where you can see the Dynatrace integration as well as other ones. I didn't get into it, but I was also using another pipe, the AWS kubectl pipe, so I was able to run my Kubernetes commands with a single pipe that wrapped all of that AWS setup.
A
And we are already at the one-hour mark. Well, thank you so much; I know there's a lot of work that went into this, because not only did you build the Keptn integration, you also built the Dynatrace integration, and thanks for giving us an overview. I know there's much more, also on the operations side; there are integrations with Opsgenie and even more of the tools that Atlassian has. But, as a preview for folks that are interested: in two weeks, on May the fourth,
we will have a so-called Dynatrace Performance Clinic, where I'm happy to have you back, and then we'll dive deeper, especially on the Dynatrace-specific use cases. So that's awesome. Rob, can you do me one more favor? The blog post that you did and put out there, could you put it up again? Sure.
B
This is a great resource. I learned a lot as I was learning about Bitbucket. Not only are there articles and blogs here, but there's also a lot of how-to questions, and you can search the site. It's a very good resource; I learned a lot from it as I was teaching myself this.
A
Alright,
so
I
am
wondering
kind
of
conclude
today's
meeting
with
a
quick
outlook.
So
obviously,
today
we
had
the
integration
with
election.
In
two
weeks
we
are
giving
a
little
tour
through
because
kepner's
obviously
open
event
within
so
being
able
to
integrate,
with
all
sorts
of
tools,
we'll
do
a
session
on
Jenkins
and
on
May
14th
we'll
do
one
on
edge
of
devups
and
also
just
a
shout
out
or
heads
up
on
other
events
where
we
are
going
to
present
captain.
So
you
can
see
here,
May
and
June
and
also
July.
A
We have a couple of events already lined up. This is part of the Keptn community meeting notes, the Google Doc that you can also find on our community page. As Rob said earlier, the best way to get in touch with all of us is through the Slack channel, so in case you have not yet registered on the Slack, just go to the Keptn community page and register. And if you want to get started, if you're fresh, make sure you check out our new tutorials page, tutorials.keptn.sh.
A
This
is
where
you
have
the
initial
set
of
tutorials.
I
can
definitely
suggest
you
doing
the
full
tours
either
on
dynaTrace
or
prometheus,
there's
also
quality
gate
with
Prometheus
already,
and
there
will
be
more
around
performance
and
Rob
I'm
happy
to
hear
that
you
wanna
also
contribute
your
tutorial
to
it
as
well,
so
we
will
have
a
captain
with
with
bitbucket
a
tutorial
here
soon
as
well,
cool,
great,
alright
and
Ian.
Thank
you
as
well.