From YouTube: 8. #everyonecancontribute cafe: Keptn
Description
Blog: https://everyonecancontribute.com/post/2020-11-11-cafe-8-keptn/
Website: https://keptn.sh/
Demo repositories:
- https://gitlab.com/checkelmann/keptn-demo
- https://gitlab.com/checkelmann/keptn-templates
- https://gitlab.com/checkelmann/keptn-docker
Insights: https://twitter.com/dnsmichi/status/1326572233830109185
A
Okay, hello everyone, and welcome to our #everyonecancontribute cafe. It's the eighth iteration, and today we hopefully get a deep dive and lots of amazing things about Keptn. I'm very happy to welcome Christian and Alois today, and I think it's probably best to start with a short introduction round. So, as I'm now welcoming Alois: maybe introduce yourself, what you're doing, what you're working on, what your profession is, and then hand over to the next one.
B
Yeah, first of all, thanks for having us today. My name is Alois. I work for Dynatrace, where I'm leading the innovation lab, and I'm also responsible for our open source engagement and our research activities.
B
These days I'm mostly working on everything related to open source and the CNCF; I also used to drive a lot of our W3C activities and OpenTelemetry work. That's pretty much what I'm working on most of the time. I'm also working with great people like Christian here, with whom we collaborate on building innovative solutions, and who also became an open source contributor along the journey.
C
So may I introduce myself: my name is Christian Heckelmann. I'm working as a DevOps engineer at ERT (eResearch Technology) in Estenfeld near Würzburg. There I'm mainly responsible for GitLab, GitLab CI/CD and our Kubernetes environment, and, as Alois already mentioned, I'm working closely with the Keptn team and contributing to Keptn as well.
C
Well, so should I choose the next one, or…?
D
Yeah, I'm Niclas Mietz. I'm currently working as a DevOps engineer; I'm also doing trainings, and I'm doing blockchain on Kubernetes, so mostly all the hot buzzwords. But yeah, that is literally my job description: doing all the stuff, checking that every single application is currently running, running Prometheus for monitoring, the CI/CD for GitLab, and so on. I also contribute a lot to open source, to GitLab and to a lot of other projects, mostly in the cloud native ecosystem.
H
All right, thank you. I'm Abubakar, based in the Netherlands. I'm a developer evangelism program manager at GitLab, a teammate of Michael. Yeah, that aside.
A
Yeah, I'm a developer evangelist at GitLab. I think it's nine months now, eight and a half months, so time is running fast. I'm the crazy guy who had the idea of doing coffee chats about technology, and I'm hoping to learn from you today. I think Alejandro is missing. Yeah.
I
Thank you. I'm a technology account manager at GitLab. I actually support Christian, who's my boss; I'm here as a cheerleader for Christian too, excited to learn from all of you. Thank you.
A
Yeah, I would say the stage is yours. I know that you have prepared demos and slides; please go ahead.
B
Yeah, so I'm more or less here to warm up the audience for Christian, so I'll keep it obviously brief; in German we would say I'm the "Anheizer". He will do a live demo. I was just doing the most crazy thing for KubeCon: six live demos in a row in a single presentation, so I decided to go only for slides this time. PowerPoint can be very challenging too, but yeah.
B
My goal is to keep this really short and just give you some background on what we're actually doing here with Keptn. First of all, Keptn is now a CNCF project. It originally started at Dynatrace as a reimplementation of some of the automation we had internally, and when we rolled it out to customers we saw that a lot of them wanted to reuse it and extend it.
B
People like Christian, and that's why we decided to make it open source. Obviously we wanted to give up some of the control of the project and give people more security; that's why we contributed it to the CNCF. We started the contribution process quite a while ago, back when it was still a sandbox project, and we'll eventually go to incubation going forward.
B
You can find all the details here, and if you wonder what the logo actually means: it's obviously the Kubernetes shape, and these are the captain's stripes of the British Royal Navy. That's how the logo came along; it's not a fish without the back, as some people might think. Just very briefly, the idea behind Keptn is that it supports progressive delivery and operations workflows as a control plane.
B
What you see here on top, and I'm just accelerating this here: when you're pushing a pull request, you have a multi-stage delivery that you want to trigger, and you obviously decide, based on service level objectives, whether you want to deploy to a stage or not, and then do the same thing again for production. The idea is that you can plug in any tool that you want. Obviously, yes, we're coming from Dynatrace.
B
Dynatrace is one choice, but you can do pretty much the same with Prometheus as well, so it's totally tool agnostic what you can do there. The key design goal was that we didn't just want to cover the delivery side of the house; we also wanted to cover the ops automation side of the house. So when something happens later on, when your application is in production and should be running fine, you still want to take care of things, so we wanted to cover the whole thing.
B
Because very often, when you see demos at conferences about remediation, you see the alerting tool, maybe Prometheus or whatever you're using, sending off an alert to a system, and then it's more or less done once it triggers some workflow.
B
But it really goes back and validates whether that action actually helped, and then triggers the next action; that's really what closed-loop remediation stands for: dynamically picking the actions that should be used for remediation purposes, from simple things like cleaning up disks or disabling feature flags, checking whether they actually worked, and in case they did not work, taking other actions or eventually escalating to a human user. Oops.
B
So that's really what we have here in Keptn: it is a control plane. It's not replacing the tools that you already have; think of it as a control plane the same way you would think of a control plane in the networking space. Your networking control plane does not replace your active networking components, but it is used to orchestrate all of those components that you already have in place.
B
When you look at a lot of tools today, they use proprietary APIs, because historically there wasn't any work that said: well, let's standardize what a deployment event actually looks like. That's where we started to create some events and what they can look like, and we are currently working, as part of the Continuous Delivery Foundation interoperability group, to standardize these events. And as mentioned, you can obviously integrate any services that you want; Christian will show this later on.
B
This is obviously true for GitLab, which you can simply plug in there. So, control plane: why do we actually need a control plane? Don't we already have all the tools? Technically we have them, but most of what we have today are hard dependencies in the build scripts and automation that we have, where we call, in individual steps, into our configuration management, into our deployment pieces, and then we configure monitoring.
B
Then we configure testing, then we might throw in some ChatOps and some rollback activities, and we keep adding these scripts that get more and more complicated. The idea behind a control plane was: let's keep the scripts, because they are great, but let's not hard-code them together; let's add some space in here and add eventing in there. So, in short, the idea is: if we deliver microservice applications, don't deliver them using a monolithic approach, but do this using an event-based microservice approach too, so we introduce standardized events in here.
B
This is an example of a deployment event: we want this artifact to be deployed to a certain stage, and the deployment strategy should be blue-green. Based on these events we can have a process definition, and then we just subscribe tools to the processes we want to run there; this is where it falls into that control plane idea. Again, the concept is really copied over from what we do in a networking control plane: we have the application plane on top, which would be our process.
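Such a deployment event is a CloudEvent. A minimal sketch of what one can look like (field names are illustrative of the Keptn 0.x era, not an authoritative spec; the image registry is a placeholder):

```json
{
  "type": "sh.keptn.event.configuration.change",
  "specversion": "0.2",
  "source": "gitlab-ci",
  "contenttype": "application/json",
  "data": {
    "project": "demo",
    "service": "keptn-demo",
    "stage": "staging",
    "deploymentstrategy": "blue_green_service",
    "image": "registry.example.com/demo/app:1.2.3"
  }
}
```

The event only states what should happen (deploy this artifact, blue-green, to this stage); which tool performs it is decided by whatever service subscribes to the event type.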
B
What we want the process to look like. Then we have the control plane that does the actual event orchestration, and then we have, in our case, the execution plane rather than the data plane, which then orchestrates the proper services to move services from the left, so more or less from your repository, all the way to production. So you see there is a lot of overlap between how you route stuff in a network, how the components work in SDNs, and how you do continuous delivery, and we copied a lot of these concepts over.
B
This is totally separated from the process definition, from the event type per se. What you see here on the left is what's called a shipyard file in Keptn. The shipyard file defines only the order of events, how they should occur, or how a certain piece should flow through the delivery stages. In this case it defines the stages, a deployment strategy, a test strategy and also an approval strategy.
B
In this case we see that in dev we deploy directly and run only functional tests. We promote things into staging automatically if the quality gates validate fine, otherwise manually, and always in a blue-green fashion; then we do performance testing and then we move further on from there. What you can clearly see is that there is no definition of how this is actually done, and that's exactly the separation between the what and the how.
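A shipyard file along the lines described might look roughly like this (a sketch in the pre-0.8 shipyard format; exact keys differ between Keptn versions):

```yaml
# Sketch of a Keptn shipyard: only the "what" (stages, strategies),
# never the "how" (which tools run the deployment or the tests).
stages:
  - name: "dev"
    deployment_strategy: "direct"
    test_strategy: "functional"
  - name: "staging"
    deployment_strategy: "blue_green_service"
    test_strategy: "performance"
    approval_strategy:
      pass: "automatic"      # promote automatically when the gate passes
      warning: "manual"      # ask a human when the gate only warns
  - name: "production"
    deployment_strategy: "blue_green_service"
```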
B
So the shipyard only defines what should be done, but not the how. You then have the Keptn services, which are installed right now as Kubernetes services, and a concept that we call the uniform, so the Keptns are wearing a uniform, where you define which services should take care of certain tasks. Again, think of it like an event-based microservice approach to building automation.
B
So with exactly the definition that you saw before, it will automatically create a multi-stage environment, link all the tools together, and allow you to fully automatically ship code and orchestrate the tools underneath. What it has built in, and that's where it's kind of opinionated, is that it always has quality control built in. The lower part down here that you see is what we call the quality gates: we use SLOs as a declarative definition of quality, as monitoring-as-code that you ship along, for validating whether we promote something to the next stage or even keep it in the current stage.
B
So in case we realize that your quality gates are not met for the current stage, we would (a) not promote it, and (b) we are able to hold it back in the existing stage as well. On the right-hand side you basically see the same from an operations perspective: when a problem or an alert is raised, we will automatically pick the proper remediation actions, check whether they are working or not working, and then execute this process.
B
Think of this just as operations as code. Usually what people would do is write wiki pages on what should be done if something goes wrong; in this case it's a YAML-based description. Here we have a more business-focused version: when we see that the new release, for example, drops the conversion rate, then scale it up, disable a feature flag, and so on. This again is managed along the git process.
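Such an operations-as-code description, a per-service remediation file, can be sketched like this (the problem and action names are illustrative, not Keptn's exact schema):

```yaml
# Sketch of a Keptn remediation file: a runbook as code, versioned in git
# next to the service. Actions are tried in order; Keptn validates after
# each one whether the problem is resolved before trying the next.
remediations:
  - name: "Response time degradation"
    actionsOnOpen:
      - action: "scaling"          # first attempt: scale the service up
        value: "+1"
      - action: "featuretoggle"    # if that did not help: disable the flag
        value: "promotion-campaign:off"
# If no action restores the SLOs, the problem is escalated to a human.
```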
B
If things are back to normal, everything is fine and the process gets closed; otherwise it gets escalated to a human user. Obviously you can have multiple of these operations steps happening as well. The idea is, instead of defining one end-to-end operations process, every service ships with its own operations instructions, and they are triggered on demand, when problems occur, for the specific services, without you having to define a lengthy operations process.
B
So this was the very, very short version of what Keptn is and what it does, because I think it becomes way more hands-on when Christian shows it live with an actual example.
C
How does this work in Zoom? I'm used to Google Meet, so, unfortunately… You can see my screen? Yeah, okay, thank you. So what I've prepared is a little demo. Not all use cases of Keptn are covered here, but the basic ones. In the end, what we are doing today is building an image with an application.
C
Oh, I'm too fast, I'm sorry, go back one. No, it doesn't matter. So we're building an image with an application, we are running the GitLab standard tooling over it, and then we are doing something that is called Keptn onboarding. In our case we don't want the developer to have to onboard his service on his own in Keptn, so we automated this step.
C
In this case the Keptn project and Keptn service will be automatically created in Keptn. It will upload all the necessary resources into Keptn, like the SLI and SLO definitions, also the JMeter tests and so on, and then we will deploy our application into a monitored environment; in our case, as we are a customer of Dynatrace, into a Dynatrace-monitored environment. After the deployment we send the deployment-finished event.
C
That's what Alois already told you: Keptn is event-based, so we are telling Keptn: hey, we have just finished the deployment of our application. What Keptn is now doing is triggering other services, as Alois described with the control plane. In this case it will automatically create a synthetic check using the Dynatrace synthetic service in Dynatrace, and it will execute JMeter tests, and then, after all that has been done…
C
We will go further and execute a quality gate, an SLI/SLO evaluation of our service. We will start the evaluation and poll Keptn to see if the evaluation is done. Oh, I'm sorry, too fast. Keptn will then retrieve the SLIs from our monitoring tool, Dynatrace in this case, but, as Alois said, Prometheus is also working out of the box with Keptn. Then we will poll for the Keptn evaluation-done event, and as soon as the done event is returned from the Keptn API, we get the score of our deployment.
C
For the demo project, what I've prepared: I've installed Keptn on a single machine. You can deploy Keptn to a full-blown Kubernetes cluster, but if you just need to use a quality gate, you can install Keptn on an EC2 instance with k3s on it, for instance. As part of the k3s installation there will be some services already deployed, the Keptn uniform, like the generic executor service and the JMeter service. Then I've installed the GitLab and Dynatrace synthetic services.
C
These are two services which were actually contributed by myself to the Keptn project. Then we have a small Go application, just a simple web service which returns a website, and we are deploying it via Helm to the same Kubernetes or k3s cluster for this demo. If you deploy completely in Kubernetes, you can use Keptn entirely for your continuous delivery.
C
If you are deploying to other targets, like a Lambda function, or you're deploying to IIS or whatever, then the Keptn continuous delivery feature is not able to do this, so we are still doing this within our pipeline. But enough said, let's go to our demo project; everything is on gitlab.com. The German guys will recognize the face of Käpt'n Iglo.
C
So, first of all, what we need to set up here to get this up and running is some CI/CD variables. In our company I have defined the Keptn API token, the Keptn bridge URL and the Keptn API endpoint as group-level CI variables, which can then be used by all projects underneath, which is very convenient.
C
Then I have created myself a little Docker container where I've installed the Keptn CLI, kubectl and Helm 3, also on GitLab, but I don't think I'm telling you guys anything new. So then we are including our Keptn templates.
C
I will quickly jump into the repository. All you need to run this demo, or to get started with Keptn in your environment, is here inside the repository; feel free to include it directly, edit the resources, or contribute to the templates if you want. I think there are much smarter people here on the call than I am, so, as I said, feel free to contribute to this repository.
C
Then we defined, of course, our stages, building our image; after the build, the GitLab test stage will be executed, and when all tests are done we will run our Keptn onboarding. The Keptn onboarding is, in the end, a normal job, right? What we are doing within the job is authenticating our Keptn CLI against our Keptn installation.
C
Then we are checking if the project, if the service, is already onboarded, so we are asking Keptn to get the service of my project; if the service is not already there, it creates the service.
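The onboarding job described here can be sketched as a GitLab CI job roughly like this. The variable names match the group-level CI variables mentioned earlier; the image name and the exact CLI spellings are assumptions based on the talk, not the actual demo code:

```yaml
# Sketch of the Keptn onboarding job: authenticate the CLI, then create the
# service only if it is not onboarded yet. The image name is hypothetical
# (a container with the keptn CLI, kubectl and helm 3, as described).
keptn-onboard:
  stage: onboard
  image: my-keptn-cli-image
  script:
    - keptn auth --endpoint="$KEPTN_API_ENDPOINT" --api-token="$KEPTN_API_TOKEN"
    - |
      if ! keptn get service "$CI_PROJECT_NAME" --project=demo | grep -q "$CI_PROJECT_NAME"; then
        keptn create service "$CI_PROJECT_NAME" --project=demo
      fi
```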
C
Then, every time we are onboarding or running the stage, we are updating our SLI and SLO definitions. Normally in Keptn you have a kind of upstream repository, but in our company the developers like to have one single repository with all resources in it, so we maintain our shipyard and our JMeter tests within the same repository as the source code and do not utilize the git upstream feature of Keptn. But normally you can tell Keptn:
C
"I have here an upstream repository", and Keptn will onboard all the files there, and if you make changes to, let's say, the Helm chart or to the different checks or the JMeter tests, you can check it in directly in the upstream repository and Keptn will utilize it from there; so a completely GitOps approach, right?
C
So we are updating our SLI and SLO resources here with the keptn add-resource command, in our project, in our service, in the stage; you can see it here. We say: our resource is an SLO file and we will upload it as the SLO in our Keptn upstream repository, or Keptn repository. The same for the JMeter tests, so we are uploading them to the subfolder jmeter; and for my GitLab config file, which I will cover later, the same.
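The add-resource calls would look roughly like this (project, service, stage and file names are illustrative; flag spellings follow the Keptn CLI of that era and may differ by version):

```shell
# Upload the SLO and a JMeter test for one service/stage into Keptn's
# internal config repository (or the configured upstream repository).
keptn add-resource --project=demo --service=keptn-demo --stage=staging \
  --resource=slo.yaml --resourceUri=slo.yaml
keptn add-resource --project=demo --service=keptn-demo --stage=staging \
  --resource=jmeter/load.jmx --resourceUri=jmeter/load.jmx
```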
C
Then we have our SLI and SLO definitions. In the SLI definition I'm telling what kind of indicators I want to retrieve from Dynatrace in my quality gate evaluation, like throughput, error rate and response times; those are the defaults from Keptn, but you can also add something like database calls and so on. There's much more you can add here.
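An SLI file for a Dynatrace-backed quality gate looks roughly like this. The indicator names on the left are what the SLO file references; the metric selectors are illustrative of Dynatrace's built-in metrics, not copied from the demo:

```yaml
# Sketch of an SLI definition: provider-specific queries behind
# provider-agnostic indicator names.
spec_version: "1.0"
indicators:
  throughput: "builtin:service.requestCount.total:merge(0):sum"
  error_rate: "builtin:service.errors.total.rate:merge(0):avg"
  response_time_p50: "builtin:service.response.time:merge(0):percentile(50)"
  response_time_p90: "builtin:service.response.time:merge(0):percentile(90)"
```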
C
Then I have my SLO definition, where we tie everything together. Here, for example, I say: okay, my quality gate will succeed if my response time is faster than 10 milliseconds, it will warn if it's slower than 10 but faster than 20, and it will fail if it's over 20 milliseconds. The same for error rate, response time p50 and so on, and the throughput; and my quality gate should pass with an overall score of 90 percent.
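The thresholds just described (10 ms pass, 20 ms warning, 90 percent total score) would be written in the SLO file roughly like this (a sketch of the SLO spec format, not the exact demo file):

```yaml
# Sketch of an SLO definition matching the thresholds described above.
spec_version: "0.1.1"
objectives:
  - sli: response_time_p90
    pass:
      - criteria: ["<=10"]     # pass when faster than 10 ms
    warning:
      - criteria: ["<=20"]     # warn between 10 and 20 ms, fail above 20 ms
    weight: 1
  - sli: error_rate
    pass:
      - criteria: ["<=1"]
total_score:
  pass: "90%"                  # overall gate passes at a 90 percent score
  warning: "75%"
```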
C
Okay, back to the next resources. We have here our JMeter tests; there's a basic check and a load check. In this example it's only querying the website once, nothing fancy here, and then I have my GitLab configuration.
C
This service is still under development, and it would be cool if somebody would contribute to the service or to this repo, adding some features like getting the logs from the entire pipeline to return this stuff to Keptn.
C
So we have here our Keptn bridge, and we can see I have already onboarded my demo project. I have my environment screen where I can see: okay, on staging I have this artifact already deployed, and on production this version or artifact deployed. I have my service overview with all the events I already sent into Keptn. So we see I have some service-created events, and the rest I will show you in a second, when we get to the deployment-finished event that we send after our deployment.
C
I said nothing fancy here, but this is the interesting part: as a developer, you only need to extend these stages, and this script will automatically build and send the deployment-finished event to the Keptn API.
C
Let's take a look at the deployment-finished YAML. You can see here we are just checking if there is a CI environment URL already defined; then it will use that one, else it will just fall back to the public URL variable. It will build the deployment-finished event with some labels in it, like my build number and "run by", so I know who executed the pipeline, and I give some information for my synthetic service.
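The deployment-finished event that the script builds and POSTs to the Keptn API can be sketched like this. The label names mirror what's described above; exact field spellings vary by Keptn version, and the URLs are placeholders:

```json
{
  "type": "sh.keptn.events.deployment-finished",
  "specversion": "0.2",
  "source": "gitlab-ci",
  "contenttype": "application/json",
  "data": {
    "project": "demo",
    "service": "keptn-demo",
    "stage": "staging",
    "deploymentURIPublic": "https://staging.example.com",
    "labels": {
      "build": "1234",
      "runBy": "christian",
      "ci-pipeline": "https://gitlab.example.com/demo/pipelines/1234"
    }
  }
}
```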
C
Run pipeline; this I wanted to do at the beginning of the demo, but hey, I forgot. So what will Keptn do as soon as you send the deployment-finished event? The Dynatrace service will add the metadata information of our deployment to my Dynatrace-monitored service. It looks like this; let's jump into Dynatrace.
C
I have the name of who executed the pipeline and all the information you need, and also a backlink to the Keptn bridge to jump into the deployment-finished event directly. So this is a really cool feature.
C
The next thing, and the good thing about it, is that as soon as Dynatrace detects a problem with your deployment, it will also get this information and can say: okay, your application is performing poorly because you have deployed a new version. Then the operator just needs to look into it and say: okay, it was deployed by Christian Heckelmann, and ask me what I have done there, right? So what will be executed next? As I said, I've installed the Dynatrace synthetic service.
C
It will automatically create a synthetic check for me in my monitoring environment for my deployed application. So I have here my synthetic test, a basic HTTP check, but think about it: you could also write a service for other synthetic services like Pingdom or whatever; it's not a big deal writing a Keptn service.
C
If I can do it, other people can do it as well, because I'm actually not a programmer. So now we have our deployment information in Dynatrace, we have created a synthetic test in Dynatrace, and all only with one sent deployment-finished event. Also, for demo purposes, I have executed my Keptn GitLab service, and this was actually triggering, here in another repository, a simple pipeline with only one job. Here you can see the message from my Keptn project, so this would be a good fit, like in our company:
C
We are executing some tests with other frameworks which are actually triggered in a GitLab pipeline, so I can say: okay, trigger the stuff as soon as the deployment is made on my service, and the developers don't need to configure any upstream or downstream pipelines on their own in the CI YAML. So that's also quite a nice feature. In the end it will execute the JMeter tests and already start an evaluation of my SLIs and SLOs, which I've already done.
C
So what we are doing here is asking the Keptn API for the last deployment event in my project, from my environment, within my service. Then I'm getting the context id of the deployment event, and with the context id I can query the Keptn API for the evaluation-done event. As soon as the evaluation-done event pops up, I can grab the results from it, take a look into it, and can say: okay.
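The polling logic just described can be sketched like this. The event type names and payload fields are assumptions modelled on Keptn's CloudEvents, and the events here are inline sample data; real code would HTTP-poll the Keptn API with the context id instead:

```python
# Sketch of the quality-gate polling step: given the Keptn context id of the
# last deployment, look for the matching evaluation-done event and pull out
# the result and score. Field names are illustrative, not an official client.

def find_evaluation_result(events, context_id):
    """Return (result, score) of the evaluation-done event for this Keptn
    context, or None while the evaluation is still pending."""
    for event in events:
        if (event.get("shkeptncontext") == context_id
                and event.get("type", "").endswith("evaluation-done")):
            data = event.get("data", {})
            return data.get("result"), data.get("evaluationdetails", {}).get("score")
    return None  # not finished yet; the CI job would sleep and poll again

# Sample data: one deployment event plus its finished evaluation.
events = [
    {"shkeptncontext": "abc-123", "type": "sh.keptn.events.deployment-finished"},
    {"shkeptncontext": "abc-123", "type": "sh.keptn.events.evaluation-done",
     "data": {"result": "pass", "evaluationdetails": {"score": 100}}},
]

print(find_evaluation_result(events, "abc-123"))  # ('pass', 100): gate passed
```

A "fail" result would then fail the CI job, exactly as described next.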
C
If the result is "fail", it will fail my entire evaluation, and if not, it's all good and we go on to the next stage. Let's take a look into our CI/CD here.
C
To our pipeline. Okay, this one is still running, but let's go to the other one I executed an hour ago, the quality gate on staging. A nice feature is that you're getting a direct link into the bridge, so you can click into it; or, for instance, if the job is failing, the developer will get an email from GitLab, and he will have this link within the email, so he can directly click on it and see.
C
Okay. Questions, if somebody has one?
D
Yeah, I have a question: what would be the effort, for example, to exchange JMeter for a different tool for doing my performance testing; for example, to use k6 or something like that, because we are reading a resource, right?
B
I think a k6 service was committed to the sandbox today, but the effort is: you would just have to register a different service. If you don't want to have JMeter running at all, you would simply uninstall the JMeter service for the project and install another service that obviously understands the tests that you want to use. That's all you have to do. Right now this is one of the bits and pieces that's a little bit rough around the edges.
B
The plan is to have this totally declarative in the uniform file. Right now it would be one kubectl apply for whatever service you want to use, and then the next test run would automatically pick it up: hey, there is a new test-triggered event, and then the new service would pick it up. But if you have the service already written, you just exchange one service for another one.
B
Another nice use case was actually shown in the last CNCF app delivery SIG session, what they were doing together with the Litmus Chaos people. They were using an existing shipyard file; the only thing they did in the test stage was add the chaos tests, and that was just registering a chaos testing service, and by then the chaos tests were executed against the services automatically.
B
So it's small: currently it's like one kubectl command; going forward it's just one line in a YAML file that you have to change. Okay.
C
So yeah, basically that's it for this demo, but I have another quite cool use case if you want to extend it a little bit. As I mentioned before, we also have this generic executor service in the Keptn uniform, a service written by Andreas Grabner from Dynatrace, and there you just need to add something. This service can execute on different events, either an HTTP request, GET, POST or whatever, or a shell script.
C
If you can execute a shell script or a web request, you're completely flexible, right? So in this case I have a little funny use case: I want to get informed via a Telegram message as soon as a new deployment was made to my staging environment. What I need to configure is a deployment-finished HTTP file with the POST parameters, my JSON. All I need to do now, because I don't want to give you my actual bot id here…
C
So don't wonder. I will now add the resource, this file, to my generic-executor folder with the deployment-finished event naming convention, so deployment.finished.http; that way the executor knows it should run an HTTP request here. So we're adding the resource, it has been uploaded, and now I go to my pipeline, and hopefully it's working, because demos, right?
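The HTTP file for the generic executor can be sketched like this. The file layout follows the pattern described in the talk rather than an authoritative spec, and the bot token and chat id are placeholders:

```
deployment.finished.http -- run by the generic executor on
deployment-finished events; <BOT_TOKEN> and <CHAT_ID> are placeholders.

POST https://api.telegram.org/bot<BOT_TOKEN>/sendMessage
Content-Type: application/json

{
  "chat_id": "<CHAT_ID>",
  "text": "Your deployment on project demo, service keptn-demo, stage staging"
}
```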
C
And, as you can see here, I said add the resource to my project demo, service keptn-demo, only in the stage staging, so it will be executed only in the staging stage; and here it is: "your deployment on project demo, keptn-demo, in stage staging". So this is a simple way to call other services or to execute scripts without writing an entire Keptn service, if you don't need to.
C
So yeah, that's basically all from the demo I've got here; let's go to the last slide of the deck. Here are all the resources: the demo project, the templates, the container, the Keptn website, Keptn on k3s (the repository I told you about before; this is a really good kickstart if you want to start using Keptn), and the GitLab service, where, as I already said, I'm happy…
C
…if there are some contributors who can extend the service together with me. There is the Keptn Slack, and there's also a YouTube channel for Keptn available, and a Performance Clinic from Andy Grabner, who explains the quality gates in detail. And if you want to follow me on Twitter, use my professional Twitter handle, at wistalart. That's it from my side.
A
So I have a question. Thanks for the thorough demo; I'm pretty much overwhelmed right now, so I don't know what to ask in a technical way. I was wondering: if I want to start contributing to the project, what would be the best way? You mentioned some demo environments and some projects to contribute to already, but maybe you have some quick insights on where someone could have a look.
C
So, first of all, if you want to contribute to the Keptn project, go to github.com/keptn, and in the issues there are all issues labeled with "good first issue" when you want to start to contribute to Keptn.
C
If you want to play around: what I'm doing all the time, when I'm playing around with Keptn and want to create a little demo environment of my own, is that Keptn is also able to be executed on kind; you know, Kubernetes in Docker.
B
There are certain capabilities in the core pieces, like the event execution layer and so forth, where help is obviously welcome, but that might not be the easiest place to get started. What we mostly see people contributing right now, and I just mentioned the Selenium service: as it is a control plane, and we want to standardize on these types of events and integrate as many tools as possible, people write integrations. For example, we have an integration directly to Slack, where all those messages go into Slack.
B
I mentioned Litmus Chaos. The easiest way really is writing Keptn services that are just bridges to the different tools people want to link to; that's where we see most of the contributions happening right now, people saying: okay, I need to integrate this with this tool or another tool. I mentioned Selenium; another one that just made it in there is, for example, an integration into Splunk.
B
So whatever tools people are using; and you also see, from the Dynatrace perspective, that this is really run as an open source project, so we also have people using it with non-Dynatrace environments at all, and that's totally fine. There is, for example, the Ansible service that would execute an Ansible Tower integration, and the Alexa service, for example, calls directly out to Alexa. I think the easiest way to contribute is: okay, I want to link this to whatever I want to do.
B
That's the most straightforward one. Obviously we welcome core contributions to some of the core components, but the learning curve might be a bit steeper; the more natural way is always: okay, I want to integrate this tool over there, or I don't like the way you're doing the issue creation, for example. One integration that we don't have right now for GitLab, but do have for other tooling, is: if the deployment fails, automatically create an issue for the respective developers.
B
C
And if you want to start writing your own Keptn service, there is a Keptn service template available — the keptn-service-template-go — where everything is described, so this would be the way to go. There is also a YouTube video on how to start writing a Keptn service available on the Keptn channel, where everything is explained. It's really straightforward and not so heavy. So, as I said, I'm not a developer, and if I can write a Keptn service, everybody can write a Keptn service.
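The core of such a service is just the event handler that was mentioned: receive a CloudEvent, look at its type, and react to the ones you care about. A minimal stand-alone sketch of that idea in Python follows — note that the real keptn-service-template-go uses Go with the CloudEvents SDK and the Keptn utils library, so the handler below is illustrative only; the event type names follow the sh.keptn.* convention, but the data keys are simplified for the example.

```python
import json

# Simplified sketch of a Keptn-style service event handler: inspect the
# CloudEvent's type and react only to the events the service subscribes to.
# The dispatch logic and data keys are illustrative, not Keptn's exact contract.

def handle_event(cloud_event: dict) -> str:
    event_type = cloud_event.get("type", "")
    data = cloud_event.get("data", {})
    if event_type == "sh.keptn.events.deployment-finished":
        # e.g. kick off a synthetic check or a test run here
        return f"running tests for {data.get('service', 'unknown')}"
    if event_type == "sh.keptn.events.evaluation-done":
        # e.g. forward the quality-gate result to a chat channel
        return f"evaluation result: {data.get('result', 'unknown')}"
    return "ignored"

# A CloudEvent as it might arrive over HTTP, decoded from JSON:
raw = '{"type": "sh.keptn.events.deployment-finished", "data": {"service": "carts"}}'
print(handle_event(json.loads(raw)))
```

Everything else the template gives you (HTTP wiring, event validation, sending follow-up events) sits around a dispatch function of this shape.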
C
In the end you have your event handler, where you put all the stuff together. To start with Keptn, as mentioned, I can highly recommend the Keptn control plane on k3s — there is the keptn-on-k3s project from Dynatrace to check out Keptn in just five minutes. There are webinars you can watch, and then follow the guidelines. There is also the tutorials.keptn.sh site available, where we have full tours, like a Keptn installation on k3s or the Keptn full tour on Prometheus.
C
You only need to run the Keptn install and it will, yeah, start deploying Keptn on your local machine, so you can play around or test your services. If you want to start writing your own service for Keptn, the Slack channel is also highly recommended, because really all the Keptn contributors are inside, and if you have any questions they are happy to help.
B
E
A
Here — I think Nicolas is already hacking.
D
No, I'm not hacking, I'm currently thinking about something. What is currently the way the events get propagated? So, in terms of the CloudEvents — what's the event broker, mostly?
B
The event broker — so the architecture is that the event propagation itself is a microservice. Right now, internally, it's using NATS for event propagation, and when you have a new service, you just define, as part of the service descriptor, the events you want to subscribe to. That's always the interesting thing with event subscriptions, because you want to do it declaratively, outside of the service, but you still have to do the service's event subscription — that's always a bit of a challenging bit and piece.
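Concretely, that declarative subscription lived in the service's deployment manifest at the time: a distributor sidecar declared which sh.keptn.* topic the service receives and forwarded matching events to it over localhost. A hedged sketch — the image tag, service name, and env values below are illustrative, so check the Keptn docs for your version:

```yaml
# Illustrative sketch: a service on the Keptn "uniform" subscribes to events
# declaratively via its distributor sidecar, not in application code.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-notification-service        # hypothetical service
spec:
  selector:
    matchLabels:
      run: my-notification-service
  template:
    metadata:
      labels:
        run: my-notification-service
    spec:
      containers:
        - name: my-notification-service
          image: example/my-notification-service:latest   # placeholder image
        - name: distributor
          image: keptn/distributor:0.7.3                  # version illustrative
          env:
            - name: PUBSUB_TOPIC
              value: "sh.keptn.events.evaluation-done"    # the subscription
            - name: PUBSUB_RECIPIENT
              value: "127.0.0.1"                          # forward to the sidecar's pod
```

The point of this layout is exactly what was said above: the subscription is data next to the service, not code inside it, so swapping the underlying broker doesn't touch the service itself.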
B
That's one way you can subscribe to the events, and we also have what we call the outside. When we talk about Keptn, we talk about the control plane — that's really the orchestration piece — and the execution plane — that's the bits and pieces that execute the actual work. GitLab, in this case, would be an execution plane that consumes events, and you can also directly ask via the API for specific events that you want to execute against, and then it would also go there.
B
E
B
Yes, you can — even that part. Not with the standard installation, though; the standard installation would install NATS.
E
B
But the eventing service itself is also a service that you can switch out, because we use it, for example, for different setups — that's something we also use internally when we have deployments which are running on Kafka right now. It might be a bit rough around the edges, but the plan is that you can switch out that underlying layer entirely. What would be left, more or less, is the shipyard controller component and the execution services, and you can also link in other services.
B
C
D
Right, of course. My first touch point with CloudEvents was coming from the Knative project — that was the first time I saw CloudEvents, mostly. They also have components toward Knative Eventing, and this is also mostly an abstraction over which event system you can use underneath and which event flows you want to have in the event system. Mostly, that's why I'm asking.
B
One of the challenges — why we use NATS and also smaller, compact components — is because you want to deploy especially the core management components directly on top of your application cluster, and many of the serverless frameworks come in pretty heavy when it comes to resource footprint.
B
So you would have had to deploy the serverless framework on top of everything, and we wanted to keep it super small. That's why we just had the interfaces in there and threw in NATS from the beginning — because, well, I remember the very early days when we were running on Knative, people were saying: we can't even use a free GKE cluster to just run a demo of my very small application, because the overall infrastructure is consuming these amounts of resources.
B
D
B
True — technically, you could also get a GitLab pipeline, by the way, for free to distribute the events, if you don't want to use another event broker. So that's how it's set up, with bits and pieces, and that was also the whole idea behind how we built it. But some people say: is this replacing what everybody has? No, that's not the idea. You have massive investment in what you have already built, but if you want to gradually build on top of it, you want to do it in an easy way.
B
So
this
is
not
replacing
a
pipeline
system.
You
already
have.
You
can
see
like
you,
can't
build
a
pipeline
in
the
traditional
way
in
captain
or
a
comfortable
zoning
cabinet,
where
it
will
what
it
is
targeted
to
do
so.
You
can
reuse
what
you
already
have
and
we
reuse
other
components,
there's,
for
example,
an
argos
or
service
in
there.
B
If you don't want to use the built-in, batteries-included deployment service, you just switch to Argo, you switch to GitLab pipelines, whatever you want. From a Lego perspective, it's the assembling piece, and you can take the bits and pieces that you already have in place, and that then allows you, with the event subscriptions, to do super funky things. For example, there was one request we got from one user:
B
What if I throw an iOS application at it — can you automatically trigger an upload process that would deploy it to the App Store? Yeah, obviously you can do it — we're not able to do it out of the box — but you can have it flow through this process and have the same orchestration automation, so that it's deployed properly.
B
If you want to do so without having to, more or less, touch everything that we have already available, you can more or less have the same validation workflows that we have in place as well — and you would, for example, also pick another testing service, because you obviously test the application differently.
C
So what I'm thinking about right now: as you know, GitLab has Unleash for feature toggling underneath, right, and there is already an Unleash service for Keptn available, but it is utilizing the normal Unleash API, and GitLab has some of its own API around it, or its own API endpoints.
C
F
I have a little question about this — more a historical one. You mentioned that the original project was built at Dynatrace as an in-house project. So how long have you been building Keptn — from day one, or by rewriting the core components?
B
Roughly — so that's a very funny story, because Keptn is actually the project that we never wanted to build. Initially this was really built on the deployment automation that we have internally for our own SaaS environment. That's obviously something we couldn't use for anything else than for Dynatrace itself, because the only thing it could deploy was Dynatrace.
B
Then, I think two and a half years ago, we started to actually build a demo project. The idea was just to show people how to build automation, so this wasn't supposed to be an open source project doing what it's doing right now — this was really a training project. That's how that whole thing started.
B
So it was really a demo, as part of what we call our internal university, to teach people how to build continuous, progressive delivery automation. That started two and a half years ago, and then over the first year it morphed more and more into: okay, we actually want to deploy something else on top of it. The funny story is that for the first year, the only application you could deploy with Keptn was Sock Shop, because it was a demo.
B
It
was
a
demo
environment,
that's
what
it
was
really
built
for
to
deploy
sock
shop
and
then
a
bit
more
than
a
year
ago,
yeah.
That
was
like
really
the
first
version
where
you
could
start
really
using
it
for
deploying
actual
applications.
B
So,
overall
it's
like
two
and
a
half
years,
but
you
would
say,
like
the
first
one
and
a
half
years,
it
wasn't
really
built
for
deploying
actual
application,
but
more
or
less
as
a
demo
and
training
and
and
reference
implications.
And
that's
what
you
see
right
now
is
a
bit
older
now
now
than
a
year
and
we're
now
gradually
moving
components
over.
B
It is funny, because it's the project we never wanted to build, and as we moved on and saw people using the code we had in the trainings for their production-grade applications, we said: well, that's not a great idea — how did you even get it working, by the way? That's how it really moved over. That's also when we decided: okay, let's move it out of the control of Dynatrace and move it over to the CNCF.
F
B
Components that we already have in Dynatrace, that run our own internal services — like the previous version; think of it like Borg and Kubernetes in the Google sense, more or less — we're not going to replace what we have already rolled out for our existing customer installations, because the benefit would be rather low.
B
We would get exactly the same automation that we have today, just by investing a massive amount of additional effort on top of it. But for all newer components that we're now deploying, and new software that we're developing, we are and will be using Keptn as the deployment mechanism. Yeah, it's always hard to sell a project as: let me invest all of these resources and put all this risk on the project, just so that at the end, hopefully, everything works exactly as before. That's always the hardest.
B
Even for the, say, more legacy version of what we had before, we replaced the internal implementation of the quality gates with the Keptn version of the quality gates, bits and pieces, but —
B
Service
based
approach
comes
into
play.
We
just
then
link
it
to
our
existing
backends
to
do
things
and
just
out
just
link
it
in
for
their
quality,
gate,
emulation
purposes
and
the
rest
stays
pretty
much
the
same.
It
was
before.
B
That
that
was
one
of
the
challenges
we
had
actually
when
people
said
well,
we
want
to
modernize
here
and
we
want
to
change
something
over
there.
That
is
not
like
this,
like
you
have
to
change
everything
they
want,
because
that's
not
super
helpful,
but
you
can
gradually
take
on
bits
and
pieces
that
that
you
want
to
use
and
also
change
tools
as
you
as
I
like
them.
That's
also
what
the
cncf
that's.
B
This
is
really
hard
if
you
have
built
something
like
you
want
to
throw
in
a
new
chaos.
Engineering
tool-
or
you
want
to
you-
know,
exchange
your
testing
tool
and
say
well,
I
like
jamie
but
hey
for
90
of
my
services
with
way
more
resources,
effective
and
then
totify
putting
it
in
there,
and
you
don't
want
to
touch
everything
you
have
built
previously
and
you
even
want
to
do
it.
Sometimes
even
under
the
hood
or
we
have
people
like
switching
from
internal
services
to
as
a
service
offerings
for
for
some
of
those
tools,.
F
C
So what I can tell you from our history is: we started Keptn using the quality gates, and that's the easiest approach to get started with Keptn.
C
You
don't
need
to
necessarily
use
all
the
other
services
from
the
beginning,
just
yeah
plug
it
into
your
ci,
run
a
quality
gate
evaluation
after
your
deployment
and
then
having
yeah.
This
quick
start,
I
would
say-
and
it's
really
beneficial
because
you're
getting
so
many
insights
out
of
the
box
from
this
quality
gates,
how
your
service
is
performing
and
yeah?
That's
something
we
have
experienced,
which
is
really
really
beneficial
in
our
deployment
process
or
life
cycle.
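The quality-gate idea itself is compact enough to sketch: compare measured SLIs against per-metric criteria and turn them into a pass/warning/fail verdict. A stand-alone illustration follows — note that Keptn's real evaluation reads an slo.yaml, queries a monitoring provider for the SLIs, and supports comparison against previous evaluations; the metric names, thresholds, and scoring below are made up for the example.

```python
# Simplified illustration of a quality gate: each SLI scores 2 points if it
# meets the pass limit, 1 point if it only meets the warning limit, and the
# overall verdict is derived from the achieved ratio. Not Keptn's exact
# algorithm; thresholds and metrics are invented for this sketch.

def evaluate(slis: dict, objectives: dict) -> str:
    score = 0
    for metric, (pass_limit, warn_limit) in objectives.items():
        value = slis.get(metric)
        if value is None:
            continue                 # missing SLI contributes nothing
        if value <= pass_limit:
            score += 2               # fully within the objective
        elif value <= warn_limit:
            score += 1               # degraded but tolerable
    ratio = score / (2 * len(objectives))
    if ratio >= 0.9:
        return "pass"
    if ratio >= 0.5:
        return "warning"
    return "fail"

# Example: 95th-percentile response time in ms, error rate in percent.
slis = {"response_time_p95": 420.0, "error_rate": 0.4}
objectives = {"response_time_p95": (500.0, 800.0), "error_rate": (1.0, 5.0)}
print(evaluate(slis, objectives))  # both SLIs within their pass limits
```

A CI job would run such an evaluation after the deployment step and gate the promotion on the verdict, which is exactly the "plug it into your CI" flow described above.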
B
C
So what we did before we used Keptn: let's say we wanted to configure a synthetic check in Dynatrace for our application. We had GitLab CI templates which we were using and including, and so on and so on — and these are, I would say, the bits and pieces we are replacing with Keptn.
C
So
with
captain
synthetic
service
we
don't
need
to
include
any
any
yummels,
so
we
just
send
the
deployment
finished
event
as
explained,
and
then
captain
will
take
care
of
the
rest
without
any
further
configuration.
And
so
you
can
break
up
these
big
monolithic
pipelines
more
and
more
into
smaller
pipelines,
which
is
also
better
manageable
right.
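Sending that deployment-finished event from a CI job boils down to POSTing a CloudEvent to the Keptn API. A sketch of building such a payload — the envelope fields follow the CloudEvents convention, but the Keptn-specific data keys, the endpoint URL, and the token handling mentioned in the comment are placeholders; check the Keptn API reference for the exact contract:

```python
import json
import uuid
from datetime import datetime, timezone

# Build a CloudEvents-style payload announcing a finished deployment,
# as a CI job would before handing off to Keptn. Data keys are illustrative.

def deployment_finished_event(project: str, stage: str, service: str) -> dict:
    return {
        "specversion": "1.0",
        "type": "sh.keptn.events.deployment-finished",
        "source": "gitlab-ci",
        "id": str(uuid.uuid4()),
        "time": datetime.now(timezone.utc).isoformat(),
        "contenttype": "application/json",
        "data": {"project": project, "stage": stage, "service": service},
    }

event = deployment_finished_event("sockshop", "staging", "carts")
body = json.dumps(event)

# In a real job this JSON body would be POSTed (e.g. with urllib.request)
# to the Keptn API with an auth token header; the endpoint below is a
# placeholder: https://keptn.example.com/api/v1/event
print(event["type"], event["data"]["service"])
```

The CI pipeline's only Keptn-specific step is emitting this one event; everything downstream (tests, evaluation, promotion) is driven by subscriptions.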
D
It's really complex, and you reach a point — for example, where you also want a clean separation, say you want an option to do a simple rollback for that case. Then you need to template more YAML for it, add this, and find the right hooks, or you need to keep everything in mind from the start, and you can't do it in a linear way and just see: okay, these metrics tend to be going this way.
D
B
I mean, one interesting thing we didn't mention here: on the back end we also create traces for all the executions. So more or less every pipeline run, as well as every remediation run, is a trace.
B
So you have this observability built in that you can query — we'll be opening up the interface so you can query them as traces. You could also start to ask interesting questions, like: show me whether there were any production-level performance issues for pipeline runs that had their performance test evaluated as passed — meaning, more or less, your performance tests don't reflect the actual situation in production.
B
So the idea of having traces behind it that you can query gives you this observability not just for your code, but also for the automation that you have layered on there. And the really funny thing is — I had this discussion with somebody who comes from a more traditional, old-school ITIL approach. I told them: because this is all GitOps-based, everything is versioned and I can query the traces, this is actually more ITIL than the actual processes that you have defined, because everything is more or less available.
B
You
have
like
full
auditability
of
everything
you
did
in
the
system,
because
the
only
way
to
interact
with
it
is
via
the
api.
You
can
simply
change
anything
in
there
in
in
any
other
way,
because
the
system's
not
going
to
to
allow
it
and
that's
why
you
have
tracing
and
photo
version
is
to
be
available.
B
This is nothing you put anywhere manually anymore — it's fully programmatically managed. And you want to say: I want exactly this version, but I need it in another environment so I can try something; deploy it back into that environment, and now I want to sync it back — without having to type all the code that would otherwise be required. That's also why duplicating the automation into the pipeline tools doesn't make sense: those pipelines are highly parameterized, but deciding which parameters to send to the pipelining tool is then taken care of by Keptn.
B
That's
exactly
this
control
plane
idea,
you're
still
calling
the
pipelines
that
were
there
before,
but
you're
deciding
what
the
right
parameters
are
for
calling
these
pipelines
right
now,
like
your
networking
control
plane,
will
tell
the
hardware
okay.
This
is
what
how
it
should
be
routing
traffic
right
now,
because
that's
how
I
want
it
to
be
rounded
and
and
having
that
configuration
top
down.
So
that's
where
we
see
most
of
the
benefits,
eventually
that
it
allows
you
to
write
less
automation,
code
and
being
able
to
more
easily
orchestrate
it.
D
Yeah, mostly what's a big advantage about that is that you write your decisions down into a file that is declarative, and that you can also explain and share with your colleagues. For example, when a new person comes into your team, you could explain why you made these decisions at which point — probably because it's version-controlled right away. So you can also see, for example: on Black Friday we need to do the following steps, or on our last big deployment, when we were making a breaking change —
D
We
saw
also
that
that
we
need
to
do
change
a
little
bit
our
indicators
in
some
parts,
and
this
is
a
great
benefit
of
that.
Instead
of
okay,
I
know
the
person
in
my
company
mostly
you're,
doing
all
the
deployments-
I
know
doing
a
lot
of
deployments
and
they
have
all
the
decisions
in
mind
and
when
this
person
goes
away,
you
are
probably
also
have
some
problems.
B
We had two very interesting situations here where this separation — what we have in the shipyard and the uniform — was super helpful. Big financial institutions: their biggest problem is that they have to validate that the process they're following follows company guidelines, and what they had to do before was read pretty much all the automation code altogether. Right now, what they do is put it in the shipyard file.
B
The
shipyard
file
has
a
code
owners
file
in
there,
so
nobody
can
change
it,
so
they
have
no
control
over
these
files
and
then
they
can
add
in
the
automation
tools
that
they
want
to
add
in
there
but
yeah,
but
you
will
know
that
it's
always
enforced
and
for
the
black
friday
one
that's
one
of
the
was
one
of
the
initial
examples
you
can
go
into
the
shipyard
file
and
you
said
e-commerce
companies
throughout
the
year.
You
can
experiment
as
much
as
you
want.
B
Just
changing
like
one
file
in
the
shipyard
switch
approval
always
to
manual
disable
it
for
four
days,
and
then
you
re-enable
it
and
all
the
services
that
are
underneath
will
be
automatically
reconfigured
for
their
deployment,
that
they're
now
running
manual
and
you're
not
running
into
the
issue
of
maybe
forgetting,
like
one
pipeline
that
now
still
automatically
deploys,
because
there's
always
the
one
that
you
do
not
know
about.
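That one-file switch lives in the shipyard. A hedged fragment of what it might look like — the stage and sequence names are examples, and the exact schema differs between shipyard versions, so consult the shipyard spec for your Keptn release:

```yaml
# Illustrative shipyard fragment: flipping the production approval from
# automatic to manual (e.g. around Black Friday) reconfigures every
# service in the project at once.
apiVersion: spec.keptn.sh/0.2.0
kind: Shipyard
metadata:
  name: shipyard-sockshop
spec:
  stages:
    - name: production
      sequences:
        - name: delivery
          tasks:
            - name: approval
              properties:
                pass: manual        # was "automatic" the rest of the year
                warning: manual
            - name: deployment
              properties:
                deploymentstrategy: blue_green_service
```

Because every service in the project derives its behavior from this one declarative file, there is no per-pipeline toggle to forget.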
B
So
this
is
like
one
of
those
one
of
the
use
cases
that
that
we
have
seen
somebody
say
now,
disable
it
now
and
because
we
were
talking
with
a
lot
of
the
people
who
were
working
say.
How
are
you
doing
this
today,
like
black
friday,
is
just
coming
up?
How
do
you
disable
that
people
are
just
randomly
deploying
into
production?
B
Should
talk
to
say
well
what,
if
there's
automated
processes
that
you
don't
know
about,
you
can't
tell
an
automated
process
to
do
something
differently.
Kind
of
has
a
point
there.
So
I
think
it
also
helps
to
your
point.
It
helps
from
the
auditability,
but
it
also
helps
that
you
enforce
it
because
you
would
get
a
new
project,
but
the
shipyard
will
already
be
in
there
as
it's
separately
managed.
B
F
E
E
D
You could do a deployment freeze for that, but you need to interact with your pipeline. So if you go to the documentation, you see the benefit of Keptn: otherwise you need to add an extension to your .gitlab-ci.yml so that there's a check for whether a deployment freeze is currently happening — and this can then be done by Keptn instead. You don't need to touch the .gitlab-ci.yml at all.
D
Lit
yeah
process
of
I'm
having
a
separated,
ci
part
and
also
having
a
separated
cd
part.
Probably
you
could
also
doing
some
more
advanced
stuff
that
you
having
a
real
dedicated
part
of
the
do.
The
cd
part,
for
example,
for
staging
and
production
is
quite
different
than,
for
example,
to
deploy
into
the
integration
environment,
because
every
developer
can
deploy
to
the
integration
environment.
Changes
can
happen,
but
you
want
to
have
a
separate
process
for
the
cd
part
for
staging
and
production,
because
there
are
some
more
regulations.
B
And there's — I mean, we haven't seen it right now, but you will always see how far your environment is behind, which I also find very nice, because everything is GitOps-based. That way you can say: your production environment is five service changes behind your staging environment — because we know everything that is going in there.
B
That was actually one of the number-one requirements for quite some time: people said they want to have manual control over the automation again, but they want to have it in a way where they can only do things manually that don't break stuff — which is kind of a challenge. So how do you know?
B
But
that's
why
we're
disabled
for
so,
if
you
have
the
quality,
give
an
evaluation,
for
example
in
staging
and
it
fails,
it
will
not
show
you
that
it
will
still
show
you
that
you're
behind
your
staging
environment,
but
it
will
not
allow
you
to
deploy
it,
because
this
is
actually
failed.
So
you
you're
not
allowed
to
push.
D
B
There's also, by the way, a KubeCon talk from the App Delivery SIG where we show something entirely different: we just take a single Helm chart and build a three-stage, automated deployment pipeline in five minutes — so check out the App Delivery talks, for all of you who are interested. I mean, most of you have seen this whole discussion about the CNCF landscape — it's a total nightmare.
B
Everybody
marketing
is
happy
because
there's
like
three
thousand
logos
on
there,
but
nobody
technology
is
happy
because
there's
roughly
three
thousand
logos
on
there,
some
people
used
other
words
for
it
that
sounded
more
like
show.
So
what
we
said?
Okay,
let's
take
all
of
this-
but
let's
take
the
positive
side,
there's
a
lot
of
stuff
out
there.
So
we
created
this
small
project
which
we
call
potato
head
and
we
deployed
it
in
five
or
six
different
ways.
B
This will then include stateful workloads; it will have multi-stage secret management, because that came up. But the whole idea was, instead of showing people that the landscape is too complicated, to say: hey, these projects are really useful, and you can very nicely combine these projects together and do stuff that actually is a lot of fun. Matt Farina from Helm contributed, Carolyn contributed the CNAB example — so you also get to see things like CNAB thick bundles.
B
There's
an
example
of
synapse
stick
bundles
in
there
there's
even
a
helm
operator,
a
an
operator,
sdk
helm
operator
in
there
we're
working
with
the
kudo
team
to
get
a
kudo
operator
in
there.
So
the
idea
is
to
like
have
a
multitude
of
this
is
how
you
could
deliver
stuff
on
kubernetes
in
a
lot
of
different
ways
and
play
around
with
it
and
obviously
right
now.
Use
case
is
very
simple
because
it's
a
simple,
stateless
service,
but
the
goal
is
to
make
this
significantly
more
complicated
and
challenging
as
as
time
passes.
B
D
So you will add a lot more Toy Story features there — so Jessie will come up, Woody will come up.
B
We are adding — we'll be doing some of that. The idea was: we didn't want to take the usual e-commerce application. We really didn't want to do yet another e-commerce or travel application — let's do something funny and microservice-y. And I was actually having wieners with potato salad, I think — well, Potato Head.
B
It's
like
a
micro
service
application,
it's
the
potato
head
and
you
can
put
these
things
on
it
and
that's
how
that
idea
that
eventually
came
along
because
well,
you
actually
can
show
canary
releases
by,
for
example,
doing
a
canary
release
of
a
food
service,
and
the
left
foot
is
different
from
the
right
foot
because
you
can
deliver
different
types
of
feet
and,
as
this
went
on,
it
became
funnier
and
funnier,
but
the
the
underlying
idea
was
also
when
we
looked
at
at
hipster
shop.
B
The
challenge
was
that
hipster
shop
is
pretty
heavy,
so
if
you
want
to
set
it
up
in
a
multi-stage
environment,
that
was
something
we
were
suffering
from
other
on
the
cap
side
as
well.
It
becomes
really
resource
intensive
and
it
focuses
a
lot
on
building
microservices
in
many
different
languages,
but
problems
like
okay.
How
do
I
handle?
How
do
you
model
service
dependencies?
B
How
do
I
upgrade
multiple
stateful
workloads
or
things
that
are
not
really
handled
that
well
in
something
like
hipster
shop,
because
that's
not
really
what
they
were
really
focusing
on,
and
that
was
the
idea
behind
potato
heads
to
have
something
funny,
but
also
some
of
these
like
super
hard
problems,
and
even
the
cnap
example
is
really
nice.
So
how
do
I
deploy
something
to
an
aircraft
environment?
B
Okay,
it
looks
like
just
like
a
potato
head
when
it
deployed
there,
but
still
try
to
get
today
a
potato
head
in
an
air
gap.
Environment.
That's
actually
pretty
challenging
as
you
as
you
move
there,
and
it's
also
easy
for
for
people
to
learn.
So
that's
why
we
started.
B
This
project
and
just
keep
investing
in
it
and
again
then
you
start
with
multi-stage
secret
management
for
certain
party
services,
and
then
you
start
with
skip
talks
for
multiple
stages.
So
there's
a
lot
of
very
interesting
examples
that
that
can
be
put
in
there
and
I
don't
really
care
how
complex
the
services
is.
It's
just
delivering
back
an
error
in
those
is
complex
enough
for
that
service.
B
D
E
E
A
So
I'm
I'm
still
overwhelmed.
I
could
listen
to
you
all
day
long,
especially
on
the
observability
and
tracing
and
the
icd
deployment
parts,
but
I
think
we
should
like
say:
hey.
A
We
are
enjoying
our
evening
soon,
maybe
have
a
chat
or
maybe
revisit
the
topic
like
in
one
month
in
two
months,
three
months
time
and
have
another
another
coffee
chat
around
this.
Maybe
I
would
also
love
to
learn
more
about
dynatrace
and
what's
going
on
so
if
arlo
is,
would
be
up
for
like
doing
doing
this
by
yourself
or
like
bringing
in
more
colleagues,
I
would
totally
love
to
learn
more
about
it,
but
yeah
thanks
thanks
for
the
demo.