From YouTube: Keptn and Argo Rollouts: Volusion and NTT Data speed up Multi-Stage Blue/Green deployments
Description
For their new microservice architecture, Volusion and NTT DATA implemented a multi-cluster rollout orchestration. From a single source repository, GitHub Actions push the Keptn configuration, which orchestrates the rollout, testing, and evaluation of containers across staging and production. Watch this session to see how Keptn eliminates the need to build your own cross-tool and cross-cluster orchestration.
Learn more: https://keptn.sh
Get started with tutorials: https://tutorials.keptn.sh
Join us in Slack: https://slack.keptn.sh
Star us on GitHub: https://github.com/keptn/keptn
Follow us on Twitter: https://twitter.com/keptnProject
Sign up to our newsletter: https://bit.ly/KeptnNews
A: Hello, everyone, and welcome to today's Keptn user group meeting, "Keptn and Argo Rollouts", subtitled, as you can see, "Volusion and NTT DATA speed up multi-stage blue/green deployments". This is a lofty title, and, as I always try to say, it's going to be a really cool session, because over the last couple of months Ian and Artem, and probably other people behind the scenes, have really implemented some cool things with Keptn. This is what we want to showcase today. My name is Andi Grabner. I am one of the maintainers of the Keptn open source project, and I'm very proud to work with users and partners and contributors on the Keptn project. I want to quickly take the opportunity and let every one of you introduce yourselves. Maybe, Ian, starting with you: just who you are and what you do at Volusion.
B: Thanks, Andi. Yeah, so I am the Director of Architecture at Volusion, kind of overseeing all of the operational teams, so that includes our data team, our DevOps team, security operations, and site reliability engineering. So I've kind of played the part of making sure that we organize all the developers and the hand-off between developer teams and our DevOps teams, and things like that. So I'm looking forward to getting in and digging in on how we've worked with Keptn in the last few months.
A: Perfect. And I just also posted the link to your LinkedIn profile, and I have to ask you: your LinkedIn name is tigerc10, which is much better than...
B: That's my username just about everywhere I go on social media. It's my GitHub handle as well, but I created this username way back when Yahoo Mail was the dominant cloud email provider; tigers are my favorite animal.
B: It's different; there have been a few people that have definitely done that.
A: Yeah, okay, all right; hopefully no harm was done. We'll hear from you in a second, but let's first switch it over to Artem. Can you quickly introduce yourself?
A: Actually, I think we first want to hear from Ian a little background story on who you are and what made you look into Keptn. Then the major part of the presentation will be from Artem, both showing stuff in slides, the architecture, but then also doing a live demo, and at the end we will kind of wrap it up and let Ian conclude on where the journey is heading. And everyone that is online right now:
A: If you have questions, please use the Q&A feature in Zoom; I will moderate. And if you watch this as a recording: as I said, I just posted the two LinkedIn links for Ian and Artem, if you have questions for them. If you have questions around Keptn, then I can also suggest you just join the conversation on slack.keptn.sh.
B: All right. So, what we did at Volusion previously, before we discovered the glory that is Keptn:
B: We had this haphazard approach towards deployments, and there was a very well-intentioned proposal for doing a monorepo and trying to get our application and infrastructure deployed all together at once, and it turns out in practice this caused a bit of confusion, right? Because what ended up happening is, you know, let's say we were planning to do a deployment on Thursday, and so we stage up some changes to our infrastructure repository, and then Tuesday comes along and some other team says:
B: Oh, we need to deploy our app today. So they do, and then the staged changes would go out all at once. And so, when you look at things like twelve-factor, we had a clear blending of kind of the infrastructure and the applications and the deployments, and it was introducing constraints, it was introducing complexity, and time, really. So we started looking around and we were trying to figure out, okay:
B: What can we do to separate our infrastructure from our application deployments, our microservice deployments, as we're transitioning into a microservice-based environment? We really need to improve our deployment stability. We really need to make sure that when we deploy, if something does go awry, we can quickly roll back, and that we would be able to build in some automated measure of the quality of our code. And so we started looking around, right: we looked at Jenkins, we looked at Spinnaker, both really great tools for their purpose.
B: This looks like the tool for us. And for us, we started looking at, okay, you know, if we were to implement Keptn it's going to take us a little while to figure it out, because it is such a new technology, and so we started looking for partners to help us along our journey with enhancing our deployments, and really enhancing our quality and confidence in our deployments and releases. And so we found NTT DATA, right?
B: I had worked with Matt and Artem in the past; their team is just really awesome DevOps engineers. We called them up and we said: hey, we've got this cool project, we want to implement Keptn, and they were super excited. The minute they started reading through it, they said: wow, this looks incredible, let's figure something out on this. And so, to that end, I think that kind of segues over to Artem pretty well. Artem, take it away.
D: Thank you. Again, just a moment, let me move to the next one. Okay, let's talk about the solution itself, and we start with the technology stack. Initially we set up the control plane on Keptn 0.8, and for today's demo I have replicated the environment and upgraded to version 0.10; it was quite smooth, so that is a great thing. And we also have monitoring; it is based on Dynatrace.
D: Also, we initially tried to use Prometheus, but it was not as great, because we actually planned to use multi-cluster deployment, and when we implemented the Keptn multi-cluster setup with Prometheus we had some challenges, so we decided to move to Dynatrace. For the CI pipeline we use GitHub Actions, so GitHub Actions actually builds the Docker images for us and triggers the Keptn CD pipeline, and for performance testing and functional testing we use JMeter.
D
It
is
argo
allowed
and
we
choose
it
in
order
to
get
blue
green
deployments
out
of
the
box,
and
it's
actually
also
quite
flexible
with
english
controllers.
It
supports
about
five
different
indus
controllers
and
we
plan
to
use
nginx
ingress,
so
it
was
great
fit
for
us,
and
so
we
choose
to
use
algorithms
for
cloud
provider.
We
we
deployed
everything
in
gcp
and
they
use
instructions
called
terraform
in
order
to
deploy
all
the
instruction
components
like
kubernetes,
cluster
and
network.
D
So
let's
move
to
the
next
slide.
That
is
deployment
diagram
and
it's
a
show
exec
execution,
execution
architecture
of
the
system
and
component.
So
you
can,
you
can
see
on
the
side
three
clusters
for
control
plane.
We
have
a
separate
kubernetes
cluster
that
where
we
deploy
captain
control
plane
and
this
this
includes
all
core
components
of
captain
like
api
service,
shipyard,
controller
storage,
etc.
D
So
it's
actually
provides
us
dashboard
things,
but
we
can
orchestrate
our
csd
pipeline
and
you
can
see
also
two
identical
clusters
stage
and
production
and
on
every
cluster
we
have
captain
execution
plan
and
captain
execution
plan
communicate
with
a
captain.
Control
train,
gets
events
and
from
control
plane
and
trigger
integration,
and
you
can
see
that
we
use
archer.
Alas,
j
meter,
helm,
ingest
control,
so
it's
they
get.
Events
from
the
captain
execution
plan
and
trigger
some
deployment
for
us
or
any
other
functionality.
D
Also,
we
have
dynatrace
actually
dynamiters
get
their
matrix.
Good
answers
component
gets
a
metrics
from
the
cluster
and
applications
and
sent
it
to
dynatrace,
and
captain
dynasty
has
great
native
out-of-the-box
integration,
so
it's
actually
created
automatically
create
tags
for
us
and
when
you
add
new
application,
it's
automatically
propagated
to.
A
Hey
artem,
just
a
quick
follow-up
here:
I've
posted
the
link
to
the
argo
service,
the
on
on
github.com.
So
this
is
one
of
the
integrations
that
we
are
featuring
today.
So
in
case
anybody
is
interested.
It's
called
the
argo
service
which
really
is
actually
for
argo
rollouts,
and
the
other
thing
that
I
wanted
to
mention
here.
You
are
it's
really
nice
that
you're
really
leveraging
the
architecture
from
captain
as
it
is
intended.
You
have
the
central
control
plane
and
then
you
have
the
individual
execution
planes
that
are
subscribing
to
events
per
particular
stage.
A
So
we
probably
see
this
later
on.
If
for
those
folks
that
are
new
to
captain
in
captain,
you
set
up
projects,
projects
have
stages
and
then
within
each
stage
you
can
define
what
we
call
automation
sequences
and
when
kept
orchestrating
an
automation
sequence
in
a
stage,
it
will
then
basically
trigger
those.
We
call
them.
Captain
services
that
are
listening
for
events
in
that
particular
stage
and
you've
nicely
separated
this
with
clusters,
because
we
often
get
them
in
questions.
Like
you
know,
do
we
support
multi-cluster,
support,
multi-cluster
deployment
or
multi-cluster
automation?
D
Yep
yeah
and
that's
multicultural,
multi-cluster
setup
really
give
us.
I
would
just
want
to
say
benefits
that
we
get
with
this
multicultural
setup.
First
of
all
it
what
I
see
it
is
isolation
between
environments.
So,
if
something's
going
wrong
in
one
environment,
it
will
not
influence
on
production.
D
It
is
one
thing
another
thing:
it
is
security,
so
we
have
different
api
keys.
We
can
totally
is
isolate
them
and
they
can
be
in
different
networks
in
different
accounts
organization,
etc.
So
yeah,
that's
this
one,
I
think,
is
really
important.
D
Okay
and
next
slide
is
the
cicd
pipeline.
As
I
already
said,
we
use
for
we
use
github
action
for
ci
pipelines
that
actually
build
our
image
test,
scan
and
push
it
to
jfrog
artifactory.
It
is
docker
registry,
central
docker
registry
and
all
other
components
actually
pull
docker
images
for
deployment
from
this
registry,
and
you,
you
can
see
also
a
captain
cd
pipeline.
D
That
includes
two
stages:
the
security
and
road
and
every
stage
you
has
own
tasks
that
actually
trigger
different
integrations,
for
example,
deployment
that
is
arco,
roll
out
and
helm
j
meter.
It
is
test
evaluation,
dynamic,
trace
and
promote.
It
is
argo
rolled
out
again
between
stages.
We
have
quality
gates.
For
example,
we
evaluated
all
that
past,
we
evaluated
our
slo
and
it
meet
our
requirements.
D
For
example,
we
got
manual
manual,
approval
from
user
acceptance,
test
team
and
the
code
can
be
promoted
to
the
next
stage
and
below
you
can
see
clusters
and
the
application
itself
includes
these
three
components.
So,
first,
first
of
all,
it
is
deployment.
It's
actually
pods
of
the
applications
is
running
containers
services
they
allow
kind
of
these
abstractions
just
to
connect
all
ephemeral
parts
of
one
deployment
as
a
service
and
ingress
allow
us
to
expose,
through
nginx
english
controller,
our
application
to
the
internet.
D
Let's,
let's
move
forward
further
okay
and
this
one,
it
is
an
another
viewpoint.
So,
as
I
already
said,
we
use
blue
green
deployment
and
because
of
this
we
use
argo
allowed.
D
D
A: Just a reminder for everyone: if you are live and you have questions, please use the Q&A feature in Zoom, and if you watch this later on as a recording and you have questions, then there are several ways to get in touch with us. The easiest is if you join us on Slack, slack.keptn.sh, or you can also reach out to either Artem or Ian; the LinkedIn links should be in the recording at the very top somewhere, because I posted them earlier.
D
Okay,
so
this
is
repository
of
application,
so
it
is
a
sample
application
and
every
application
has
this
folder.
Captain
cd
that
includes
shipyard
file
and
shipyard
file
is
the
main
file
that
actually
orchestrates
all
the
activities.
That
captain
will
execute.
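The shipyard file itself was only shown on screen. As a rough sketch, a two-stage shipyard for a project like this could look as follows, following the Keptn 0.2 shipyard spec; the project, stage, sequence, and task names here are illustrative, not Volusion's actual file:

```yaml
apiVersion: "spec.keptn.sh/0.2.2"
kind: "Shipyard"
metadata:
  name: "shipyard-hello-service"
spec:
  stages:
    - name: "staging"
      sequences:
        - name: "delivery"
          tasks:
            - name: "deployment"
              properties:
                deploymentstrategy: "user_managed"
            - name: "test"
              properties:
                teststrategy: "performance"
            - name: "evaluation"
            - name: "release"
    - name: "production"
      sequences:
        - name: "delivery"
          # chained stage: runs once the staging delivery finishes
          triggeredOn:
            - event: "staging.delivery.finished"
          tasks:
            - name: "deployment"
              properties:
                deploymentstrategy: "user_managed"
            - name: "release"
```

Each task name (deployment, test, evaluation, release) corresponds to an event that an integration on the execution plane can subscribe to.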
D: I want to highlight one moment: we use the deployment strategy user_managed, and I want to explain the reason. The Keptn Helm integration supports three different strategies for deployment: direct, blue/green, and user_managed.
D
Actually
it
is
deployed,
as
it
is
count
chart
and
it's
give
us
how
control
on
the
deployment
process.
So
we
actually
we're
using
this
strategy.
We
actually
integrated
integrated
with
engineering
controller
and
manage
this
blue
green
deployment
through
the
hem
and
not
through
the
captain.
A
And
the
art-
and
maybe
that's
a
good
point,
because
a
lot
of
people
have
also
asked
you
know:
how
does
this
work?
What
is
what
what
deployment
strategies
are
allowed
and
do
you
only
support
blue
green?
Do
you
only
support
each
do
these
are
common
questions
that
people
come
up
with
because
with
blue
green
and
direct,
we
we
currently
assume
there's
istio,
because
we
automatically
create
your
virtual
service
crds,
but
with
the
user
managed
option,
you
can
specify
any
help
chart
just
as
usual,
but
any
ingress
controller
whatever
it
is.
A
So
this
is
why
let
me
also
post
the
link
now
into
the
chat
so
that
everyone
that
is
interested
in
our
integration
with
helm
get
some
more
documentation
on
these
individual
deployment
types
or
we
call
them
deployment
strategies.
D
Okay,
thank
you,
yep,
and
so
I
have
this
part.
So
we
have
evaluate
part.
D: So it actually triggers the evaluation task that communicates with Dynatrace, gets the indicators from Dynatrace, and evaluates them based on the SLO, and we have a manual approval. You can also see the properties pass and warning. So here we might have some flexibility: for example, if the evaluation is green, we might specify that it will pass automatically, and if something is warning us, we can ask for manual approval, so kind of add more steps. And if everything is green, we just automatically pass the approval; if it's not green, it will be validated by a person.
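In shipyard terms, the behavior Artem describes, auto-approve on green and ask a human on warning, maps to the pass and warning properties of Keptn's approval task. A hedged fragment of such a sequence:

```yaml
# Fragment of a sequence: evaluate against the SLOs, then gate the promotion.
# "automatic" and "manual" are the two approval strategies Keptn supports.
tasks:
  - name: "evaluation"
  - name: "approval"
    properties:
      pass: "automatic"   # evaluation green: promote without a human
      warning: "manual"   # evaluation warning: a person must approve
```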
D
Okay,
let
me
move
to
other
folders,
so
we
have
hello,
close
service
stages,
and
here
we
have
three
different
folders
common
folders.
It
includes
all
the
services
that
we
deploy
and
it
doesn't
I
mean
it
will
be
deployed
to
all
environments.
D: So, for example, there is Dynatrace, and here we have the SLIs, the service level indicators. We use these five indicators from Dynatrace, and, yeah, let me just move to the Dynatrace configuration and give you some information that I think may be interesting to you. You can see here that it actually searches using tags.
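The SLI file maps indicator names to Dynatrace metric queries. A trimmed, illustrative example in the dynatrace-sli format; the metric selectors below are common defaults, not necessarily the exact five indicators used in the demo. The `$PROJECT`/`$STAGE`/`$SERVICE` placeholders are resolved against the tags Keptn creates:

```yaml
spec_version: "1.0"
indicators:
  response_time_p95: "metricSelector=builtin:service.response.time:percentile(95)&entitySelector=type(SERVICE),tag(keptn_project:$PROJECT),tag(keptn_stage:$STAGE),tag(keptn_service:$SERVICE)"
  error_rate: "metricSelector=builtin:service.errors.total.rate&entitySelector=type(SERVICE),tag(keptn_project:$PROJECT),tag(keptn_stage:$STAGE),tag(keptn_service:$SERVICE)"
```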
D: Keptn automatically creates, so this integration automatically creates these tags: when you set up the integration between Keptn and Dynatrace, it will create a rule for automatic tag creation.
D: Some values we get from secrets, GitHub secrets: for example, for the JFrog pulls, you know, to get the artifacts, and the authorization keys, and so on. And we have full control through Helm, because we specified the user_managed deployment strategy, so we have full control over the ingress deployment.
D: So you can see that we deployed two ingresses, because we have blue/green deployment. One ingress is the main one that is available for the end users, and the other one has the prefix "review"; it's actually for the testers who will review the application before promoting it to production.
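With Argo Rollouts, this main/review split falls out of the blueGreen strategy: the Rollout keeps an active Service, behind the main ingress, and a preview Service, behind the review ingress. A minimal sketch with illustrative names and image:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: hello-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-service
  template:
    metadata:
      labels:
        app: hello-service
    spec:
      containers:
        - name: hello-service
          image: registry.example.com/hello-service:1.0.0
  strategy:
    blueGreen:
      activeService: hello-service          # routed by the main ingress
      previewService: hello-service-review  # routed by the "review" ingress
      autoPromotionEnabled: false           # promote only after the quality gate
```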
A: Perfect. By the way, I will also post the link to one of the tutorials that we've created especially for Argo Rollouts, in case people also want to follow along. What I really like about this is that you really followed one of our best practices.
A: That means you have your source code repo, like your demo keptn-hello-service, and there, under a Keptn folder, you specify all the Keptn-relevant files, and you can also split them up into individual stages like production and staging, and then your CI/CD, in your case your GitHub Actions, pushes them to Keptn at the right time. This is really perfect, because with this you keep everything together in a single repo. If you want to follow that approach, that's really great.
D: If you go to Actions, GitHub Actions, we have the CI pipeline; it's really straightforward, it just gets the credentials for Artifactory, builds the image, and pushes it to Artifactory. And if I go to the CD pipeline: the CD pipeline actually gets code from our two repositories.
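Volusion's actual workflows were only shown on screen, but the CD side can be sketched roughly like this; the workflow, project, secret, and registry names are placeholders, and it assumes the Keptn CLI is already installed on the runner:

```yaml
# .github/workflows/cd.yml (illustrative sketch, not Volusion's actual pipeline)
name: cd-pipeline
on:
  workflow_dispatch:

jobs:
  deliver:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      # Authenticate the Keptn CLI against the control plane
      - name: Authenticate Keptn
        run: |
          keptn auth --endpoint "${{ secrets.KEPTN_ENDPOINT }}" \
                     --api-token "${{ secrets.KEPTN_API_TOKEN }}"

      # Create project/service if missing, then trigger the delivery sequence
      - name: Trigger delivery
        run: |
          keptn create project hello-service --shipyard=keptn/shipyard.yaml || true
          keptn create service hello-service --project=hello-service || true
          keptn trigger delivery --project=hello-service \
            --service=hello-service \
            --image=registry.example.com/hello-service:${{ github.sha }}
```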
D: The first one is the application code, and the second one is what we call the Keptn scripts: it includes the integration scripts to create the project, create the service in the project (I'm talking about a Keptn project and a Keptn service), and trigger the deployment in Keptn. So right now, as far as I know, GitHub Actions already has a plugin for Keptn integration, so that may actually be replaced by a GitHub Action plugin. We get the credentials for GCP, and here we set up the prerequisites, and we have two actions; one of them replaces the staging placeholders.
D: So we have some secrets in a values YAML file, and we replace the placeholders with the real secret values, and, yeah, after that we trigger Keptn: we actually clean it, create a new Keptn service, and trigger it.
D
One
of
those
potential
improvements
that
I
see
that
actually
integrating
a
code
image,
build
docker
image,
build
with
so
right
now
it's
two
separate
pipelines-
and
is
I
mean
when
one
pipeline
is
finishing-
you
don't
trigger
captain
deployment,
so
this
this
can
be
a
future
improvement.
D: Yep, so it triggered the deployment, and we can move to our Keptn. We have the hello-service project, and it has a hello-service service, and you can click on sequences and see the delivery. The deployment is already done, and the next task is test: it triggers the JMeter service. We already got the approval. It's the event-driven architecture of Keptn, and you can see all the events here: first of all, Keptn sends a triggered event to one of the integrations, and when the integration gets this event, it sends back a started event.
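Keptn events are CloudEvents, so the triggered/started/finished handshake Artem shows looks roughly like this on the wire; the payload below is trimmed and illustrative:

```json
{
  "specversion": "1.0",
  "type": "sh.keptn.event.test.triggered",
  "source": "shipyard-controller",
  "contenttype": "application/json",
  "data": {
    "project": "hello-service",
    "stage": "staging",
    "service": "hello-service",
    "test": { "teststrategy": "performance" }
  }
}
```

The integration answers with a corresponding `sh.keptn.event.test.started` event and, once the work is done, a `sh.keptn.event.test.finished` event carrying the result.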
A: Maybe also for those people that have not seen Keptn before: I think the event-driven architecture is the nice thing again, because if you look at your shipyard file, you really only specify that you want to execute a test. This is your test, your process definition; we call them automation sequences: deploy and test.
A: You could have different testing tools subscribed to the test event in different stages or for different projects. You can therefore enforce one consistent process, or, as we call it, sequence, but let's say every project team can say: well, we are using JMeter because that's our tool, and the next one is using K6.
D: Yep, and we got the answer from the test, so it actually executed and it's all green; we passed our test. Oops, sorry. And the next step is evaluate: here it connects to Dynatrace. Let me try to get some information.
A: So, by the way, there's something new: we've merged the dynatrace-service and the dynatrace-sli-service into one service, which means, if you update to the latest dynatrace-service, you can actually get rid of the dynatrace-sli-service. There's no need for that anymore, because the dynatrace-service is doing all the work, right? That's a nice thing.
D: Let's move to the services tab, and here we have the evaluation information. You can see that we are evaluating one metric, it is response time, and our requirements passed, so because of this it is green and the score is 100.
D: So if we had, for example, ten metrics, we could actually calculate this score. For one metric it's just 100, because the one metric passed. But if you have, for example, several metrics, the score can be more flexible.
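As a rough illustration of that scoring: Keptn's lighthouse weighs each SLO objective, and in its default behavior a passed objective earns its full weight, a warning earns half, and a failed one earns nothing; the exact semantics depend on the SLO file, so treat this as a sketch, not Keptn's actual implementation:

```python
def evaluation_score(objectives):
    """Compute a quality-gate score roughly the way Keptn's lighthouse does:
    each SLO objective has a weight; a passed objective earns its full weight,
    a warning earns half, and a failed one earns nothing."""
    total = sum(o["weight"] for o in objectives)
    earned = 0.0
    for o in objectives:
        if o["status"] == "pass":
            earned += o["weight"]
        elif o["status"] == "warning":
            earned += o["weight"] / 2.0
    return 100.0 * earned / total

# One passing metric: score 100, as in the demo
print(evaluation_score([{"weight": 1, "status": "pass"}]))  # 100.0

# Several metrics with equal weights: 2 pass, 1 warning, 1 fail
mixed = [
    {"weight": 1, "status": "pass"},
    {"weight": 1, "status": "pass"},
    {"weight": 1, "status": "warning"},
    {"weight": 1, "status": "fail"},
]
print(evaluation_score(mixed))  # 62.5
```

The resulting percentage is then compared against the total_score pass/warning thresholds of the SLO file to decide green, yellow, or red.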
A: This is also pretty nice, and I want to say thank you; I see Johannes is on the call, and the team has built these nice output capabilities. So in this case, in the promote phase, when you're actually calling Argo to release that blue/green deployment and switch it over, we see the full output of what Argo has done. So that's also pretty nice. And now I think this sequence was done and it already kicked off the next sequence in production.
A: So that means you've decided in staging, where you basically flip the switch in the end, that everything is good. In staging you have your new version running as blue or green, and now it's ready to go to production. You can additionally have another approval at the beginning of that stage, because you can say: hey, I still want to make sure that somebody manually clicks on it, even though, as you mentioned earlier, you could say automatically approve it in that case.
A: Keptn does the whole orchestration, in your case deployment, testing, evaluation, multi-stage, and from your CI you're simply pushing the relevant files that are needed for that orchestration to Keptn, and then Keptn does its job. That also gives you the flexibility that with every code change you can change even your shipyard file: you could change the process that Keptn is using, and all the relevant files that Keptn then needs to orchestrate the whole delivery pipeline. Yeah, that's really, really cool.
D: And in this slide deck we actually have screenshots of the process. These are the GitHub Actions activities for the CD pipeline and how we trigger the Keptn deployment; that is the quality gate and how we evaluate it, and you can see that all passed and we actually moved forward. And there are two stages in our shipyard file, production and staging; they're pretty similar.
A: To comment on one thing, if you can quickly go back: what you've done here, on the left side staging and on the right production, is that in staging you have split up the whole end-to-end process into three individual sequences. You have the delivery sequence that just does delivery, and if that succeeds it will trigger the evaluation sequence, which does the evaluation and then the approval, and once this is done, then it does the promotion.
A: So in theory you could also put this into one long sequence if you wanted to, but I think it's nice that you separated it out, because this also gives you some flexibility.
A: It's also a nice separation, I think, breaking bigger sequences into smaller pieces. And for those people that have not seen it: you have the option to specify as many sequences as you want, group them under a stage, and chain them together automatically through the triggeredOn property. However, you could, for instance, also just call any sequence in here; you could theoretically call just evaluate, right? You say: hey, I have something running already.
A: I don't need to deploy anything anymore, but I want to go straight to evaluate and trigger the evaluation on its own. Then you would need to hand over some additional context, like what timeframe you want to evaluate; the timeframe is normally taken from the length of the test execution if you have a test step in there before. But yeah, this is really cool.
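For reference, such a standalone evaluation can be kicked off from the Keptn CLI with an explicit timeframe; the project, service, and stage names below are placeholders:

```shell
# Evaluate the last 5 minutes of monitoring data for the staging stage,
# without deploying or testing anything first.
keptn trigger evaluation --project=hello-service --service=hello-service \
  --stage=staging --timeframe=5m
```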
D: Okay, and here is what was done: we actually onboarded five applications, and the onboarding time right now for one application is one hour, and, yeah, we have many more applications in the pipeline. And what's next is actually the next steps, where we want to discuss what improvements we can add to the current pipeline.
B: Yeah, so, you know, very clearly we saw how Keptn was able to help us run our JMeter tests for performance very quickly and have them as a criterion for deployment, which is fantastic. For us, actually, we have a QA testing tool, Cypress, for running end-to-end tests, and we'd love to get Cypress integrated with Keptn, and there are a few very interesting methods we can approach, either through a custom service or through what is new:
B: this Keptn webhook service that was added recently. If we can get Keptn to invoke a webhook, we can get something else to execute those Cypress tests and then report the results back, which is kind of incredible for us.
B: You know, at Volusion, onboarding an application takes an hour, and the reason it takes an hour is because we have to wait for one of our DevOps engineers to register the application with Keptn. So we're looking into automation methods for when the developers come in and create a new microservice or some app, to have it automatically registered with Keptn immediately when the developers are ready, rather than having to wait for a DevOps engineer to respond to the request to add that new service to the Keptn pipelines.
B: And then, you know, one of the other things that we were looking at using the Keptn webhook service for, or maybe even the job executor, has to do with databases.
B: If we can get Keptn to provision that database, rather than having to get somebody, a member of our data team, to provision the database ahead of the Keptn rollout, that would help us to speed up our delivery even more. We've got a lot of really cool, interesting capabilities we're betting on Keptn for, to make things go faster and build a little bit smarter for us moving forward.
A: Very, very great to hear, and especially on the webhook service I want to say thank you again. I see Johannes is still on the call; he's kind of the leading product manager for Keptn, and thanks to him and his team we have the webhook service now in Keptn 0.10. And for the webhook service I want to highlight that there are two options for the webhook. I think Johannes calls the first one the silent mode, where you can just trigger a webhook to notify an external tool about a certain event.
A: A classical example would be that you want to push a message to Slack or MS Teams or whatever. The other one is more like an active webhook, where you can trigger an external tool and then Keptn waits until this external tool communicates back that it has finished its task. This is the classical use case where you want to call something like a testing tool, trigger it, and then it calls back, and things like that.
A: I also see that Johannes actually asked a question in the beginning: Artem, great to see how you leverage Keptn exactly the way we designed it. Do you have any additional enhancement requests? What is missing in Keptn from your point of view? Are there any other things that you're missing, either Artem or Ian?
D: Thank you guys for releasing features like the job executor and the webhook service; I think they will simplify our life, yep. I think the Cypress tests are okay, kind of straightforward: Keptn provides documentation and a template for how to create a custom integration, so I think adding Cypress to Keptn should be a straightforward process, so yep.
A: Cool. I just posted the link, by the way, to the webhook service; I did a five-minute recording on how to quickly, you know, click through it, and I posted this also in the chat for everyone. Cool. I had a couple of other questions that came in in the beginning. Ian, you mentioned in the very beginning that you tried Prometheus, but you had challenges.
B: Yeah, so when we did Prometheus, I mean, as a metrics aggregator, getting the information about it and looking at that service, it was fine, right? It was capable of getting the job done back when we first did it. I forget which version of Keptn we first implemented on, but the Prometheus service was limited in that it wasn't capable of picking up on things like error-rate spikes for SLIs, right?
B
The
prometheus
was
able
to
monitor
things
like
cpu
right
and
it
was
able
to
memory,
or
you
do
watch
memory
right
and
so
for
those
quality
gates.
We
were
fine,
but
we
also
wanted
to
quality
gate
around
error
rates,
and
so
that
kind
of
necessitated
us
to
move
over
to
dynatrace,
which
was
you
know,
far
and
away
better
in
terms
of
the
capabilities
to
add
more
sli
slows.
B: Now, it's nothing against Prometheus. We could have continued running both of them, but Dynatrace also happens to do CPU and memory and things like that, so we were just like: yeah, you know what, let's just switch over from Prometheus to Dynatrace, because it does that and so much more.
D: Another challenge was that we use this multi-cluster deployment, and Prometheus is deployed on every cluster, and we had a challenge with communicating with Prometheus, because it belongs to a particular cluster, and for the different stages we have different Prometheus API URLs. I remember that we had a challenge communicating with Prometheus for a particular stage, yeah.
A: Multi-cluster Prometheus metrics queries, because it's handled differently in Prometheus than in Dynatrace: Dynatrace is the same endpoint and you just pull the data through tags, through metadata, but with Prometheus you really need to go to a different Prometheus server. And it seems, and hopefully I pronounce your name correctly, Shandan Kumar, that you are happy this is coming, because you sent smiley faces and hearts. So, yeah; and also, Artem, to you, Johannes is offering: for any additional enhancement requests, just feel free to reach out to Johannes on the Keptn Slack.
A: Thank you. There's one more question, and this also, I think, goes to something from the beginning: quality gates. Ian, you mentioned in the beginning that quality gates were something that drew you to Keptn, and I just want to anchor that this is really something that we see as kind of the main feature that a lot of people see in the beginning and immediately understand the value of, and then later they realize that Keptn, while at the core it uses this SLO capability to evaluate the health of the system, the health of the deployment,
A
It
really
then
allows
you
to
orchestrate
around
that
data
point,
and
this
is
the
nice
thing.
This
is
why
I
like
what
you
did
right,
I
mean
yes,
quality
gates
are
at
the
core,
but
then
you
said
well
now
we're
doing
deployment
and
testing
and
promotion
and
multi-stage,
but
just
as
you
were
initially
drawn
to
the
quality
gate.
This
is
what
most
people
just
start
with,
because
if
you
have
already
invested
in
your
deployment
and
in
your
test,
automation
and
you're
good
with
it,
then
that's
fine.
You
don't
need
to
replace
the
existing
investment.
B: Oh yeah, for sure, it was very powerful, and, you know, again, just like you said, we were drawn by the quality gates, but now, especially with the job executor, we're looking at: okay, can we provision databases when database schemas need to change? Can we use Keptn to run the migration scripts to change those schemas ahead of our deployments of our application code? There are a lot of doors that are opening as we continue to explore the capabilities of Keptn.
A
All
right,
gentlemen,
I
would
say
a
lot
of
thank
you.
I
found
johannes
from
oleg
oleg.
I
need
to
also
introduce
you
to
olek
olek.
I
just
joined
our
team,
maybe
oleg,
let
me
just
if
you're,
okay
with
it.
Let
me
quickly
promote
you
to
a
panelist
as
well,
so
that
you
can
quickly
say
hi
to
them
and
also
to
the
audience.
Olek
recently
joined
us,
and
you
mentioned
he
also
looked
at
jenkins,
but
then
decided
to
go
with.
A: I was just saying it's a perfect story, because Ian was saying at the beginning that they looked at Jenkins and other tools but then decided to leverage Keptn, and you also just made your transition from being very core and vital to the Jenkins community, and you still stay vital there, because Jenkins is a big and thriving community. But it's really great that we have you now on the Keptn side as well, driving the community here.
C: Yeah, there are a few particular differences between Keptn and Jenkins. Jenkins is a great automation server where you can do everything, but it's rather a classic automation server, while Keptn is rather driven by events, and also it doesn't really do automation; it rather does orchestration, so it invokes other tools via various integrations. Well, with the Keptn job executor it of course becomes a bit more tricky, because, yeah, ultimately you can automate everything with Keptn too if you want; it's a good question whether you should, but yeah.
A: Very good. All right, folks, I think we've almost consumed most of the time, most of the hour. This is recorded, for the people that are listening to this or watching this on YouTube.
A: Well, this is going to be published on YouTube, probably tomorrow, and then, if you listen to this and have questions, join us on the Keptn Slack, reach out to Ian and Artem if you want to on LinkedIn, and make sure you also connect with Oleg; he's going to be a public face of the Keptn community going forward. And with this, I am looking forward to many, many more of these stories from you, and also from others that you've just inspired to try something new and different.