Description
A field report from the amasol Managed Cloud offering
Presenters: Patrick Hofmann, Andi Grabner
Learn how amasol is using Keptn, GitLab, JMeter & Dynatrace on their way to the autonomous cloud with automated build, deployments & deployment validation with Keptn Quality Gates running in the Azure Cloud.
Learn more: https://keptn.sh
Get started with tutorials: https://tutorials.keptn.sh
A: The idea of these community webinars is that we're not just talking about what we've done on the Keptn side, but really about what our users have done. That is also the focus of the community webinars: seeing how people are implementing Keptn, as opposed to the developer meetings that we have on a weekly basis, where the development team shows what they are working on. These meetings are about our users, and with that, Patrick, I want to say hi and hand over to you.
B: Sure. I'm Patrick Hofmann, a senior consultant at amasol. We specialize in everything around IT monitoring and IT management: we have different partners, implement their solutions for our customers, and provide services around everything to do with monitoring. I work in the application performance monitoring field there, and I helped build what we call our amasol Managed Cloud: we provide managed service offerings for our customers, and that's the part where we actually utilize Keptn.
A: Perfect, cool. Before I let you jump over to the slides and the presentation, again a reminder: feel free to put questions into the questions field. I see the first one has already come in. Julian has asked whether there are plans to add a login to the Keptn Bridge. While this is not part of this community webinar today, we're happy to answer it, and I can already answer: yes.
A: This is coming in; in fact, it's already part of the Keptn 0.6.2 release, where you can put the Bridge behind a login. So just a reminder: ask questions. Hopefully they're around what we show here, so we can actually leverage the time that we have Patrick here, but if it's anything related to Keptn, put it into the question feature. Okay, all right, Patrick, let's jump into the slides, because we have a couple before we go. Exactly, so I think, when we sat down...
A: We had a couple of things we wanted to highlight, and this is what I asked you to prepare, so let me just walk through the agenda and then we can take questions as they come along. I'm really interested, and people keep asking: why do people use Keptn? What is the real reason behind it all? People are also always interested in hearing where people are installing Keptn, because Keptn is supported on many different platforms. Then the major part will be the demo that you're giving, and then we'll wrap up at the end.
B: Let's start with a short overview of why we utilize Keptn at amasol; I have a slide on that here. Basically, we started using it because it really helps and enables us to deliver new features and updates to our customers while still maintaining a high standard of quality for them. What we use to do that is Keptn quality gates, the feature we utilize here to make sure that only good builds make it into production.
B: Keptn also saves us a lot of time there; we estimate around 80%, because we don't need to build manual pipelines for all our deployments, and we can onboard new projects pretty easily without having to write new pipelines for every project we want to onboard. That was one of the major reasons we chose Keptn.
A: Cool. This is also a great justification for the reason why we initially built Keptn, right? The pipelines from the past have been great for the applications that we built and deployed in the past: more monolithic and, let's say, deployed less frequently. Now we're building event-driven microservice applications that we all deploy independently, so we should also rethink our delivery pipelines.
B: And Keptn enables that, as it supports a lot of the tools we already use, which we'll take a look at now. With its message-based architecture it also helps us integrate new tools we may need in the future; it's pretty easily extendable. So let's take a look at our tech stack.
B: What we use for our applications is a MEAN stack: MongoDB as the database, a back end built on Express.js running on Node.js, and an Angular front end. That's basically the common base for all the web-based applications we deploy. We use GitLab as our code repository, and we also migrated all our continuous integration, which was previously on Jenkins, over to GitLab as well, so we have one central view for all our code repositories and continuous integration pipelines.
B: For deployment, we utilize the Helm service that comes with Keptn to automatically deploy our new versions, and that's actually pretty easy. We weren't using Helm before we switched to Keptn, but if you already have your Kubernetes YAML files for all your deployments, it's basically just a matter of rearranging them to fit the Helm chart structure; then you can just keep going and don't have to adapt very much to use the Helm service for deployments.
B: For ingress we use NGINX, and with cert-manager we get easy HTTPS certificates; you don't have to manage anything there. For testing, we utilize JMeter for the functional and load tests, and we also use the JMeter service that comes with Keptn, so we don't have to deploy any test runner or anything. We just write the load test scripts and add them, following a GitOps approach, to the repository we already have; Keptn then automatically pulls them and executes them for every new artifact we want to deploy. And then for monitoring, we use Dynatrace.
B: It's automatically deployed on our cluster with the OneAgent Operator, so it's also pretty easy to get rolled out, and every new service we deploy is automatically monitored by Dynatrace. We can then utilize all the metrics we have there in the quality gates from Keptn's lighthouse service: we define the SLOs, and the SLIs we pull from Dynatrace, and have quality gates so that only the good builds make it to production.
B: Okay, then I would say let's jump right into the live demo and take a look at what Keptn looks like in our environment. Keptn comes with a UI; it's called the Keptn Bridge.
B: It's basically an overview of your whole Keptn workflow. At the top you can select your project; I currently have our self-service and user management portal selected here. You get an overview of all the stages: we have a development and a production stage into which we deploy. And then you have an overview of the services; for this specific application we have a database (a MongoDB database), a front end, and a back end.
B: If we expand one of the services, we see all the recent Keptn workflow runs, the deployments we did, and if we select one, on the right side we get all the information that Keptn has about that run. Every box we see here is one event that Keptn created. It started with the configuration change that we pushed, saying that there was a new artifact we wanted to deploy.
B: The Helm service automatically deploys the new artifact, and the JMeter service automatically starts tests against it. They're pretty quick; after two minutes they're finished. Then the lighthouse service automatically pulls all the metrics we want for our quality gates from Dynatrace and does the evaluation.
B: There are basically two things we need to define for the quality gates. First, SLIs, service level indicators: the metrics you want to measure. For this service, we look at the response time for the slowest five percent of our users, the median response time, and the error rate, to gauge how the new service is doing. Then, to evaluate that, we define SLOs, service level objectives, that every new artifact has to meet in order for the deployment to be successful.
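The three SLIs Patrick lists typically live in an sli.yaml consumed by the Dynatrace SLI provider. A minimal sketch, assuming Keptn 0.6-era indicator names and Dynatrace metric selectors; the exact queries in amasol's setup were not shown:

```yaml
# sli.yaml (illustrative): maps indicator names to Dynatrace metric queries.
# The metric selectors below are assumptions, not copied from the demo.
spec_version: "1.0"
indicators:
  # response time of the slowest 5% of users (95th percentile)
  response_time_p95: "builtin:service.response.time:merge(0):percentile(95)"
  # median response time (50th percentile)
  response_time_p50: "builtin:service.response.time:merge(0):percentile(50)"
  # rate of failed requests
  error_rate: "builtin:service.errors.total.rate:merge(0):avg"
```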
B: Those SLOs are defined in a pretty simple YAML format; we can actually take a look at it here. For every objective we have, we can define criteria for when a build should pass. In this example, everything below 100 milliseconds, as an absolute value, is fine. We can also add a warning threshold if we want a warning when the build would pass but may be critical: here, everything between 100 and 300 milliseconds is viewed as critical. Those are the absolute values, but we can also add relative values, so we can tell Keptn to compare against the last few build results. In this case we do that for the last 10 builds, and if the relative difference is above 200 percent, it will also fail, or warn us about, that build.
B: Then, for every SLO we have defined here, we can give it a weight, and that weight is used to calculate a final, total score that decides whether the build should fail or pass. We can then just set a simple percentage value for the whole build passing, or for the whole build getting a warning. That's basically all we have to define, and then we get a pretty easy overview of the last builds.
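The thresholds and weighting Patrick walks through would look roughly like this in the slo.yaml he shows. The numbers (100 ms pass, 300 ms warning, +200% against the last 10 builds) come from the talk; the indicator name, weight, and total-score percentages are illustrative:

```yaml
# slo.yaml (illustrative sketch of the objectives described in the demo)
spec_version: "0.1.1"
comparison:
  compare_with: "several_results"
  number_of_comparison_results: 10   # compare against the last 10 builds
  aggregate_function: avg
objectives:
  - sli: response_time_p95
    weight: 2                        # weight feeds into the final total score
    pass:                            # build passes if...
      - criteria:
          - "<=100"                  # ...below 100 ms absolute, and
          - "<=+200%"                # ...no more than +200% vs. previous builds
    warning:                         # between 100 and 300 ms: warning
      - criteria:
          - "<=300"
total_score:                         # simple percentages for the whole build
  pass: "90%"
  warning: "75%"
```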
B: Every tile here is one build, and you can see the status of the SLIs we defined: did they meet the SLOs, did we get a warning (orange), or did it fail completely. And then, if the evaluation is successful... let me scroll down a bit.
A: You know, we always show the full payload here, which is one of the improvements that will come. I mean, it's great because you see what's happening behind the scenes in Keptn, but for most, let's say, plain end users who don't care about the integrations with Keptn or what's happening underneath, it's not that useful. That's why I think we're also going to put some guard rails around this in the future; but it's still obviously great to have.
B: Basically, right here our evaluation was successful, so Keptn automatically sends another configuration change event and the build is deployed to production. We see it switches to the production stage here and then starts the whole flow from the beginning: it deploys the new version into production and runs some tests. Currently we're not running any tests in production.
B: We have manual synthetic tests in Dynatrace that are running and evaluating the production build, but one of our next steps will be to feed the data we get from those tests into Keptn as well, and do an evaluation based on other tests, like the synthetic tests we run, and get that feedback into Keptn too, to tell us whether the version in production is running fine.
A: For those people that don't know: one of our contributors wrote an additional Keptn service that listens for deployments and can then automatically create and manage synthetic tests. So every time Keptn deploys into an environment, say production, this service takes care of making sure that a synthetic test is created, also from different regions, so you have true SLA monitoring as well. It's pretty cool, yeah.
B: So then we can take a look at how everything starts. Right now we build everything in our GitLab CI solution, and we still trigger the deployments manually via the Keptn CLI, because we want a manual approval before we push anything into production. As we'll see later, Keptn will support that use case in the next version, and then we can fully integrate it. But let me switch to our GitLab.
B: Here we have our GitLab pipeline runs. I just built a new version before the webinar started, so we can take a look at that. With only the CI part in GitLab, the pipelines are actually pretty simple. There's one extra bit: in our office we have a Dynatrace UFO to visualize the build state.
B: So the first thing we do is send a short HTTP request, and the UFO starts flashing to show that a new build is running. Then we build our new artifact. We only build the parts that changed; in this case I only changed something in the back end, so we only built the back end here. We run some simple unit tests in the pipeline, then build a Docker image from the new artifact and push it to our Azure Container Registry, and then, of course, we notify the Dynatrace UFO again.
B: In this case, the new artifact has the version 9.1.0. Now we can deploy that version. You can do it via an API call, or, pretty simply, via the Keptn CLI; there's a simple CLI command to deploy it. We can do that now: we send a new-artifact event and tell it which project, in this case our self-service portal, and which service, our back end, we want to deploy.
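The CLI call Patrick runs has roughly this shape; the project, service, image, and tag values here are illustrative placeholders matching what he describes:

```shell
# Notify Keptn of a new artifact; Keptn then deploys it to the first stage,
# runs the JMeter tests, and evaluates the quality gates before promoting it.
keptn send event new-artifact \
  --project=self-service-portal \
  --service=backend \
  --image=myregistry.azurecr.io/portal/backend \
  --tag=9.1.0
```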
B: We don't want to deploy every build we do. For every build we get a new image, but we don't want every image we build to go into dev and on into production; we want to control manually what goes into production. Once we decide to push to production, we do what we just did: we send the event, and from there Keptn does everything automatically. It will now start with the deployment in dev, and if everything is fine again with the tests, it will automatically deploy to production.
B: It's running, so the first events from that run should already be here. If we switch to our Dynatrace instance, we have a simple Keptn overview dashboard that shows our two stages, dev and production; the services running there, their current throughput, any errors, and the response time; and you also see the browser monitors we run against our production environment to check that everything is available.
A
B
Here,
let's
jump
into
the
backend
service
that
we
just
deploy
it,
so
we
have
the
front
and
at
the
back
end
those
are
our
two
services.
Currently
we
are
deploying
the
backend,
so
let's
jump
to
that
and
I
did
a
few
deployments
if
they
already
so
we
have
70
new
events
and
if
we
expand
them
a
bit,
we
see
so
for
a
current
event.
We
see
here
a
new
deployment
has
started.
We
get
quite
a
few
meta
data
here.
B: We also get that information here, so we always have the reference: when a deployment happened, or whether any tests were started or stopped. So if you're analyzing a graph, you'll always see: oh, there was a deployment here; maybe the drop, or the increased error rate, has to do with that deployment. You always have that reference.
B: And since we utilize JMeter for our tests, we also pull out some metadata there. For all the tests we run, Dynatrace captures what it calls request attributes: information like the name of the load test we're running, here a performance test, and always the ID of the test that's running. So you can filter for a certain test, or, if you want to analyze a certain step that's failing, we import the step names from the load test, so we always have those there and you can filter on them.
B: For example, if something is going wrong with the login, you can really easily focus only on the requests for the login to analyze them and see what's actually going wrong there. Cool, so we can jump back to the Keptn Bridge. Yeah, the tests finished successfully, and now it has started to retrieve all the metrics; probably within another minute we should see here that the evaluation is done, and hopefully the build passed.
A: That's very cool. The other question that I had, and I'm not sure if you have it available, but for folks that haven't seen this before: how do you define the process? Because the way we architected Keptn is that we separate the concerns of the actual process from the tools that actually do things, and it's all connected via events. And maybe... I see that the evaluation-done event just came in, but what I wanted to point to is the shipyard file; that would be interesting.
B: I think this is the right one here, actually. Let me zoom in a bit, and then let's take a look at a shipyard file. The shipyard file is basically where we define the different stages. Right now we have development and production as stages here, but you can easily just add additional stages and define them, and Keptn will go through all of them in order and deploy to them. What you define for every stage is a deployment strategy.
B: Currently we do direct deployments, so the Helm service just replaces the currently running version. Once our NGINX ingress is supported there, we'll be able to switch to, let's say, a more advanced deployment strategy. You could do blue-green deployments, where you have two versions running and then switch over, or you could do canary deployments and switch over only a certain share of users, to see whether your new version is actually performing well with production traffic.
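A two-stage shipyard like the one shown might look like this. The stage setup comes from the demo; the strategy values follow the Keptn 0.6 shipyard format, and the production test strategy is left out since Patrick mentions no tests run there yet:

```yaml
# shipyard.yaml (illustrative): stages are deployed in order, top to bottom.
stages:
  - name: "dev"
    deployment_strategy: "direct"    # Helm service replaces the running version
    test_strategy: "functional"      # JMeter tests triggered after deployment
  - name: "production"
    deployment_strategy: "direct"    # could later become "blue_green_service"
```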
A: Maybe one word here, because you just mentioned blue-green deployments and you're waiting for our capability to reuse your NGINX for them: Keptn can already do blue-green and canaries, but we're using Istio for that right now, and in your case you already have an ingress and don't want to use Istio for it. That's why you're still holding off on that.
B: Basically, as we said already, Keptn follows a GitOps approach, so all the files for Keptn are stored in a Git repository and you can simply add or change them there. For example, here we have the SLO file we talked about, with all the metrics; if I want to change something, I simply change it in the repository and commit the change, and for the next artifact it will take the new numbers into account.
B: Basically, currently we have developers who develop the applications, and I'm the one responsible for all the tooling. We started with Keptn maybe three or so months ago, so currently I do all the tooling, but once we have a finished setup, let's say, we plan to hand that over to our developers as well. We're a pretty small team; we don't have that many developers, so it's actually not that hard to handle for us.
A: And it's a one-time thing, and you can even automate it from the pipeline. Something I've done in some of my work is, from the Jenkins CI pipeline, just make sure that there is a Keptn project and a Keptn service, and if there isn't, create it automatically; you can automate the whole thing. That's pretty cool, yeah.
A: One more question, and again this is an organizational question: you showed the SLIs and the SLOs. Who do you think is going to be responsible for defining them? Is it going to be you, because you're kind of the platform owner, or is this something where you think the developers will do it?
B: Right now it's just the platform owner, but yeah, the goal will be to have the developers actually define those, because they know how fast their code can run and what's a realistic value to actually achieve. But we also plan to utilize Dynatrace for that: we have all the monitoring data on how fast it's performing in production and what a realistic value is that we can currently achieve, and we can use that for our static thresholds.
B: We started with 0.5, I think, and now we're on the newest version, 0.6.2. We had to do one manual update, but that was actually pretty easy, and since 0.6.0 the updates have been automatic: it really was just running two commands from the documentation, it updates by itself, and I haven't had any issues with the updates yet.
B: Yeah, sure, I can talk about that. That's our outlook, what we mentioned before: currently we trigger our deployments to Keptn manually, and what we're really looking forward to is a new feature called the delivery assistant that will come with the next Keptn version, 0.7. That will allow us to push a new-artifact event to Keptn automatically from our GitLab pipeline, so it gets deployed, and then we can still have a manual approval between the stages.
B: As we see here, for every stage we'll see what version is currently running in which environment, and it will tell you when they're out of sync: when there's a newer version in dev that hasn't been promoted through staging yet, for example. Then you can manually approve it, and only then will it get deployed to the next stage. But you can iterate as fast as you want in, for example, the development environment, and only promote the change to production when you really want to make a deployment there.
A: So I think the cool thing is also what you said: your developers are constantly making changes, your GitLab pipeline runs, builds your container, it automatically gets pushed into dev, the tests are executed, the quality gates are evaluated, but then, let's say, the release manager over there, whoever that's going to be, can say: hey, I have five new builds in there; two of them failed, but the latest two are good, and we decide to promote, let's say, one of them to production.
What's
important
is
that
we,
you
know,
get
support
from
the
community.
One
way
of
supporting
us
is
following
us,
obviously,
on
social
media,
but
also
star
our
projects
on
github.
Whether
it's
you
know
cap
and
cap
know
some
of
the
same
folks.
Projects
also
join
our
select
channel
either.
You
know
you
can
find
the
Schleck
channel
link
and
also
other
things
on
Captain
community
I.
Think
for
slack,
there's
now
even
an
easier.
It's
a
selected
cap
and
or
SH
that
also
redirects
you
to
the
login
page
yeah
and.
B: If you're using Keptn already and haven't checked it out: the second link here, the JMeter extension that Andi wrote, we also utilize in our environment. You should check it out; it's pretty great. It makes it easier to modify the load patterns, and you can specify different load patterns for different environments pretty easily.
A: Cool, all right, let's see if there is anything else... not right now. Well, folks, if anything comes up later on, you know how to get hold of us and how to find us. The recording, thanks to Jürgen, will be up soon on the Keptn YouTube channel. Patrick, thank you so much for taking the time and for doing all this for us.