Description
Continuous Integration: Tips & Tricks - Paul Dragoonis, Independent Consultant
In this session Paul is going to share with you the tips & tricks he has learned building top-quality CI pipelines for large organizations such as the UK Government, banks, and the private sector. Using Continuous Integration already? Whether you're a beginner or a CI ninja, you'll gain value from this session as we cover a broad range of topics. We'll look at how we can leverage tools like Docker, Jenkins, and AWS to truly push CI pipelines to the limit. After this session, you'll walk away with new ideas, new tools, and lots of real-world experience to take with you on your journey of continuous integration.
Before we begin the talk, I'll just briefly introduce myself. My name is Paul Dragoonis. I'm a software engineering consultant from Scotland, a public speaker, a trainer and a coach. I also contribute to open source projects, and I've been doing this for many years now. Some of the things I contribute to include being part of the PHP team and the PHP-FIG standards body, and I'm a conference organizer.
The experience I have gained through my career has come from working in different positions within engineering teams. Sometimes I'm a quality engineer, and sometimes I'm a QE lead. I also work as a test engineer, a DevOps engineer or a build engineer.
Now, this is relevant because I have experience from the perspective of each individual member that comprises most software engineering teams, and I'm able to take that experience and build pipelines that satisfy each team member's requirements. Despite having all of this experience, we're all in this together, and on a daily basis I still swing between the two states of every programmer.
So, let's dive into the talk. This is, ideally, the dream of what a pipeline should look like in theory. But a pipeline is a reflection of the business processes and the technical processes coming together, and there are usually different technical teams involved, from an infrastructure team to different development teams, and they all have to come together under one roof, which is CI/CD. So it ends up looking more like this.
This is quite a typical pipeline: it has an initialization phase, a static analysis phase with linting, then Docker builds to produce images, then unit tests, and then it moves on to higher-level tests such as business value testing, functional testing, API tests and user interface testing.
When you're using Jenkins, I recommend using either the EC2 plugin or the Kubernetes plugin. They suit different types of use cases, but I recommend the EC2 plugin, because it's probably what you're most familiar with: a Linux server, and it's probably going to match the local environment on your machine too. So you have consistency between your local and your CI environment.
I recommend using spot instances because they're very cheap and cost-effective, and I also recommend having different types of instances that you can target using labels in your Jenkinsfiles. With a label, you can target the right type of machine for the right type of job that you're running.
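In a declarative Jenkinsfile, that label targeting looks roughly like this. The label names here, such as `docker-large` and `spot-small`, are hypothetical; they must match the labels you configure on your EC2 plugin instance templates:

```groovy
pipeline {
  // Run the pipeline on a worker provisioned by the EC2 plugin.
  // 'docker-large' is a hypothetical label for a beefier build machine.
  agent { label 'docker-large' }

  stages {
    stage('Unit tests') {
      // A lighter stage can be retargeted at a smaller, cheaper
      // spot instance via its own label.
      agent { label 'spot-small' }
      steps {
        sh './ci/unit-tests.sh'
      }
    }
  }
}
```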
Now that we're finished talking about hardware and infrastructure, we can come back up to the pipeline itself. Again, here is an example pipeline. When Jenkins is actually operating, there is a setting in Jenkins core that you can activate, which you can change from the default, which is maximum survivability, to performance-optimized mode.
This changes how the whole of Jenkins works: the way the main Jenkins master operates and how that master interacts with the workers. By default it's maximum survivability, and that's more suitable when you have masters or multi-masters involved, but I would say about 90% of all companies using Jenkins do not need this, and therefore you can enable performance-optimized mode.
So do a Google search for this, find the setting and switch it. I recommend this in order to make Jenkins responsive to the point where, when you push to your branch or merge a pull request and that fires a git trigger, your pipeline starts executing as quickly as possible. In order to achieve this, you should also disable tagging.
You should disable fetching the tags, and disable the git history by doing a shallow clone, because in the context of CI you don't need all of the history of your git repository; you just need the code at that point in time. Activating these two settings will make things much, much faster.
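With the Jenkins git plugin, those two settings can be applied in a checkout step roughly like this (the repository URL is a placeholder):

```groovy
checkout([
  $class: 'GitSCM',
  branches: [[name: env.BRANCH_NAME]],
  extensions: [[
    $class: 'CloneOption',
    shallow: true,  // shallow clone: skip the full git history
    depth: 1,       // only the commit we are building
    noTags: true    // don't fetch tags either
  ]],
  userRemoteConfigs: [[url: 'git@example.com:org/repo.git']]
])
```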
Here's an example of a cleaning task for the code that's living on the hard drive. The key point I want to make clear here is that we don't clean at the end of our jobs; always clean at the start. The reason for this shift is that, if you have a failure in your job, you don't want it to clean up the machine, so the workspace is still there for you to inspect.
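A sketch of that clean-at-the-start pattern, assuming the Workspace Cleanup plugin's `cleanWs()` step is available:

```groovy
pipeline {
  agent { label 'linux' }
  stages {
    stage('Init') {
      steps {
        // Clean at the START of the job, not the end: if a later
        // stage fails, the workspace is left behind for debugging.
        cleanWs()
        checkout scm
      }
    }
  }
}
```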
There are pre-built Docker images for the linting tools already on the internet, so you can just pull them down, put your code into the container, run the linter and get the results. This can help cut down your feedback cycle and spot errors much, much earlier in the process. There's also a Dockerfile linter, with which you can lint your Dockerfiles before you even try to do Docker builds.
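The talk doesn't name a specific tool, but hadolint is one such Dockerfile linter distributed as a pre-built image, and the same pull-and-run pattern works for code linters too (image names and the linted file are illustrative):

```shell
# Lint a Dockerfile without installing anything locally:
# the file is piped into a throwaway hadolint container.
docker run --rm -i hadolint/hadolint < Dockerfile

# Same pattern for code linting: mount the source into the
# container and run the tool against it.
docker run --rm -v "$PWD":/app -w /app php:8-cli php -l src/index.php
```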
In this example, you can see we're doing the Docker build, but there are other Docker images here that future steps in our pipeline actually need; for example, for the integration tests we need MySQL, Redis, etc. So, whilst the Docker build process is running and taking, say, two minutes, you can actually pack work into those two minutes, so that you don't pay that time penalty later in your pipeline.
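A minimal shell sketch of that overlap, pre-pulling the images that later stages need while the build runs (image names and the tag variable are illustrative):

```shell
# Kick off the pulls in the background while the slow build runs.
docker pull mysql:8 &
docker pull redis:7 &

docker build -t myapp:"$TAG" .   # the long-running step

# Wait for the background pulls, so later stages find the
# images already cached on this worker.
wait
```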
Cloud-based build services are, in general, fine, but if you're in a scenario where you already have the code on your CI worker and you already have Docker installed, then just go ahead and do the Docker build there; otherwise you're going to have to initialize a cloud-based build, and that's going to take a lot of time.
A
It's
going
to
be
slow
to
it
slower
to
execute
the
docker
build,
because
the
hardware
in
which
you're
getting
your
docker
build
process
from
is
slower.
It's
shared
hardware
and
you'll
often
find
that
you're
in
a
queue
based
on
how
much
money
you
pay
these
companies.
They
will
put
you
in
queues,
and
you
don't
want
to
avoid
that.
So
you
can
just
immediately
start
building
when
you're
doing
your
docker
build.
Give
your
give
your
ci
machine
enough
course
to
do
the
one
or
many
docker
images
that
need
building
give
yourself
a
threat
to
the
course.
A spot instance is very cheap on Amazon; you'll hardly even notice the cost compared to other solutions. So, because of this, go ahead and do your Docker building there. When you run the docker build command, you give your Docker image a tag. This is my simple recommendation for creating a tag: gitBranchName here is the name of a function, and all it does is account for the fact that in Jenkins, jobs can build pull requests and can build branches.
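The talk doesn't show the function body, but a minimal sketch of that kind of tag helper, assuming the branch name arrives in a variable such as GIT_BRANCH, could look like this (Docker tags may not contain `/`, so branch names like feature/login must be rewritten):

```shell
# Hypothetical sketch of a branch-to-tag helper: lowercase the
# branch name and replace '/' (invalid in docker tags) with '-'.
branch_to_tag() {
  printf '%s' "$1" | tr 'A-Z/' 'a-z-'
}

# Usage sketch:
#   docker build -t "myapp:$(branch_to_tag "$GIT_BRANCH")" .
```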
One last thing: when running the docker build command, add the --pull flag. It's going to pull down newer versions of your base images and keep things up to date, so I recommend you research this. Also, as soon as you have a successful Docker build, push that image to the registry. Don't wait until the end of the pipeline; that means you can pull the image down onto your local machine and debug the code. From here we're moving on to a harder topic of CI/CD.
This is where it's really important to get things right, and it's very easy to get things wrong. Test suites are usually the slowest part of any pipeline, so the type of those test suites, their quality, the way in which you build them, and whether you build them with the intent of how you're going to execute them on CI, will give you really good results.
I have one definition for every one of the test suites, and as you can see, I have an entrypoint script. So rather than your application doing what it does by default, running PHP or a web server or whatever it normally does, you can override that and execute your test suite.
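In docker-compose terms, that looks roughly like this: one service per test suite, each overriding the image's default entrypoint (service and script names are illustrative):

```yaml
services:
  unit:
    image: myapp:${TAG}
    entrypoint: ["/scripts/unit.sh"]       # run the unit suite, not the app

  api-tests:
    image: myapp:${TAG}
    entrypoint: ["/scripts/api-tests.sh"]  # run the API suite
    depends_on: [mysql-api]

  mysql-api:
    image: mysql:8                         # a database dedicated to this suite
```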
Now, when it comes to these individual test suites, we need to make sure that they're isolated, because the database for the UI test suite is probably not going to play nicely with the API test suite, and vice versa. So we need separate databases for this, and the way we do this is to achieve network and container isolation between each project. Docker Compose has a flag for this, called -p.
-p is basically a project prefix that gets applied to the Docker networks and containers; normally in Docker Compose it defaults to "default_". So if you've used Docker Compose before, you will have seen "default_" followed by the name of your containers and the name of the network.
So what this means is that we are executing docker compose run --rm api-tests, and docker compose run --rm ui-tests. Those are the different service names in the YAML file, each of which has its own respective entrypoint script that executes the respective test suite under the hood.
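Putting the -p flag and the per-suite services together, the two runs can be fully isolated from each other:

```shell
# Two runs of the same compose file, isolated from each other:
# -p gives each run its own network and container namespace,
# so each suite talks to its own database.
docker compose -p api-tests run --rm api-tests
docker compose -p ui-tests  run --rm ui-tests
```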
Here in this example I'm separating based on directory; you can also separate on tags. For example, if you're using BDD or Gherkin scenarios, those scenarios have tags at the top, so you can target things by tag rather than necessarily by directory. In this case we're targeting by directory, and therefore, if you have a test suite that takes one hour to execute and you split it into five parts, in theory, with the right hardware, you can get that down from one hour to 20 minutes.
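A sketch of that split as parallel stages in a declarative Jenkinsfile, with each segment pointed at one directory (the script path and directory names are illustrative):

```groovy
stage('Functional tests') {
  parallel {
    // Each branch runs one segment of the suite; with enough
    // workers these all execute at the same time.
    stage('Segment 1') { steps { sh './ci/run-tests.sh tests/checkout' } }
    stage('Segment 2') { steps { sh './ci/run-tests.sh tests/accounts' } }
    stage('Segment 3') { steps { sh './ci/run-tests.sh tests/search'   } }
  }
}
```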
If you cannot parallelize, then you can segment tests into smaller parts. And if you're doing UI testing, your bottleneck may not be the tests, but the number of browsers actively running in the test suite. So you can scale up the number of Selenium browsers that are running, and therefore get your results faster. At the end of your pipeline, you will want to run some things.
A
For
example,
you
want
to
notify
slack
when
you
have
a
successful,
build
or
a
failure
build.
This
is
an
example
of
my
slack
message,
which
I
have
with
links
and
handy
stuff,
all
the
information
you
probably
need
about.
Knowing
what
about
a
belt
without
actually
clicking
into
jenkins.
You
can
read
it
from
slack.
Also, if you remember from earlier, we are fetching our git repository without any tags. So in this example, when the branch is master, and only master, we fetch the tags. In some repositories I have seen, just fetching the tags can actually take up to five minutes. Then you can create your tag and push it.
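That master-only step could be sketched like this; the tag naming scheme is an invented example, and BUILD_NUMBER is the variable Jenkins exposes for the current build:

```shell
# Only on master: fetch the tags we skipped during the shallow
# checkout, create the new tag, and push it.
git fetch --tags
git tag "release-$(date +%Y%m%d)-$BUILD_NUMBER"   # naming scheme is illustrative
git push origin --tags
```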
Also, if you have failing tests for the UI and you're using Selenium browser containers, you can enable debug mode on these browser containers, and they will then run a VNC server inside each container. What that means is you can download a VNC client onto your machine and effectively remote-desktop from your local computer into the CI server, into the specific container, using SSH tunnelling to securely attach them together. Therefore you can see exactly what's happening, inspect the situation and try to solve it.
I store these in the Jenkins credentials manager, and we access them using the withCredentials keyword. Here we use a usernamePassword multi-binding, and therefore, as environment variables to my bash script, I have access to GATLING_LOGIN_USER and GATLING_LOGIN_PASSWORD. That's a secure way to temporarily use secret information without it being persisted or stored on the hard drive.
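A sketch of that binding in a Jenkinsfile; the credentials ID, variable names and script path are assumptions based on the talk:

```groovy
withCredentials([usernamePassword(
    credentialsId: 'gatling-login',          // ID in the Jenkins credentials manager
    usernameVariable: 'GATLING_LOGIN_USER',  // exposed only inside this block
    passwordVariable: 'GATLING_LOGIN_PASSWORD'
)]) {
  // The secrets exist as environment variables here and are
  // scrubbed afterwards; nothing is written to disk.
  sh './ci/load-tests.sh'
}
```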
Another thing about security: commonly, when you're doing a Docker build, you're probably pulling down dependencies from the internet using npm install or composer install. Now, in this situation, we need access to our own private repositories, and for this we need either SSH keys or special tokens.
So I recommend you research further how you can securely build your Docker images and use secret information without it ever being stored in the Docker image that actually gets shipped around the internet in the end. It is important to know that you can now do this in Docker land with BuildKit.
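With BuildKit this is done with build secrets: the secret is mounted only for the RUN step that needs it and never becomes an image layer. A sketch, with illustrative file names:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:18
WORKDIR /app
COPY package*.json ./
# Mount the npm token only for this command; it is not stored
# in any layer of the resulting image.
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
COPY . .
```

Built with something like: `DOCKER_BUILDKIT=1 docker build --secret id=npmrc,src=$HOME/.npmrc -t myapp .`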
So, to recap: we covered the right types of pipelines, and how different pipelines have different hardware requirements; tips on making Jenkins work very fast, by activating performance-optimized mode, along with the git settings for no tags and shallow cloning, which will make things much snappier and much more responsive; linting tips and fast feedback; the mentality of always preparing ahead of time, so that where there is extra waiting time you can pack work for future steps into it; and running your test suites in parallel, which will cut down a significant amount of time.
But when you have test suites that are still slow and you cannot parallelize them, then you can use segmentation to break them down into smaller pieces and run them in isolation. You can then start to break down test suites much more easily, and therefore you can also refactor them in smaller pieces, with a focus on debugging and maintenance.
So now we're going to move on to the last part of the talk, about developing high-quality pipelines. Here's an example, again, of another pipeline, a very busy one; there's a lot happening. So it's important to keep this stuff under control and to have quality in mind, and feedback in mind, when building the pipeline; otherwise it will take a long time to build and maintain.
Here's an example of the structure. The Jenkins declarative syntax is very clean and very simple; it's very similar to the YAML syntax you see from other providers of CI systems. And here's an example of what you don't want to do in your pipelines, which is to put the shell commands directly into your Jenkinsfile, or into a YAML file if you're not using Jenkins and are using Bitbucket or whatever.
Vendor lock-in is bad; we know this from the software industry. The point is that now you cannot rerun the tasks directly on the CI box or on localhost, because the commands are locked into the pipeline file itself. Therefore, when you're developing or maintaining the pipeline, the feedback cycle is very slow, because you cannot run each part of the pipeline in isolation, and that's what you want to do; you cannot do TDD on your pipelines.
This part is how we break the coupling between the vendor of the pipeline and the actual work happening. When you break that coupling, you can execute each step in isolation, independently, and for the most part you no longer actually need your CI software. So here's an example of what your Jenkinsfiles will look like: very clean, very simple, easy to read; we're executing, for example, unit-tests.sh.
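A sketch of that thin-Jenkinsfile shape, where every stage is just a call to a script kept in the repository (script paths are illustrative):

```groovy
pipeline {
  agent { label 'linux' }
  stages {
    // Each stage is a one-liner delegating to a script in the repo,
    // so every step can also be run on a laptop or directly on the
    // CI box, without Jenkins.
    stage('Lint')       { steps { sh './ci/lint.sh' } }
    stage('Build')      { steps { sh './ci/docker-build.sh' } }
    stage('Unit tests') { steps { sh './ci/unit-tests.sh' } }
  }
}
```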
This is what it looks like on the inside. It's basically like running the code directly, but now you can run this shell command, unit-tests.sh, on the CI box itself. You no longer need Jenkins to do this for you; you're in control and you're in charge. You can therefore also run this on your local machine: as long as you have Docker installed and you have the right images, you can go ahead and do docker compose run; in this case, we're running a service called unit.
Let's take a look at what's inside. Going back a step, here are the scripts. Now, the scripts are different from the steps: the steps are what Jenkins executes, while from the Docker Compose perspective, unit.sh is a file that lives inside the Docker image. These are examples of our scripts; they're really just abstractions, ways to repeat on CI the process of running the independent test suites.
Now, these files are being created inside the containers, and because we're using volumes, we want to persist those files from inside the containers back out to the outside world, onto our CI box. The problem is that the user executing the command inside the container has a different Linux user ID from the user ID on the host, and there are many solutions to this.
This is a very well-documented, well-covered topic in Docker. Basically, what I can recommend is that you create a user, to use just in CI or in general, that has a user ID of something like 1000. 1000 is basically the industry-standard ID: it's the default on Ubuntu, on the Mac, and on most Jenkins CI systems.
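A minimal Dockerfile sketch of that approach (base image and user name are illustrative):

```dockerfile
FROM php:8-cli
# Create a user whose UID matches the typical first user (1000) on
# the host, so files written through volume mounts stay readable
# and deletable from the host side.
RUN useradd --uid 1000 --create-home ci
USER ci
```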
So that's important to understand about permissions: if you're trying to clean up your Jenkins job and you're getting "permission denied, cannot delete files", it's probably because of this issue.
Thank you for your time and for coming to my talk; I hope you learned some things. I do consulting on this topic: I provide companies and teams with support, training and workshops so that you can build pipelines yourself, as well as helping build them for you. So if you're having any issues, if you have slow pipelines or other problems with your pipelines, you can contact me via email, Twitter or LinkedIn, and of course you can ask me questions in the Q&A here at the cdCon conference.
I hope that we covered enough topics, in a small amount of time, to pique your interest enough to go and research and learn more about them. And if you would like to know more, then please get in touch; I'll be more than happy to help. Thank you, and take care.