From YouTube: SKILup Day: Continuous Testing - It's all about feedback
Description
Continuous testing involves performing software tests at every stage of the CI/CD pipeline. This tutorial will explore why feedback is important in every stage and what we can do with it.
My name is Izzy Ganbaro, Senior Technical Marketing Manager at GitLab, with over 20 years of experience in the IT industry and a focus on DevOps and continuous integration, continuous delivery, and especially the testing area, and I will be delighted to deliver for you today a live demo of continuous testing with GitLab.
Previously, I was Director of R&D Infrastructure, where I led the DevOps department and where I found my passion for CI/CD. In previous roles I helped transition R&D from waterfall to agile, and I was also Director of QA and Director of Product Management at numerous companies. On a personal note, I'm also the mother of three young girls, and I'm a certified pastry chef, though I never worked as one.
A
So,
let's
talk
about
what
we
have
in
store
today,
so
a
little
bit
about
our
agenda,
we're
going
to
start
with
the
presentation
and
then
we're
going
to
be
we're
going
to
follow
up
with
a
demo
by
itsic.
So
we'll
talk
about
continuous
testing
challenges
for
testing
visualizing,
test
coverage,
review,
apps,
blue
green
deployments
canary
releases,
feature
flags
and
then
a
really
exciting
demo
from
itsic
great.
So
let's
get
started.
Our presentation is called "It's all about feedback." Bill Gates said, and I'm quoting: "We all need people who give us feedback. That's how we improve." And really, that's why we're talking about feedback as part of continuous testing: because we constantly want to improve. Improve the quality of our code, improve our efficiency, improve the speed of our delivery, and so on.
Continuous testing is part of continuous integration, and it is located in the verification stage of the DevOps workflow. It comes right after the build phase, where the build artifacts are tested, verified, and approved before moving forward. In terms of where we are in the CI/CD pipeline, we have the build phase and the unit phase, and then we have the testing phase. So this is right after the build, smack in the middle of the continuous integration stage, and before we go into the CD pipeline.
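To make that position concrete, here is a minimal, purely illustrative `.gitlab-ci.yml` sketch (the job names and echo commands are placeholders, not part of the talk) showing a build stage, a test stage that verifies the build artifacts, and a deploy stage that belongs to the CD side:

```yaml
stages:
  - build      # produce the build artifacts
  - test       # continuous testing: verify the artifacts before they move on
  - deploy     # hand-off to the CD side of the pipeline

build_app:
  stage: build
  script:
    - echo "compile and package the application"

verify_app:
  stage: test
  script:
    - echo "run unit and integration tests against the build artifacts"

deploy_app:
  stage: deploy
  script:
    - echo "promote the verified artifacts to the next environment"
```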
A recent study that I read reported that 70% of organizations have adopted agile, but only 30% actually automate testing. In other words, testing processes remain stuck in the past. And as organizations invest considerable time and effort into transforming their development process to meet today's and tomorrow's business demands, most legacy software testing tools and processes are unfit for the type of continuous testing that agile and DevOps require. The first challenge that I want to talk about is the inability to shift left.
Tests usually cannot be implemented until late in each sprint, and that's because of the stability of the feature that's being implemented. The UI is dependent on components such as backend APIs, and it just takes a really long time until everything gets developed and is finally completed and available for testing.
That means that teams lack instant feedback on whether their changes impact the existing user experience, so usually there's going to be some kind of compromise in terms of how many tests, or which tests, are going to be run per build, and then the test suite expands as we mature in the stage of the pipeline that we're at. When we're just testing a specific build that may or may not be stable enough to reach production, we usually confine ourselves to a smaller, well-defined set of tests.

The next challenge is high maintenance, and that's a big one. UI tests require a considerable amount of rework to keep up with the pace of the frequent changes to the application, and this results in really slow and burdensome test maintenance, which may cause some automation efforts to be abandoned.
A
There's
a
lot
of
churn.
It's
not
only
related
to
ui
test
ui.
That's
a
really
good
example,
because
if
you
have
ever
tried
to
automate
ui
tests,
you
know
that
some
of
them
relate
to
coordinates
on
the
page
and
every
single
change
that
you
make
will
make
this
automated
automated
test,
just
not
relevant
anymore,
because
it
just
simply
won't
work.
But
it's
not
only
ui.
Software
is
rapidly
changing
and
the
automated
student
tests
have
to
change
together
with
the
code
in
order
to
keep
up
with
the
times.
Another challenge is test environment instability. This has to do with the fact that there are inaccessible dependencies, test data issues, and so on, which commonly cause timeouts, incomplete tests, sometimes false positives or false negatives, and other inaccurate results that prevent you from delivering the fast, quality feedback that agile and DevOps require.
A
Now
that
agile
practices
have
matured
and
devops
initiatives
have
entered
the
corporate
agenda.
Continuous
integration,
continuous
testing
and
continuous
delivery
have
emerged
as
a
key
catalyst
for
enable
equality
at
speed
of
the
three
continuous
testing
is
by
far
the
most
challenging
of
them
all
to
overcome.
So I want to show you a little bit of what you can do inside GitLab, but this is also relevant for any tool of choice. With continuous testing being part of the continuous integration pipeline, it's also possible to collect analytics and history for every pipeline that ran.
In this slide, what we can see is that for a specific merge request, which may contain one or more commits, browser performance metrics and load performance testing results are compared to previous deployments and are visible directly from the merge request itself. So you can make the right decision on how to continue, and since this is done for every single individual change, it's really easy to pinpoint a problem if you find some kind of degradation in the analytics and quickly fix it before you actually ship the change on to the next stage of delivery.
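These merge request widgets are typically fed by report artifacts. As a hedged sketch (the job body and script are hypothetical; the report keyword is GitLab's browser_performance report type on recent versions), a job can attach its results like this so GitLab can compare them against the previous deployment:

```yaml
browser_performance:
  stage: performance                  # assumed stage name for this example
  script:
    - ./run-browser-perf.sh "$TARGET_URL" -o browser-performance.json   # hypothetical wrapper script
  artifacts:
    reports:
      # GitLab reads this report and renders the comparison in the merge request widget
      browser_performance: browser-performance.json
```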
You can also view code that's not covered, directly from the code itself. You collect the test coverage information from your favorite testing or coverage analysis tool, and you can visualize this information inside the file with a diff view, very similar to the diff view that you can see between code changes. It will also show you which lines are actually covered by tests and which still require test coverage. You can see here in this tooltip that it says there's no test coverage, and it shows you that maybe some additional work needs to be done before you merge this merge request. Since this is directly connected to the code in the repository, it's also really easy to fix the missing code coverage directly from here and to ensure that high-quality code is shipped.
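Under the hood, this diff-view overlay comes from a coverage report artifact. A minimal sketch, assuming a test runner that emits a Cobertura-format XML file (the command and path are placeholders; older GitLab versions used a cobertura report key instead of coverage_report):

```yaml
unit_tests:
  stage: test
  script:
    - mvn verify                       # hypothetical: produces a Cobertura coverage XML
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: target/site/cobertura/coverage.xml   # placeholder path to the generated report
```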
Another really nice thing that we have in GitLab is called review apps. Everything that we talked about until now is directly from the pipeline or directly from the merge request, and review apps follow that same line. Review apps provide an automatic live preview of the changes made in a feature branch. What it does is spin up a dynamic environment for your merge request, which also allows designers, product managers, and other people on the team to see the changes without needing to check out your branch and then run it in a sandbox or a dedicated environment. It really gives you a production-like environment to see what a specific, very pinpointed change will look like before it's merged, and it also lets people comment on it. We also have something called visual review, and that's this thing right here, which allows your peers and even customers to deliver feedback directly from the review app. That review app is the specific environment that was spun up dynamically, and it's basically a URL that you can email or Slack to someone so they can just take a look and give you feedback.
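A review app is essentially a job that deploys the branch to a dynamic environment named after the branch. A minimal sketch, assuming hypothetical deploy and teardown scripts and a placeholder domain, that spins the environment up per merge request and tears it down with a manual stop job:

```yaml
deploy_review:
  stage: deploy
  script:
    - ./deploy-review.sh "$CI_COMMIT_REF_SLUG"            # hypothetical script that deploys this branch
  environment:
    name: review/$CI_COMMIT_REF_SLUG                      # one dynamic environment per branch
    url: https://$CI_COMMIT_REF_SLUG.review.example.com   # the URL you can email or Slack for feedback
    on_stop: stop_review

stop_review:
  stage: deploy
  when: manual
  script:
    - ./teardown-review.sh "$CI_COMMIT_REF_SLUG"          # hypothetical cleanup script
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
```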
The next thing that I want to talk about, in terms of feedback and testing feedback, has to do with the CD side, so more the deployment side. I want to introduce the concept of blue-green deployments. The idea here is that you deploy to two different environments. Blue is exposed to users as the production environment, and development continues on the green environment. So it starts with the blue, and then developers continue to develop on that branch and into that environment, and once you're ready to deploy the green, with all the different changes that the developers have made, you're ready to swap environments. So again, this is one shot: everything swaps from one environment to the other. The biggest downside is that, if something goes wrong, all the development has to be rolled back, and not just the faulty feature.
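GitLab doesn't impose a single blue-green mechanism, but the pattern can be expressed in a pipeline. A purely illustrative sketch, assuming hypothetical deploy and traffic-switch scripts and a placeholder URL, where new changes always land on the idle environment and the swap is a separate one-shot manual job:

```yaml
deploy_green:
  stage: deploy
  script:
    - ./deploy.sh green                 # hypothetical: deploy the new build to the idle (green) environment
  environment:
    name: green
    url: https://green.example.com      # placeholder URL used for verification before the swap

swap_to_green:
  stage: deploy
  when: manual                          # the one-shot swap described above
  script:
    - ./switch-traffic.sh green         # hypothetical: repoint the load balancer from blue to green
  environment:
    name: production
```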
A similar but more advanced concept is called canary releases, still on the deployment side. In a canary release, you start with all your machines, containers, or pods at 100% on the current deployment, and when you're ready to try out your new development, you want to try it on a subset of users, let's say two percent. So let's take an example: say I have 100 machines. Two of those machines, two of the servers, because we're talking about two percent of the traffic, will get the new deployment, so we're going to send two percent of the users there, while the other ninety-eight percent are still on the old deployment, the blue.
It's going to be a little bit difficult to figure out the logic of how you want to route those users, and you'll need to decide whether the user routing decision needs to be sticky, which it probably does. So you can decide that it goes by a specific subnet, or IP, or location, and so on. You have maximum control here, because you're slowly rolling out the deployment, monitoring the feedback, seeing that everything is okay, and then increasing the rollout.
If anything does go wrong, you can just drain the traffic back from wherever you're sending it and route it back to the production environment, to the old servers, and you can either scale up or scale down according to the feedback that you receive. This is similar to blue-green in the sense that if you do decide to roll back, it still rolls back the entire deployment, and not the single feature that may be faulty.
A
This
is
what
canary
releases
look
like
in
a
good
lab,
so
you
can
see
you
have
a
bunch
of
pods
here
and
you
can
see
some
have
new
environments,
some
of
the
old
ones.
You
can
import
abort
the
increase
in
the
rollout.
You
can
even
roll
back
and
you
can
see
the
health
of
each
one
of
the
pods,
so
it's
very
convenient.
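The exact rollout mechanics depend on your cluster setup. As a rough, illustrative sketch only (the deploy script, its flags, and the replica counts are placeholders, not GitLab's built-in canary implementation), a canary can be modeled as two manual jobs: one that shifts a small slice of pods to the new version and one that promotes it to everyone:

```yaml
deploy_canary:
  stage: deploy
  when: manual
  script:
    - ./deploy.sh --track canary --replicas 2    # hypothetical: roughly 2% of the 100 pods
  environment:
    name: production

promote_to_all:
  stage: deploy
  when: manual
  script:
    - ./deploy.sh --track stable --replicas 100  # hypothetical: roll the new version out everywhere
    - ./deploy.sh --track canary --replicas 0    # drain the canary pods
  environment:
    name: production
```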
The next concept is feature flags. A feature can be shipped even before it's completed, and that's one of its strengths: a feature flag can be used to hide, enable, or disable the feature during runtime, so unfinished features can be hidden or toggled off so that they don't actually appear in the user interface and don't interfere with what the users interact with. This allows many small, incremental versions of the software to be delivered without the cost of constant branching and merging. Feature flags, if you think about it, work like a logical gate in your code, like an if/else: some users will match the criteria and some will not. The users that match the criteria will be able to see and access the new functionality, and the others will not. Very similar to canary, it allows us to get immediate feedback on the new functionality, and once we gain confidence, we can expose the functionality to more and more users until we hit 100%.
A really nice benefit is that it allows you to roll out features gradually, slowly exposing them, and if something goes wrong you can confine the feature to a small audience or even to a different environment. Even if something is not ready yet to be deployed to production, it can still live in production, but it's just not accessible, because it's hidden behind a toggled-off feature flag. So it's a very powerful feature.
You can apply different flags and strategies for different environments, and you can even set multiple strategies per environment. The feature flag dashboard, which we can see here in the slide, gives you a high-level view of what's active and what's not, and exactly which strategy is applied in which environment.
What a great presentation! So now, let's demo GitLab. I will give you a live demo of how you can integrate continuous testing into a GitLab CI pipeline. For this demo, I created a special pipeline with a few kinds of tests. The first kind is static code scans for security, compliance, and code quality. The pipeline will also deploy a test environment via its Kubernetes integration, and against this environment we will run some automated tests, such as accessibility testing, functional testing, and performance testing.
All the test results will appear in a single place, in the merge request. The last thing that I want to show you in this demo is our latest feature related to testing, code coverage visualization, which helps developers and reviewers identify lines of code that don't have code coverage. The demo is starting. This is my repository in GitLab. Now I'll make a code change to my Java application, a very small change. This is how we do continuous integration: we always push small changes.
The pipeline will then check for vulnerabilities in the container. It will also run SAST: it will scan the code and make sure the code doesn't have any known vulnerabilities, and it will also scan my third-party dependencies to make sure that they don't have any vulnerabilities either. License scanning scans all the licenses of my dependencies and makes sure that we are not bringing an unapproved license into my application. Secret detection scans the code and makes sure a developer didn't unintentionally add any credentials, passwords, keys, or API tokens to the repository, which can be a big issue, especially if it's an open source project.
Another thing is that after this stage passes, GitLab, via its Kubernetes integration, will deploy a testing environment for me, so I will immediately have a live instance of my branch that I can run some automation against. This is very valuable for me, because I can now run automated tests during my pipeline; I don't have to wait for IT or QA or anyone else to create a testing environment for me.
I have a dedicated test environment just for my branch, and the pipeline will run accessibility testing against it; this check makes sure my website is accessible. It will also run dynamic application security testing, which will check for cross-site scripting and other vulnerabilities.
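The DAST job here comes from GitLab's bundled template; what it mainly needs is the address of the freshly deployed test environment. A hedged sketch (the DYNAMIC_ENVIRONMENT_URL variable is an assumption standing in for however your deploy job exports that address, for example via a dotenv report artifact):

```yaml
include:
  - template: Security/DAST.gitlab-ci.yml    # bundled DAST job definition

dast:
  variables:
    # Assumption: the deploy job exposes the test environment's URL in this variable;
    # DAST_WEBSITE tells the scanner which site to attack.
    DAST_WEBSITE: $DYNAMIC_ENVIRONMENT_URL
```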
The functional testing job triggers a matrix of several jobs.
I also added a performance stage in this example, and I have two kinds of performance tests here. The first is load performance testing on the server: in my test script I defined 100 virtual users that will hit the server, and we measure the time it takes to get the first byte. The other test is browser performance testing, which checks the browser rendering and the time it takes to render the page. All right, I can see the pipeline completed successfully.
The security scans found one high-severity vulnerability. SAST detected no vulnerabilities, and neither did dependency scanning; however, container scanning detected one high-severity vulnerability which we need to fix. We can also open the vulnerability and get more details, such as the identifier.
The license compliance test detected no new licenses. We can open the report and see the list of licenses that we are using; they all look good, meaning we don't have any policy against those licenses. Accessibility scanning detected one issue; I can expand it and see what the issue is. So before I can click merge, I will want to fix some of the issues that were found. I will definitely want to fix the container vulnerability, because its severity is high, and after I fix all of those issues I will be able to complete the merge request.
Before I merge my code to the master branch, I will assign this merge request to one of my peers, who will open the code review and review my code.
Now I want to show you how I defined all of this CI pipeline, so I will open the .gitlab-ci.yml file. This is the CI/CD configuration file in GitLab. As you can see, in GitLab the configuration is done via code, which is great: developers don't need to rely on a DevOps engineer to make changes. This allows developers to adjust this file themselves, for example if they want to add more tests or add deployments to more environments; they can easily make that change on their own. So let's see how I defined this file. Here I defined some global variables.
For most of the jobs I didn't have to write them myself; I just included them from the templates. Some of the out-of-the-box jobs I wanted to customize on top of the templates. This is an example, for the deploy-to-testing job: this is the job name, this is the stage it belongs to, and here are the scripts that I want the job to run. In jobs we can also define artifacts; these are the outputs of the job, which you will even be able to download after the pipeline completes its run.
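Put together, the shape described here looks roughly like this: bundled scan jobs pulled in with include, plus a hand-written job that names its stage, its script, and the artifacts it leaves behind. The template paths are GitLab's standard bundled ones; the custom job, its script, and the artifact file are placeholders:

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml
  - template: Security/Container-Scanning.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml
  - template: Security/License-Scanning.gitlab-ci.yml

deploy_testing:                        # hypothetical job name
  stage: deploy                        # the stage this job belongs to
  script:
    - ./deploy-test-env.sh             # hypothetical script that deploys the branch to a test environment
    - echo "https://$CI_COMMIT_REF_SLUG.test.example.com" > environment_url.txt
  artifacts:
    paths:
      - environment_url.txt            # output of the job, downloadable after the pipeline completes
```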
This is the definition of the matrix functional testing, and you can see two GitLab keywords here: the first is parallel and the second is matrix. I can define a matrix of environments, and this will create a matrix of tests for all of those combinations.
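A hedged sketch of that parallel matrix (the variable names, values, and test script are placeholders; parallel and matrix themselves are the GitLab keywords pointed out above):

```yaml
functional_tests:
  stage: test
  parallel:
    matrix:
      - BROWSER: [chrome, firefox]        # placeholder dimension 1
        PLATFORM: [linux, windows]        # placeholder dimension 2
  script:
    - ./run-functional-tests.sh "$BROWSER" "$PLATFORM"   # hypothetical runner, one job per combination
```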
This is the load performance job definition. We execute load performance testing with an open source tool called k6. I defined the job name, and I defined some environment variables; those are parameters that I'm sending to the tool. The first one is the file that contains the URL of the application, and I get this file as an artifact from the previous job, the deploy-to-testing job; this is the file where we store the URL of my testing environment. The second parameter that I send is the script name, and it's a folder in the repository.
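As a sketch of that wiring, assuming GitLab's bundled load performance template and its K6_TEST_FILE variable, plus the environment_url.txt artifact produced by the deploy job (the script path and file name are placeholders):

```yaml
include:
  - template: Verify/Load-Performance-Testing.gitlab-ci.yml   # bundled k6-based job

load_performance:
  variables:
    K6_TEST_FILE: load-tests/ramp-up.js        # placeholder: the k6 script kept in the repository
  before_script:
    # Assumption: the deploy-to-testing job stored the environment's address in environment_url.txt;
    # the k6 script can then read it, for example via __ENV.ENVIRONMENT_URL.
    - export ENVIRONMENT_URL=$(cat environment_url.txt)
```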
So again, I'm opening a merge request, this time for a different project, and if I go to the Changes tab I can see the changed code on the right side. This green line shows me a line that has code coverage, but these lines here, for example, don't have code coverage. As a reviewer, this saves me a lot of time: I don't have to go to JSON files or any other XML files to check which lines don't have code coverage; GitLab visualizes it for me.
Thank you so much for the demo, Itsik, this was great. We both want to thank you very much for joining us today, and we invite you to meet us in the networking lounge. We're there to answer any questions or to take any feedback. We're waiting to see you there.