From YouTube: GitLab Requirements Traceability Walkthrough
Description
In this video, I'll walk through the requirements traceability provided in GitLab, utilizing CI/CD Pipelines for running tests, linking tests to requirements via a traceability matrix, and finally showing the updated requirements list within GitLab.
Link to the Requirements Management Demo Project - https://gitlab.com/gitlab-org/requiremeents-mgmt
Hello, everyone. My name is Mark Wood, and I'm the Senior Product Manager at GitLab for the Plan stage's Certify group, which includes Requirements Management. We've made a lot of updates to requirements management over the last few releases, and I wanted to create this video to show the capabilities currently available to users who are interested in performing requirements management and utilizing requirements for their particular projects.
This project simulates a situation where there are two tests running in the test folder, and these tests output a pass or fail status. Because this is a demo, they just output "pass" or "fail" right now, but they could obviously be updated to fully test your source code. The requirements, which we can see over here on the left side, are listed, and they are linked to the tests through a requirements trace matrix.
The tests run in the test stage of the CI/CD pipeline, and they output results files — I've chosen to name them `<test name>.results`. These are stored as artifacts in the test jobs, and by default GitLab passes artifacts from one stage to the next, so the results files are passed from the test stage of the pipeline to the requirements-check stage.
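The pipeline layout described above could be sketched in `.gitlab-ci.yml` roughly as follows. This is a minimal illustration rather than the demo project's actual file: the job names, the `tests/first_test.sh` and `tests/second_test.sh` scripts, and the script paths are assumptions based on the description.

```yaml
stages:
  - test
  - requirements_check

first_test:
  stage: test
  script:
    - ./tests/first_test.sh > first_test.results   # writes "pass" or "fail"
  artifacts:
    paths:
      - first_test.results

second_test:
  stage: test
  script:
    - ./tests/second_test.sh > second_test.results
  artifacts:
    paths:
      - second_test.results

requirements_check:
  stage: requirements_check
  only:
    - master          # only update requirements on commits to master
  script:
    - python parse_coverage.py
  artifacts:
    reports:
      requirements: requirements.json   # consumed by GitLab's requirements report feature
```

The `.results` artifacts from the test stage are available to `requirements_check` automatically, which is the artifact pass-through behavior described above.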
The requirements-check stage of the pipeline only runs if there's a commit to the master branch — that's how I have it configured. You don't want to update your requirements if you're doing work on a side branch. It runs a simple Python script I've created called parse_coverage, which utilizes the trace matrix I just showed and correlates the pass/fail status of the tests to the requirements.
So basically, this is a Python script — it's not overly complicated. It takes in the trace matrix, looks for all the `.results` files, and builds a Python dictionary of pass/fail statuses based on the tests. There are two tests in this demonstration, so one passes and one fails. It then generates the appropriate JSON output by correlating those tests, through the trace matrix, to the requirements themselves.
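A minimal sketch of what such a script might look like — this is not the actual parse_coverage script from the demo project; the file names, the two-column CSV trace-matrix format, and the output path are all assumptions based on the description above.

```python
import csv
import glob
import json

def parse_coverage(matrix_path="traceability_matrix.csv",
                   output_path="requirements.json"):
    # Build a dict of test name -> "pass"/"fail" from the *.results files
    # left behind as artifacts by the test stage.
    results = {}
    for path in glob.glob("*.results"):
        test_name = path[:-len(".results")]
        with open(path) as f:
            results[test_name] = f.read().strip()

    # Correlate each requirement to its test via the trace matrix
    # (assumed CSV columns: requirement number, test name).
    report = {}
    with open(matrix_path) as f:
        for req_id, test_name in csv.reader(f):
            status = "passed" if results.get(test_name) == "pass" else "failed"
            report[req_id] = status

    # Emit the JSON that GitLab's requirements report feature expects.
    with open(output_path, "w") as f:
        json.dump(report, f, indent=2)
    return report
```

After the test jobs drop their `.results` artifacts into the working directory, calling `parse_coverage()` in the requirements-check job produces the `requirements.json` report.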
Based on this correlation, the JSON is output, and that JSON is passed into the requirements reporting functionality we have documented here — the requirements report feature of the CI/CD pipeline. I've also put a little helper here: the requirements report feature expects properly formatted JSON in the following format, basically just the requirement number and then passed or failed. So in this example right here, what we're saying is that the testing for requirement 6 has passed, and therefore requirement 6 is marked as satisfied.
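For reference, the format described above is a flat mapping of requirement number to status. A report covering the example might look like the following (the second entry is purely illustrative):

```json
{
  "6": "passed",
  "7": "failed"
}
```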
We'll also go take a look at the repository itself and its files. We have the two tests and we have the trace matrix, so we'll go ahead and update the trace matrix here. If I click on it, we'll see that right now requirements one, two, three, four, five run down the left column, with requirement one mapped to first test and requirement two to second test. And I believe right now first test is passing and second test is failing in my testing scripts.
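The video doesn't show the exact file contents, but based on the mapping described — requirements 1 through 5 in the left column, requirement 1 tied to first test, and requirements 2 and 3 tied to second test — a plausible trace matrix file might look like this (the rows for requirements 4 and 5 are guesses, and the real file format may differ):

```
1,first_test
2,second_test
3,second_test
4,first_test
5,first_test
```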
A
So
if
we
look
at
our
requirements
that
will
actually
correlate
because
we
see
that
requirement,
one
is
satisfied.
Two
and
three
are
failing
and
if
we
go
back-
and
we
take
a
look
at
that
file
in
the
trace
matrix
here-
actually
I
have
it
in
a
separate
tab,
we'll
see
that
one
is
associated
with
first
test
and
two
and
three
are
associated
with
second
test.
So
we
see
right
now
that,
since
second
test
is
failing
requirements,
two
and
three
are
marked
as
failed
in
our
requirements
list.
Here's my new branch, and here's my new merge request, with a basic description.
I can actually verify that — yes, we moved it from second test to first test, which is just fine. Knowing that, we'll see that a pipeline was kicked off. Now, the way I have my GitLab CI YAML file — I'll go ahead and open that here in a separate window so we can look at it as we're browsing through things.
A pipeline is currently running. We'll go ahead and open that pipeline and show that we're just running both tests, which is exactly what we expect: since I'm not merging to master — I'm merging to a separate branch — we don't want to update the requirements over here. That would make no sense, because this change may get rejected; we may never actually merge it, so updating the requirements list wouldn't be right. We have that configured right here, such that the check-requirements stage of the pipeline only runs if we're referencing master.
So that's how this works, and you could obviously change it. You could have multiple different branches; you can name your branches however you want. This is very configurable based on our current CI/CD pipeline configuration, which is pretty advanced. So I'm going to take this merge request, refresh it, and see that this pipeline has passed — a little check box — and then go ahead and click the merge button.
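As a sketch of that configurability, the branch restriction could also be expressed with the `rules:` keyword instead of `only:`. The branch names here are just examples, not the demo project's actual configuration:

```yaml
requirements_check:
  stage: requirements_check
  rules:
    # Run the requirements update only on the default branch or release branches.
    - if: '$CI_COMMIT_BRANCH == "master"'
    - if: '$CI_COMMIT_BRANCH =~ /^release\//'
  script:
    - python parse_coverage.py
```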
What happens when I do that is that a separate pipeline is spawned to perform the merge, and you'll see right here the pipeline kicks off, and first test and second test are running again. The tests are re-running with the merge to master, and we'll see the update I made if we go back.
Where was I — the tests? No, we were in the trace matrix, weren't we. In the trace matrix, we'll see that I've updated requirement two; it's still showing the old mapping because the merge hasn't finalized yet. But if we look at our merge request, we can see that it changes requirement two from second test to first test, and we know that first test is passing in this example.
Once this completes, we should see requirement two in our list here go from failed to satisfied. So we go back to our pipeline, and we'll see that these two tests have passed. For demonstration purposes, I have this pipeline set to require a manual run, so I actually have to click go — that way I can show what's going on. You could automate this.
The job says it succeeded, which is excellent. So we go back to our requirements list, and we'll note here that requirement 2 is now passing, which is excellent. So that's one method of updating. Now let's take a slightly different approach. Let's say that we have a group of developers and testers writing and fixing the tests. So instead of changing the trace matrix if an error was found there, we're instead going to update our tests. Right now, second test is failing.
You want to understand the completeness of your product, so you'll be looking at your requirements list and you'll say: hey, right now my requirements list still has a number of failing requirements, which means our product is not complete yet. We're not done; we need to get done, and the way to get done is to verify that these requirements are met by our product.
If we go to our merge request, we'll see that, as before, the tests are running in this pipeline, so we can get the results for these tests and see whether they're passing or failing. It says the job succeeded again. I can go ahead and download the artifacts and check them — so if there were a huge test report here, I could download and inspect it; in this case, it's just that simple script's output.
So we go ahead and look at this pipeline: we'll see that we again have the check-requirements job and the tests running, and we'll let them run to completion. While these tests are running, I also wanted to bring up some of the other features we've added. We now have the ability to export and import requirements. Currently, exporting includes all of the fields, which is fantastic, but we are going to make that selectable in an upcoming version, so you'll be able to choose which fields get included in your export.
The goal of this is that a software team could use a round-trip technique: import requirements from an external source; do their development, test generation, and test development against those requirements natively within GitLab itself; and then, once they have completed this and have a complete set of testing, a complete product, and satisfied requirements, export this as an artifact back to either a systems team or a higher tier in their organization.
And it has succeeded, which is excellent. So if we go back to our requirements list, what we should see now is that all the requirements of the product are satisfied — our product is complete. And we could go ahead and, as I said before, export this to a CSV file; it will actually email that to my address as an attachment, and then I could utilize that to bring the requirements into a different system if I so desired.
Thank you so much for your time today. If you have any questions, please feel free to comment — I will be checking the comments for this video. I will also put a link to this repository in the video description below, and I would love to see any collaboration, questions, or feature requests you might have; I'm always open to talking. So thank you so much for your time today.