From YouTube: Hands-On GitLab CI Workshop - EMEA Time Zone
Description
Watch this hands-on GitLab CI workshop and learn how it can fit in your organization.
Learn how to build simple GitLab pipelines and work up to more advanced pipeline structures and workflows, including security scanning and compliance enforcement.
A
Thank you for joining us today. We're excited to go through the CI content with you all. I'm joined by my colleague, Steve Graham, who will be our presenter today. Before I kick it over to him, just a couple of housekeeping items. First off, we are recording this hands-on session, so you'll be able to review the content later; we'll send out the deck as well. If you have questions that come up throughout, please go ahead and put those in the Q&A portion of your Zoom window.
B
Thank you, Taylor. My name is Steve Graham. I'm a customer success engineer on GitLab's scale team, and I'm actually on Taylor's team. I'm excited to get a chance for us to dive in today and talk about GitLab CI and some of the basics of getting started with it. Let me go ahead and share a screen here real quickly.
B
There we go; way too many windows open. Can you see my screen? Okay, you've got it. Okay, great. All right, so it's really important today, before we get started, that we get your provisioned subgroup underneath our Learning Labs namespace up and operational, and to do that, you're going to need to have a gitlab.com account.
B
If
you
don't
have
one,
please
take
a
minute
and
go
sign
up
it.
The
workshop
that
we're
going
to
be
provisioning
for
you
today.
Normally
we
have
these
set
up
to
where
they
last
a
day,
maybe
two
this
one's
provisioned
for
four
days
and
the
reason
why
is
because
there's
a
fair
amount
of
content
to
get
through
that
we're
going
to
want
you
to
have
time
to
work
through
on
your
own
and
then
there's
also
some
optional
content.
B
If you want to get into things like security and compliance, you can also work on that on your own. Toward that end, as Taylor said, we're going to be sending out an email later today that's going to have a link to the slides that we're using. We're also going to be sending a link to the recording, so that you can come back and review anything.
B
We're also going to, at some point during the day, in a separate email, be inviting you to join a Slack channel. Now, the Slack channel is going to be active for about a week, and we just want you to have access to the customer success engineers, so that if you run into a problem or you need to ask a question related to the things that you're working on, you'll have somebody that you can reach out to. So, one other real quick point.
B
So again, we're going to be going through some content today, but there's also going to be some optional content you can work through on your own that's related to security and compliance, so please be on the lookout for that.
B
If you'll bear with me for just one quick minute: a lot of the stuff that we're going to be doing today relies on the old navigation. I'm on the old navigation here, but I want to show you real quickly how to switch back and forth. If I want to turn the new navigation on, which will be the default for most of you at gitlab.com, I can do that here, and you'll see that the navigation switches. We've done a lot of work on this.
B
Our UX team has been pretty seriously engaged on this, trying to pare it down to the point where it's as efficient as they can make it. But just know that if you have to switch to the old navigation, you can simply turn the new navigation off, and then you're going to get the old menus here, which are going to be in line with some of the things we're going to be talking about.
B
I'm sorry, bear with me... there we go. All right, so again, we're going to be provisioning a subgroup for you. You're going to be made the owner of it, so that you have full control over it, and you're going to be forking some projects into it, to then go ahead and work on the things that we're going to ask you to work on today.
B
"Customer success engineer": I'm not going to waste any time on that slide. This is our agenda, which you can see down at the bottom here. The first thing we're going to do is go through your lab setup, and again, you're going to need to have a gitlab.com account that's registered. So if you haven't got that registered yet, please go there and register it real quickly, and let me grab a couple of things real quickly.
B
Oh, you know what, I can't, because I'm a panelist; I'm sorry. So: gitlab.com, please sign up if you haven't done it yet. The scenario is that you're on a new team, Tanuki Racing. Your company has recently swapped over to using GitLab for CI and CD, and they've tasked you with learning about the different pipeline capabilities.
B
Now, let's go ahead and go through the lab setup. It's important that you don't miss out on these next steps; this is probably the most important part of our hands-on workshop today. We want to make sure that you have access to this provisioned group. This provisioned group is going to be a GitLab Ultimate group. You may have a GitLab Premium subscription, but this will give you access to all of GitLab's tools to complete our exercises with, today and through the course of this week.
B
So, to register for this, you need to go to www.gitlabdemo.com, and you'll need to use the redemption code that you see displayed on the screen right there. Taylor, I don't have a way to put this into the questions right now, because apparently it won't let me put that in there, but if you could capture that and put it into a spot that you and our other panelists can get to.
B
So, if you'll capture that and then use it in your sign-up, along with your GitLab username, you'll be able to provision your training environment. What that's going to do is produce a page that looks like this. In fact, let me just switch screens real quickly here.
B
Once you do that, you're going to end up at a group that looks something like this, except that this part of it here will be different and unique to you. It'll say "My Test Group" and have your unique string there, and then you'll have the ability to create projects here if you want to, although we're going to take just a slightly different route. So let me go back to my other window.
B
All right, so if you click on the "my group" link, or capture the link there and go to it, you'll be taken to your group. Now, let's go ahead and move on from here. The test group is going to look like this, except you'll have your own unique set of characters there after "My Test Group." If you find yourself here instead, there may have been a misclick: you may have put in the leading symbol from your username at gitlab.com.
B
You can start over again and put in your invitation code. (The invitation codes shown on the slide are not real.) Put in your GitLab username and provision the training environment. By the way, if you do this more than once, it's not going to provision more than one training environment; it keeps the environments unique per GitLab user. So you can go back and just go through the process again if you want to, to find your way back to your provisioned group.
B
The goal today really is to give you a chance to start working with the tools that are built into GitLab. GitLab pipelines are extraordinarily easy to set up and get operational, especially when you're using templates that GitLab provides, but you can create custom jobs of your own too.
B
Now, to do this, you're going to need to navigate to this particular project, and I really wish I had a way to put this into our Q&A.
B
Yeah
I
think
I.
Don't
sorry
so
you'll
need
to
navigate
to
this
particular
project
and
then
that's
going
to
take
you
to
a
screen
that
looks
like
this.
B
Now, for what we're going to want to do next, I want you to notice that I'm in the same project on both screens. The reason why is that our instructions are going to remain in the source project: under its issues, we're going to have the list of instructions that we're going to want to go through. But from the main page for this project, we're going to want to hit Fork.
B
You know, it can be kind of slow when it's getting operational here. I don't know why that didn't work... there we go. One of the things that you're going to want to do when you get to this Fork Project screen: you need to select a namespace, and it needs to be the same environment that you just got provisioned. In this case, that's this one here.
B
It'll,
take
it
a
minute
to
Fork
it,
but
this
will
give
you
a
complete
duplicate
of
the
Upstream
project
that
you
just
formed
now,
as
you
can
see
it's
all
there,
except
that
we
don't
have
the
issues
the
issues
didn't
come
over
with
it.
So
that's
why
we
keep
the
second
one
over
here,
so
that
we
can
open
the
open
the
issues
up
and
be
able
to
go
to
those
if
we
need
to.
B
Let's go back to our main screen again. So, you saw that I was doing this in two different windows. You could certainly do it in two tabs if that would work best for you, but the idea is that we're going to be using our reference material from the original project in this left screen, and on the right screen we're going to be actually working on our project.
B
Now, this next part is kind of important, just to make things easier for you. Once you're on that forked project, you're going to want to go down to Settings > General, scroll down to Advanced, and then remove the fork relationship.
B
It'll
want
you
to
put
in
your
the
name
of
the
project,
but
you
can
just
cut
and
paste
if
you
need
to,
but
to
just
show
you
that
real
quickly.
B
That
the
steps
for
completing
this
again
are
going
to
be
in
the
original
project
under
its
issues,
so
using
the
Dual
tab
or
split
window
philosophy
will
tend
to
give
you
a
great
way
to
be
able
to
follow
along
now.
Let's
talk
about
pipelines
generally
and
just
for
awareness.
This
is
an
area
of
deep
passion,
for
me
is
something
that
I've
made
a
personal
study
of
since
I
joined
gitlab,
and
it's
one
of
my
favorite
parts
to
get
live
is
being
able
to
build
out
these
Pipelines.
B
If
we
look
at
pipelines,
Anatomy,
we
can
see
that
it's
built
out
in
stages
and
stages
are
just
jobs
that
are
running
in
parallel
or
eligible
to
run
in
parallel.
If
you
have
no
friends
and
there
as
each
stage
completes,
the
next
stage
can
then
start
to
run,
but
this
deploy
job
isn't
going
to
be
able
to
run
until
test
day
and
test
pay
are
completely
done.
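The stage layout just described can be sketched in a `.gitlab-ci.yml` like the one below (the job names are illustrative, not the workshop project's actual jobs). Jobs in the same stage may run in parallel; each stage waits for the previous one to finish.

```yaml
stages:
  - build
  - test
  - deploy

build-app:
  stage: build
  script:
    - echo "Building the app"

test-a:
  stage: test          # test-a and test-b may run in parallel
  script:
    - echo "Running test A"

test-b:
  stage: test
  script:
    - echo "Running test B"

deploy-app:
  stage: deploy        # waits until test-a AND test-b are done
  script:
    - echo "Deploying"
```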
B
Now, let's talk about some of the functions that you have available to you inside of a job. The script section is shell commands that execute in your Docker container, or, if you're using a shell-based runner, it's just the commands, literally; they're going to be executed in your environment.
B
You
know
this
is
the
one
part
of
a
job,
that's
absolutely
required,
and
then
we've
got
options
like
you
know.
We
can
set
this
stage,
which
is
something
that
you
really
should
be
doing.
We
have
before
script
and
before
script
runs
before
your
script
section
does.
This
is
a
place
where
you
could
potentially
pull
in
libraries.
B
You
need
to
do
stuff
like
that,
although,
if
you're
doing
that,
you
should
be
thinking
about
potentially
rewriting
your
Docker
image
to
make
it
more
efficient,
we
also
have
an
after
script
and
again
the
idea
being
that
you
know
it
can
run
cleanup
jobs.
Things
like
that
in
the
after
script
does
not
impact
your
job,
success
or
failure.
So
if
you
have
a
failure
in
the
after
script,
it's
not
going
to
fail
the
job
itself
in
the
pipeline.
B
You know what, anonymous attendee, I don't know the answer to that. I don't know if the before_script can actually impact the outcome of the job. I think it can, but I haven't tested that specifically, so it's something that you'll want to verify works for you. Now, if we go back to the issues, we can see the list of things that we're going to want to work through. I'm going to have to move a little bit quicker than this for today's session.
B
So,
let's
talk
about
Job
execution
order
and
Dax
and
Dax
you'll
recall
from
our
webinar
last
week,
is
directed
a
cyclic
graph.
So
this
is
the
ability
for
jobs
to
have
interdependent
relationships
to
find
that
allow
them
to
run
out
of
the
normal
stage
order,
if
appropriate,.
B
So
after
showing
off
your
pipeline
to
the
team,
they
love
it,
but
they're
wondering
if
you
could
speed
up
the
process
a
little
bit
it
directed
a
cyclip
graph
is
definitely
one
of
the
ways
you
can
decide
that
you
know
you
decide
you're
going
to
show
off
your
skills
show
how
you
create
a
pipeline
with
different
execution
orders,
as
well
as
a
large
directed
a
cyclic
graph,
to
show
what
it's
really
possible.
B
Now,
as
I
said
in
the
previous
slide,
this
unit
test
is
not
eligible
to
run
until
the
build
up
in
the
previous
stage
has
completed.
At
that
point,
it's
eligible
to
run
intercepted
in
so
we
can
adjust
this
order
if
we
want
to,
we
can
make,
for
example,
and
I,
want
you
to
notice
that
both
of
these
jobs
in
this
test
pipeline
have
got
a
great
Dot,
and
that
just
means
that
they're
not
eligible
to
run.
Yet
until
this
build
app
is
done.
B
The way we do this is with the needs keyword in a job.
B
So,
let's
see
so
this
is
an
example
directed
a
cyclic
graph,
a
pipeline
that
uses
needs
to
run
as
fast
as
possible,
and
what
you
can
see
here
is
to
build
test
day
is
dependent
upon
build
a,
but
it's
not
dependent
upon
build
B
and
then
deploy
a
is
able
to
deploy
as
soon
as
test
day
is
completed,
and
so
this
gives
us
the
most
efficient
pipeline
for
each
of
these
two
build
and
test
scenarios
all
the
way
through
to
deployment,
and
that's
what
dissected
directed
a
cyclic
graphs
are
designed
to
help
us
do
now.
B
It's
actually
possible
to
build
a
stageless
pipeline
without
the
stages
I.
Don't
prefer
to
do
this
I
really
like
to
have
the
stages
there
just
so
it
enumerates
the
types
of
jobs
into
the
various
stages
that
I
like
to
put
there
I
will
use,
needs
a
lot
to
subvert
the
normal
stage
operational
order,
but
it's
possible
to
do
this.
If
you
wanted
to
do
it,
you
could
use
the
needs
keyword
in
every
single
job
to
declare
its
dependencies,
and
you
could
actually
have
a
stainless
pipeline.
B
If
you
wanted
to
take
that
route
now,
a
status
pipeline
can
make
your
pipeline
more
efficient
to
be
real
Frank
with
you,
I,
like
the
layout
of
having
it
in
stages,
but
I'll
use
needs
to
go
ahead
and
create
the
operational
orders
that
I
wanted
to
to
take
on.
B
So
when
we
do
this,
we
implicitly
implicitly
configure
the
execution
order
a
little
faster
and
right,
more
efficient,
Pipeline
with
less
cycle
time.
The
same
is
true
even
if
they're
in
stages,
so
even
if
they're
in
stages
and
happier
jobs
allocated
to
stages
but
they're,
using
their
needs
to
suffer
waiting
for
the
previous
stage
to
complete
you're,
going
to
get
the
same
net
effect
as
having
a
stageless
pipeline.
B
Now
again,
the
Hands-On
steps
are
contained
in
the
source
project,
and
this
was
execution
order,
Dax
that
we
just
went
through.
B
If
we
don't
allow
a
test
to
fail
and
then
the
test
fails,
it's
going
to
stop
the
pipeline
from
running
all
the
jobs
from
that
point
on
that
haven't
already
started
are
not
going
to
execute,
so
we
can
use
rules
and
failure
Clauses
in
our
gitlab
pipelines
to
help
get
around
this,
and
this
is
a
real
good
example
of
it
right
here,
and
this
is
what
you'll
see
these
exclamation
point
with.
An
orange
circle
tells
us
that
that
pipeline
has
fit
that
particular
job
has
failed,
but
it's
allowed
to
fail.
B
Then it's going to go ahead and run. allow_failure: false is the default as well, so if your jobs fail and you don't explicitly set allow_failure to true, they're going to stop your pipeline. And a job without any rules defined is going to just run automatically whenever a pipeline is created.
B
So
a
job
is
included
in
a
pipeline
and
for
real
evaluates
to
true,
and
it
has
the
Clause
that
went
on
success
when
delayed
or
when
always
different
use
cases
for
each
one
of
those
but
get
to
get
to
have
each
one
of
them
for
different
circumstances.
B
So, in this particular case, we're looking at one of the predefined variables, CI_PIPELINE_SOURCE, and we're looking to see if it's "web". In this case, that's if you were to go to the CI/CD > Pipelines page and then run a pipeline. That's what this "web" means: somebody went to the web form and manually ran a pipeline, and this job will only kick off when the pipeline is kicked off from the web form.
B
So, it only has one rule, and the rule is going to default to when: on_success, which is the default for the when clause in a rule. And again, if it doesn't find a matching rule, say somebody does a commit, the job isn't added to the pipeline.
B
We
have
the
ability
to
use
if
in
conjunction
with
changes
and
exists
and
changes
is
looking
for
changes
in
a
specific
file
list
of
files
or
potentially
a
directory,
depending
on
how
you
want
to
do
it
exist,
is
going
to
look
just
to
see
if
something
exists
in
the
repository
and
if
it
exists
in
the
repository,
then
it's
going
to
go
ahead
and
run,
and
these
can
both
be
used
in
conjunction
with.
If,
if
you
want
to
take
that
route.
B
Okay,
that's
good
to
know
so
you
ran
at
exit
one,
it
failed
the
job
and
and
then
there's
nothing
in
the
docs.
That's
saying
that
so
Taylor
would
you
mind
taking
a
note
that
and
we'll
Circle
back
with
the
docs
team
and
see
if
we
can
have
that
addressed
by
the
the
verified
team.
B
It's
just
making
sure
that
there's
parity
between
a
variable
and
a
value
that
you
want
to
test
for
not
equals
is
exactly
what
you
think.
It
is
it's
a
case
where
it's
not
equal
to
that
that
particular
value,
and
then
these
two
here
that
have
the
tildees
at
the
end.
That's
equals
tilde
not
equals
tilde's.
B
You can combine multiple tests using the AND (&&) and OR (||) operators, if you want to take that route. You can compare one variable and then compare another variable, and at that point you've got a complex, compound rule, if you need to have it.
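A compound rule along those lines might look like this (the job name and the DEPLOY_ENV variable are illustrative assumptions, not from the workshop project):

```yaml
deploy-prod:
  script: [echo "deploying"]
  rules:
    - if: $CI_COMMIT_BRANCH == "main" && $DEPLOY_ENV == "production"
```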
So, job attributes: we have when, allow_failure, and start_in. when can be on_success, delayed, things along those lines; you'll want to read into the when attributes that are available. allow_failure defaults to false, so if you want to allow a job to fail without failing your whole pipeline, you're going to want to set allow_failure to true. And then start_in just gives you a way to execute the job in some delayed fashion from the point that it's eligible to be run.
B
We don't want to run it on a merge request, so we test for that: we check whether we are in a merge request, and if we are, the when would be never. That's a negative rule application, so you can make sure that the job doesn't run in that circumstance. on_success, again, just means that all the jobs that preceded this one either passed or were allowed to fail, one of the two, and then it can go ahead and proceed. And on_failure is for a special circumstance.
B
when: manual is going to give you a manual play button in the pipeline, so that when you're looking at the pipeline and seeing the jobs displayed in the stages, it'll have a play button there. This is something that you can actually regulate with protected... I'm sorry, protected branches and protected environments: you can actually delineate who's allowed to click on that play button, if you want to take that route. And then delayed is what we were talking about previously.
B
So, when is a job not created in a pipeline? A job is not created, and not included in a pipeline, if none of the rules defined for the job evaluate to true, or if a rule evaluates to true but has a when clause of never. Again, that's the implication of a negative rule: in this circumstance, we never want it to run.
B
So, this is showing us when being used in conjunction with rules, and this is a standalone when under a rule. If either of these two things match, the job won't run: if CI_PIPELINE_SOURCE equals "merge_request_event", we don't want this job to run; if CI_PIPELINE_SOURCE equals "schedule", we don't want it to run. But in all other circumstances, it's going to run, as long as the previous jobs in the pipeline were successful.
B
So
into
configure
for
manual
execution-
and
this
is
what
I
was
just
talking
about
a
minute
ago-
you'll
get
a
play
button
on
these
jobs
that
are
intended
to
be
manual
jobs.
They
can
be
open
for
anybody
to
click
on.
If
you
want
to,
although
just
know
it's
protected
branches
and
environments,
you
do
have
a
way
to
regulate
who's
allowed,
to
do
that
by
role
or
by
individual
username.
B
If
only
the
after
skip
script
fails,
then
what
a
retry
trigger
everything
scripts
that
were
just
after
scripts
on
me,
it's
going
to
retry
everything
retry
is
going
to
redo
the
whole
job
from
before
script
script
and
after
script
and
again
the
after
script
won't
fail
a
job.
So,
even
if
the
after
script
after
script
has
some
kind
of
failure
in
it,
that
would,
if
it
were
in
the
script
section,
would
fail
the
job.
B
You
know
it's
not
going
to
fail
the
job,
although
you
will
be
able
to
see
that
from
the
job
log.
So
you,
if
you
were
to
click
on
test
B
here
and
open
up
that
jobs,
page
we'd,
be
in
the
job.
Log
and
we'd
be
able
to
see
what
the
what
the
return
you
know
what
was
returned
from
the
commands
things
along
those
lines.
Any
error
messages.
B
So, this is giving us a good example of start_in. This rule, if CI_COMMIT_BRANCH equals master, could be matched for you; then it's delayed, we're going to start it in three hours, and allow_failure is true here. So in this particular case, this job, the Docker build, is allowed to run, but if it fails, it doesn't fail the pipeline; your other tests can continue.
B
It's going to look to see if we've had changes in the Dockerfile itself or in the Docker scripts, and then this can be a manual job. So, if it finds changes in either one of those two areas, it's going to present us with a manual job that somebody will have to click on to make it run.
B
Now,
let's
talk
about
the
variables
processing
order,
because
variables
can
be
set
in
your
gitlab
CI
getlab.ci.ymount,
they
can
be
said
in
the
project
that
can
be
set
in
a
group
that
sits
above
it
if
you're
self-hosted,
you
can
actually
set
them
in
your
instance
too,
and
then
you
know,
kitlab
is
going
to
go
through
and
create
these
predefined
variables.
B
You're
going
to
have
deployment
variables,
so,
if
you're
on
a
web
form,
if
you're
on
the
manual
red
pipeline
page,
you
can
actually
create
variables
there.
If
you
need
to,
then
you
have
the
yellow
defined
job
level
variables
to
be
able
to
find
global
variables.
B
You
have
inherited
environment
variables,
which
is
a
different
kind
of
scenario
where
one
job
sends
its
it's
values
down
to
the
next
one
for
its
script
section,
and
then
this
is
the
precedence
here
and
then
that's
what
gets
passed.
So
you
can
know,
what's
going
to
overwrite
what
in
your
variable
processing
before
it
gets
passed
on
to
the
git
lab
Runner.
B
We'll resume at 53 minutes after the hour, but let's just take a real quick break and take a breather. Maybe you want to go back and work on some of the things that we've been talking about already.
B
We're not officially back for another three minutes here, but Cosmen Trefner, would you ask that question one more time? I hit the wrong button and it won't allow me to type in a URL for you. So if you could ask that one more time, or if somebody could just put that question in there, I can type an answer out for it. The question was: where are the issues? He was saying that he doesn't have any issues in the project that he forked, and that's because they're in the upstream project.
B
Foreign,
let's
go
ahead
and
we're
about
a
minute
early,
but
let's
just
go
ahead
and
start
resuming
again.
B
So
now
we're
going
to
talk
about
SAS
and
artifacts
you're,
going
to
have
cases
where
you're
going
to
want
to
be
able
to
create
an
artifact
and
leave
it
behind
for
a
subsequent
job
to
be
able
to
to
get
to
and
you're
also
going
to
want
to
be
able
to
engage
in
zest.
Scanning
SAS
scanning
is
available
all
the
way
through
to
our
free
tier.
So
you
don't
have
to
have
our
ultimate
tier
to
be
able
to
use
SAS.
There's
a
few
other
scanners
like
that
too.
So
just
be
aware,.
B
So
toward
that
end,
if
we
want
to
configure
assess
manually,
we
can
go
ahead
and
include
a
template.
Job
so
include,
as
you'll
recall
from
our
webinar
last
week,
is
how
we
can
include
files
coming
in
from
separate
certain
locations,
and
this
particular
one
template.
This
keyword.
Template
means
that
it's
a
job
that's
shipped
with
kitlab,
so
gitlab
ships,
the
SAS
testing
job.
It's
a
it's,
a
very
Universal
type
of
testing
approach.
B
So, what is a template? It's a way to share CI configuration. There are a couple of different ways of using this terminology, but when we use the keyword template, it's talking about something that's shipped with GitLab. You can also create your own templates; they just don't use the keyword template. They would use the project keyword instead, so they would point to an independent project, and this allows your teams to assemble templates of your own that you share with the various projects in your organization.
B
So, we can see here that SECURE_ANALYZERS_PREFIX has got a specific location to go to to get its secure analyzers, along with a secure analyzer version. We've got SECRET_DETECTION_EXCLUDED_PATHS, which in this case means just check everything, and then down at the bottom we've got a secret-analyzer job, and I want you to notice this is a hidden job.
B
So
if
we
look
at
the
included
SAS
jobs
default
Behavior
and
by
the
way
I
want
you
to
notice
that
these
are
links
and
when
we
send
these
slides
out
in
the
email,
we're
going
to
send
you'll
be
able
to
access
these.
If
you
want
to
go
through
this
deck
again,
you
can
look
at
the
SAS
documentation
to
understand
how
we
recommend
interacting
with
it.
You
can
also
look
at
the
SAS
template
itself
and
see
how
the
job
is
defined
in
GitHub.
B
So
you
can
see
on
this
example
over
here
on
the
right,
where
we've
set
the
variables
up
and
then
sassed,
we've
declared
SAS
again
here
on
this
bottom
part
portion
here.
So
we've
it's
already
been
included,
but
we're
also
declaring
it
a
second
time,
and
this
is
allowable
under
git
lab.
So
you
can
Define
the
job.
You
know
by
name
in
this
included
template
file,
and
then,
if
you
declare
that
same
job
name,
you
can
overwrite
things
coming
in
from
the
template.
If
you
want
to
take
that
route.
B
So,
by
default,
SAS
will
use
pattern
matching
to
decide
what
language
scanner
to
execute
be
aware
of
the
SAS
success.
Suite
of
security
testing
is
designed
to
cover
a
very,
very
broad
range
of
different
programming
languages.
In
our
case,
the
app
that
we've
got
encapsulated
in
the
project
that
you've
worked
over
to
your
private
groups
is
a
node.js
app.
B
Now,
artifacts,
you
know,
artifacts
are
something
that
you
can
create
right.
This
might
be
a
Docker
image
if
that's
appropriate,
it
might
be
a
build
image
if
you're
having
to
do
a
build
for
your
for
your
code.
B
You
know
it
could
be
anything
you
want,
including
certain
types
of
reports
and
pair
with
me
for
a
second
year,
I'm,
going
to
catch
up
a
little
bit
on
some
of
the
questions
and
see.
If
there's
anything
I
can
take
off
of
here.
B
Now, if you've taken the time to provision artifacts in your jobs, and there are some very specific keywords to use for that, there are a few different places you can download them from. On the Pipelines page, you can download an archived collection of all of the artifacts that are generated by that pipeline.
B
We've delineated the stage and the script it's going to run, and then we're also going to specify its artifacts. It's going to have a path here, so we're saying: pay attention to this particular path. And then, in this case, we're setting it to expire in one hour. This is a very good thing to engage in; you might want to expire it in a day.
B
You might want to expire it in two days, whatever works for you and your team. But the idea is that it's not uncommon, for self-hosted GitLab or even on gitlab.com, to run into storage limits; remember that you're allocated a certain amount of storage there, and this will contribute to the storage for a single project. So consider whether you've got a huge number of Docker builds out there that have been kept as artifacts for one reason or another, maybe because you need the ability to download and test those builds, those Docker images.
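The artifacts setup just described can be sketched as follows (the job name, path, and expiry are illustrative): the expiry keeps old artifacts from accumulating against the project's storage quota.

```yaml
build-app:
  stage: build
  script:
    - npm run build
  artifacts:
    paths:
      - dist/            # keep everything under dist/ for later jobs
    expire_in: 1 hour    # cleaned up automatically after an hour
```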
B
Now
the
last
thing
is:
if
you
want
to
transfer
the
project
so
after
you
get
done
working
on
this
project
and
you
want
to
keep
it
somewhere
else,
you
know
you're
you're
going
to
be
you're
going
to
need
to
transfer
the
project,
because
this
group
that
we've
allocated
for
you
is
going
to
be
to
keep
de-provisioned
on
Friday
of
this
week.
B
So
you're
going
to
want
to
make
sure
you
do
this
before
Thursday.
If
you
want
to
take
that
route-
and
there
is
some
follow-up
work
that
we're
going
to
be
sharing
with
you
in
an
email
that
goes
into
security
and
compliance,
if
you
want
to
go
through
that
as
well,
so
just
be
aware
that,
if,
if
you
want
to
get
these
out,
you
can
transfer
them
out
to
a
different
location
and
then
have
them
there
for
reference.
You'll
have
to
transfer
them
out
to
you
know
it
might
be
your
private
namespace.
B
So, only transfer when you're done; until then, you definitely want to keep it here in this Ultimate workspace. The other advantage you've got there is that if you reach out to our customer success engineers in the Slack channel that you're going to be invited to today, it gives them a chance to access that project with you, in case we need to help you do some troubleshooting on something.
B
All
right,
so,
let's
get
back
to
transferring
the
project
real
quickly
when
you
transfer
the
project,
if
you
do
not
have
an
ultimate
message,
you'll
lose
some
capabilities
that
you
know
some
of
the
things
that
you're
going
to
be
able
to
see
when
you're
in
the
ultimate
license
where
it's
got
everything
the
GitHub
can
do
enabled
by
the
way.
B
This
is
a
great
chance
to
get
out
there
if
you're,
not
using
ultimate
right
now
and
really
get
to
an
understanding
what
gitlab
is
capable
of,
and
if
you
follow
up
with
the
security
and
compliance
Workshop
information
that
we're
going
to
be
providing
in
the
email
coming
out.
B
So, I'm going to go into my version of the project here for just a minute. I want to make sure that you're aware of a couple of things when you're editing your .gitlab-ci.yml file. The best way to do it is in the editor that's under CI/CD here. Now, under the new navigation, this is not called CI/CD; it's called Build, so just be aware. If we go into the editor, you can see that it's automatically going to load the .gitlab-ci.yml file.
B
If
we
were
to
add
additional
files,
they
would
show
up
in
a
retrieve
you
so
just
know
that
those
are
there
also.
The
other
advantage
to
using
this
if
you
want
to
take
that
route,
is
that
if
you
put
something
in
here
wrong.
B
Now, if we go to the CI Lint tool: what we would have to do with the file that we're working on is copy the contents of that file and paste them in here. But then, if there's something syntactically wrong, this linter is going to tell us; it's going to explain to us what the problem is. So be aware of that.
B
This actually takes you through how to set up the project, if you want to do it. Then we want to create a simple pipeline, so in the unit test...
B
...it's a bit of a distorted view, but if we go to it, we can actually see that the unit test doesn't qualify to run yet, because the build app job is not yet done. Now, these are going to take a long time to complete, and that's one of the reasons I wasn't going to be able to walk through all of these with you today, but let's go ahead and go to the next step here.
B
Now, the difference this time is that we put the needs parameter into both code quality and unit tests, so that they don't have any requirements to run: they don't have to wait for the build job to start running, or to finish running, to be able to execute. So, as you can see, code quality and unit tests are up and operational right away.
B
All right, any questions before we get to the very end today?
B
Please be on the lookout for the Slack invitations. You're going to get the email, which is going to have the link to the slide deck and the link to the recording, so know that those are there. It'll also have some instructions that you can follow to start working on some of the optional material. But I want you to be aware...
B
...it might be distracting; it's quite a different way to go. And then there's one other option here that I just want you to be aware of. If you have to develop complex and multiple workflows for pipelines for your teams, this is a project of mine that I put together quite a while ago, and it's designed to create a very rapid testing scenario.
B
Essentially,
it
dispenses
with
job
logic
altogether
in
favor
of
just
being
able
to
execute
and
test
rules
test
dependencies
between
jobs,
things
along
those
lines,
and
it
can
make
a
very,
very
fast
way
for
you
to
test,
because,
as
you
saw
every
single
time,
we
go
to
run
any
of
these
pipelines.
It
can
take.
You
know
many
many
minutes
to
finish
up
pipeline
in
this
other
project
here
this
one
that
I'm
talking
about
right
here
in
this
option
number
seven
in
that
particular
project.
B
It
can
run
it
even
a
fairly
complex
pipeline
in
just
a
couple
of
minutes.
It's
just
done
and
a
simple
one
can
run
in
less
than
a
minute.
So
just
be
aware
that
that's
there
and
it's
something
just
to
consider
if
you
need
to
design
multiple
independent
workflows,
so
you
can
test
those
workflows
together
in
a
place.
That's
not
in
a
project
that
has
other
people
potentially
dependent
upon
the
work
you're
doing.
B
Cosmic,
please
do
look
for
the
recording
it
will
be
sent
out
in
the
email
now
it
might
be
after
your
end
of
day
today,
because
we're
going
to
be
processing
this
with
a
team,
that's
on
the
US
in
the
U.S
on
the
Pacific
time
zone,
but
we'll
get
it
out
as
quickly
as
we
can,
but
please
be
on
the
lookout
for
that
video.
B
The email will have the slide deck, if you want to review anything from there, and then some follow-up instructions for everything else. We apologize for any technical problems we had today; we're going to be restructuring these sessions going forward a little bit to make it easier for us to share information with you, and we'll look forward to the next chance we get to interact with each other. And again, all the attendees are going to get the Slack invite.
C
Awesome. Well, thank you, Steve. We appreciate everyone joining us, and like he mentioned, we will be sending out all this information in the next day or so. With that, enjoy the rest of your day, everybody.