From YouTube: Hands-On GitLab CI Workshop - AMER
Description
Watch the playback for a hands-on CI workshop, in which you will learn how to build simple GitLab pipelines and work up to more advanced pipeline structures and workflows, including security scanning and compliance enforcement.
All right, let's go ahead and get underway. My name is Steve Gray, I'm a Customer Success Engineer here at GitLab. I've got several of my peers on here today to help with question and answer, and if you have a question, please use our Q&A function for that. If you have a comment that doesn't require an answer, feel free to use the chat for that.
So let's get this out of the way today, just for awareness: sometime tomorrow you're going to be getting an email that has a link to the slide deck that we're using today, as well as a link to the recording and a link to the optional content that we're not going to be covering today. So just be aware that that optional content is there.
We normally provide two days for you in these provisioned subgroups you're going to have at gitlab.com to go through the workshop material, but for today's workshop you're going to be getting four days. So just be aware of that; you've got more time to work with, and that's because of the optional material that we want you to review and potentially work through.
It's really important (bear with me for a second here while I rearrange my desktop), it's really important that you have a registered username at gitlab.com.
If you need to, go to the second link there and sign up right now for a GitLab user so that you'll have that, because without that we don't have a way to provision you a subgroup, and you're going to need to be working on this in GitLab. By the way, the subgroup that you're going to get is a fully provisioned Ultimate subgroup under our Learning Labs namespace at gitlab.com. So just be aware that it's fully functional, you will be the owner of it, and it'll be de-provisioned in about four days.
So you've got four days to work through the content that we're going to be going through today and the optional content, and to continue on from there this week.
So with that in mind, let's go ahead and get underway with our CI workshop. Again, my name is Steve Gray, I'm a Customer Success Engineer here at GitLab. I've been here about four years. It's been a fabulous ride and I've absolutely enjoyed my time at GitLab, and one of the reasons I joined was to get to understand and know the tooling. This is part of that process, so this is something I really enjoy presenting.
This is our agenda for the day. We're going to go through lab setup, we're going to go through setting up a simple pipeline, we're going to talk about execution order and directed acyclic graphs (and by the way, you may hear me call it an "acrylic graph"; I don't know why I do that, but once in a while I call it an acrylic graph). We're also going to discuss rules and failures, then we're going to talk about SAST and artifacts, and at the end we'll have the option, if you want to do it, to transfer your project out, although we'll be covering caveats on that as well.
Now, don't miss out on the next steps, as the group that you request will have full access to all of our new AI features too, if you want to take a little time and dig into AI and how it works. AI is enabled on this particular namespace at gitlab.com, so you can play around with that if you want to.
So you're going to need to go to GitLab Demo. By the way, let me put this back in the chat for the benefit of those who joined after I first posted it: you're going to need to go to www.gitlabdemo.com.
We're going to redeem the invitation code, the registration code that I pasted into chat, and then you can click on Redeem and create your account there.
What you'll need to do is go to gitlab.com, log in, and capture the portion of your username that doesn't include the @ symbol, so everything that follows the @ symbol. It's really important that you be able to do that. If you put the @ symbol in, you'll very probably get an error, so just be aware.
So again, after you put the registration code in, you're going to put in your GitLab username, and then you're going to get a page that looks like this. It's really important that you take a minute to capture that GitLab URL, because that's the URL to your session-based group. Again, it's an Ultimate group and you're going to be the owner of it. Just add it to a notes document somewhere or bookmark it, whichever you deem appropriate, but you can also click on "My Group" to go right to it.
Now you should be in a group that looks very much like this. It'll say "my test group" at the top (oh shoot, sorry about that), and it'll have a different string after the dash symbol than what you're seeing here, but that string will be unique to you, so it has a unique name.
If you end up here, something went wrong, so let's go back and go through that one more time. You need to make sure you enter the correct username at the redemption stage, and the invitation code that you see displayed there is the wrong one; the right one is at the bottom. Then you would put in your GitLab username, again without the @ symbol (you can use everything but the @ symbol), and then you should be here.
So let's go ahead and dive into that for just a second.
How about this one: is it GitLab Ultimate that we're using in this demo? Yes, it is. The group that you're being provisioned at gitlab.com is an Ultimate group.
So again, you want to capture this URL down here and make sure that you either bookmark it or add it to a notes document, and then you can go directly to "My Group". You'll see that my group looks a little bit different than yours does, because it's got a different string here at the top, but this is essentially what we're going to be working with.
So the username that you need to use, Mason and Liz, is your username for gitlab.com without the @ symbol. Go to gitlab.com, log in, look at your username there, and then use that without the @ symbol. And by the way, you can do this to look at your username. Oh, I'm sorry, I'm clicking the wrong one there, bear with me.
Now we're going to talk through setting up a simple pipeline. Before fully pushing out the application, your team wants to see and test a few different types of pipelines to see what fits their needs best. The first task your product manager gives you is to create a simple pipeline that builds and tests the application.
So now we're going to navigate to the project that you see there, which is included in the paste that I put in earlier. If anybody's joined since then, they may not have access to it, so I'm going to link it in the chat again.
Oh, thank you, Justin, I appreciate that. Justin just put the link to the project that we're going to be going to into the chat. Again, the project that we're going to fork, which is what we're going to do today, has the instructions in it, but when you fork it, the instructions are not going to come with it. The instructions are in issues that are in that project, and when you fork a project, you don't get the issues.
So you'll want to be working in two windows, or side by side like I'm doing here.
Now, one of the options, of course, is going to be your personal namespace. Don't use that, because that's almost invariably not provisioned as an Ultimate group, and so that just won't work. So pick the one that we just created, use that one, and then you can click "Fork project" on the bottom.
The steps for completing the workshop will be in the issues of the project you just forked. So again, you want to keep that project in a separate window and click on the issues there, so that you can see the instructions that we're going to be going through.
To be able to make this work, if you're on a self-hosted instance and that's what you're used to with your work email, you'll need to go to gitlab.com, register a user, and capture that username.
You're going to have to be able to put your project name in there; I just copied it.
All right, let's go back to our pipeline anatomy real quick. What you're seeing in front of you is what a GitLab pipeline looks like. The columns that you're seeing there are stages, and listed inside of each one are the individual jobs that are assigned to that stage. You can have a huge number of jobs inside of a stage, so you don't have to worry about that. But the thing to realize is that all of the jobs in one stage have to complete before the next stage is going to run.
The other thing to realize is that, especially if you're running a shared runner fleet, the probability of these jobs being run on different runners is very, very high.
Now, Jeff Stevenson, you're looking at a job here. The job is called "production", presumably a deploy job.
If something fails in the before_script, that's going to fail the job, so just be aware of that. The before_script holds commands that execute before the script statement executes; it's concatenated with the script, and it runs in the same shell. A very typical use for this would be if you need to load libraries or something like that into, for example, the Docker container.
Although for that particular use case, you might want to think about creating your own Docker images, putting them into the container registry, and using those instead of using the before_script. But it is there in case you need it. There's also an after_script that's available to you.
It runs in a separate shell after the before_script and the script statements. The thing to realize about the after_script is that if a command fails in the after_script, it's going to fail your job. So that's an important thing to realize.
Well, you know what, thank you, Dan, for telling me. So, after_script exit codes do not impact job success or failure, but the before_script's do; I had those two flipped around.
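To make the three script sections concrete, here's a minimal sketch of a job using all of them; the job name and echo messages are hypothetical, but the behavior matches what was just described:

```yaml
# Hypothetical job showing before_script, script, and after_script.
build-app:
  stage: build
  before_script:
    # Runs first, concatenated with script in the same shell.
    # A failing command here fails the whole job.
    - echo "Preparing environment"
  script:
    - echo "Main build commands go here"
  after_script:
    # Runs in a separate shell after script finishes.
    # Its exit code does not affect the job's success or failure.
    - echo "Cleanup runs regardless of the job result"
```

A job defined this way fails only if a command in before_script or script fails.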
Anonymous, when you scroll down to the bottom (I'm going to have one of my peers help you so that we can move on), but when you scroll down to the bottom of the Advanced section under Settings, General, and you click on "Remove fork relationship", you'll get a little text box where you have to copy and paste the project's URL name before you can remove it.
All right, so GitLab Runners. Let's talk about these for just a minute. GitLab Runners are the equivalent of an agent or a node in Jenkins, and these are what actually pick up and run jobs. Every single job gets an independent runner.
They could be the same runner if you've only got one runner registered, but if you've got a shared runner fleet like what we're going to be working with today, then the probability is very high that you're going to have different runners for every single job. So runners are going to run all the jobs you define in a pipeline. Now, they can be tagged.
So if you want to create a runner with a very specific build or load on it, you can absolutely do that, and then you can use the tags keyword in your jobs to make sure that a job only runs on that runner, if you need to do that. The job duration could take a little bit of time, but jobs are typically picked up within about five seconds; that's the average.
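As a quick sketch of runner tagging (the tag name and job are hypothetical):

```yaml
# Hypothetical job pinned to a specially provisioned runner.
windows-build:
  stage: build
  tags:
    - windows   # only runners registered with this tag will pick up the job
  script:
    - echo "Built on a tagged runner"
```

If no runner with a matching tag is available, the job stays pending rather than running on an untagged runner.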
Now let's go through the hands-on steps for this. Again, these are in the issues for the project that we forked, which you can see I've still got in here.
We've already got a pipeline defined here. We have two stages, build and test, as well as a build app job that uses the before_script and script sub-keywords. In the unit test job we want to use the after_script keyword to echo out that the build was completed. So to edit the pipeline, we need to click Edit in the pipeline editor.
So, after showing off your simple pipeline to the team, they loved it, but they're wondering if you could speed up the process a little bit. You've decided you're going to show off your skills and show how you can create a pipeline with different execution orders, as well as a large directed acyclic graph, to show what is really possible.
Now again, the default execution-order behavior is that the jobs in the next stage after the build stage will start after all jobs in the previous stage have completed successfully. That's just the default.
We can adjust that if we want to. The current state of CI/CD execution: jobs in the test stage execute after all jobs in the build stage are completed. Again, we've been through this a couple of times now. Then there's the desired state of CI/CD execution: code quality does not need the build's results, so it can execute in parallel.
So at the beginning of the pipeline execution, both jobs are now blue dots, i.e. executing at the same time, and this is what we're actually going to see when we get it up and running.
It's possible to do that with the needs keyword. In this particular case, what you can see is an example directed acyclic graph, and this pipeline uses needs to run as fast as possible. Test A is only reliant upon build A, and then its deploy job can run as soon as test A is done, and the same thing for build B, test B, and deploy B.
So let's go ahead and talk through that a little bit.
So, what's in it: this allows the needs keyword to be used in the same stage. Previously it could only be used between jobs in different stages, but we've changed the way that GitLab pipelines work now, and you can actually make a job dependent upon a job in the same stage. We can declare that with needs.
So why is this useful? DAG pipelines make your pipeline more efficient. Again, I'm going to be very frank: I've not really seen that in action personally. I like to keep my jobs categorized with their stages, so that I have a pretty good sense of what's running when. But you can implicitly configure the execution order, it's faster to write, and this makes for a more efficient pipeline with less cycle time. So, how to navigate: we go to the CI/CD editor and then use the needs keyword, and this is in all tiers after 14.2.
You can see that our pipeline jobs ran sequentially, but if we wanted two jobs to run in parallel, we can do that with the needs keyword. So let's navigate back. Let me show you this particular methodology real quick: here we're going to navigate back to the .gitlab-ci.yml file, and then we have the option to edit it in the pipeline editor. Now remember, the other way that you can get there is to go down to Build and then Pipeline editor.
Let's talk about rules and failures now. If you're not familiar with GitLab pipelines, we have the ability to set up rules, and the rules determine when a job will run. We could make a rule just for merge requests if we want to, and that merge request rule can accommodate a specific job; it can be put on the job itself, and then the job will only be qualified to run in the case of a merge request.
So, as you come back to the team and show them your new pipeline, you notice that one of your test jobs is failing. After taking a look into the job, it is determined that you don't actually need to enforce passing, but you still want to see the results. This is relatively common with test jobs, right? We all know that sometimes test jobs just fail, but we want to be able to see the results. We want to be able to see the job log.
We want to be able to look at what failed and what kind of error message we got, but it doesn't have to stop the pipeline, because if a job fails, any jobs that would be qualified to run after that failure are not going to run. We don't want that to happen, so we're going to look at some ways to separate that behavior.
If we use the keyword allow_failure: true, the failing job is logged and the pipeline passes with a warning, but it doesn't prevent subsequent jobs from executing, and that's what you're seeing the orange arrow point at right here: allow_failure: true, that's the keyword we're talking about. As for defaults, there's a keyword that we use in rules called when; it can also be a keyword that's applied directly to the job if you need to do that. The job has default states: the when default is on_success, and allow_failure defaults to false.
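A minimal sketch of allow_failure in use (the job name and induced failure are hypothetical):

```yaml
# Hypothetical flaky test job that may fail without blocking the pipeline.
code_quality:
  stage: test
  script:
    - exit 1            # induced failure for demonstration
  allow_failure: true   # job is marked with a warning; later stages still run
```

With this in place, a failure in code_quality shows up in the job log but jobs in subsequent stages remain eligible to run.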
So again, if we need a job to be able to fail, we want to put that allow_failure keyword in and make it true. Jobs are included in the pipeline when a rule evaluates to true and has a clause of when: on_success, when: delayed, or when: always, which are some options we're going to explore.
A job is also included if no rule is defined and no when clause is specified, because the default for when is on_success. So if you don't create a rule in a job, and you don't put an independent when clause into the job, that job will be qualified to run by default.
This is a rule that only evaluates to true if somebody runs a manual pipeline, so this job is only going to run when the pipeline is kicked off from the web form. But also notice that if statements can reference variables. There's a huge list of predefined variables; you can search for "GitLab predefined variables" and you'll get the list. It's a monster, but there are a lot of predefined variables you can use to help form your rules and get the exact scenarios that you want for your jobs. So, rules syntax.
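The web-form-only rule just described might look like this (the job name is hypothetical; $CI_PIPELINE_SOURCE is one of GitLab's predefined variables):

```yaml
# Hypothetical job that only runs for manually started pipelines.
manual-only-job:
  script: echo "started from the Run pipeline web form"
  rules:
    - if: $CI_PIPELINE_SOURCE == "web"   # true only when the pipeline is run from the UI
```

GitLab evaluates the rule when the pipeline is created, before any runner is involved.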
Let's talk about this real quickly. There are different kinds of clauses we can use. We can use if to evaluate a rule and see if it's something that we actually want to run, which, again, GitLab will evaluate for you before it hands the job off to a runner. We can use changes, which gives us the ability, for example in a commit or in a merge request, to look and see if a specific file, a directory, or the contents of a directory have been changed. And then we can also use exists.
Or, if this variable doesn't equal this. Then the next two that you see, the equals with the tilde sign (=~) and the not with the tilde sign (!~), those are regex; you can use regex expressions in your rules if you need to do that for any reason. And then, of course, the && is what you think it is: it joins two different rules together under one if clause and allows them to be evaluated together, meaning they both have to be true for that to work. And then the two pipe symbols (||) is an or.
You know, ten minutes after the job becomes eligible to run, or in three hours; that gives you a way to do that if you want to. And then the when options. When you use the when clause, remember that it defaults to when: on_success, which means that if every job before this job passed, this job is allowed to run.
A
But
we
can
change
that
we
can
make
it
always
if
we
want
to
do
that,
we
can
make
it
never,
which
is
an
example
of
how
we
would
make
a
negative
rule
right
in
this
particular
circumstance.
We
don't
want
the
job
to
run
at
all
on
success
just
means
what
we
were
talking
about.
Previous
jobs
have
succeeded,
but
there's
also
a
non-failure
and
that's
a
special
use
case,
but
if
a
job
has
previously
failed,
we
have
the
ability
to
run
a
job
in
that
circumstance
too,
so
that
you
know
if
a
job
fails.
Maybe we've got a job that goes in and does some analysis on why that may have failed. We also have manual. Manual, if used with protected environments and protected branches, gives you the ability to delineate who's allowed to run that job; that job will actually have a play button on it, which we'll talk about here a little bit more. Delayed is, of course, the start_in that we see in the job attributes; it could be minutes, it could be hours.
So when is a job not created in a pipeline? A job is not included in the pipeline if none of the rules for the job evaluate to true; if a rule evaluates to true but has a clause of when: never (again, that's a negative rule, right: if we see this condition happening, never run this job); or, forgive me, if no rules are defined but a when: never clause is specified.
So in the job on the right side of our screen there, you can see the rules that are being evaluated. Now, the important thing to realize about these is that the rules are evaluated in strict order, as you put them into the job, and the first one that matches wins. So if the pipeline source is a merge request event, this particular job is never going to run. If the pipeline source is a schedule (you have the ability to schedule pipelines in your project settings), if it's coming in on a scheduled event, then this job is never going to run. But in any other circumstance it just has this default rule of when: on_success, and notice that there's no if statement in that last rule at all; it's just a simple default of when: on_success. So in other circumstances it's going to run.
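That strict "first match wins" ordering can be sketched like this (the job itself is hypothetical):

```yaml
# Hypothetical job: skipped for merge request and scheduled pipelines,
# run in every other circumstance.
example-job:
  script: echo "running"
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: never        # first match wins: never run in MR pipelines
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: never        # also skip scheduled pipelines
    - when: on_success   # no if clause: the default catch-all rule
```

If the final catch-all rule were omitted, the job would only be created when one of the earlier rules matched, which here would mean never.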
So let's talk a little bit about manual execution. We can create a when: manual job, like you see in this deploy job on the right, and that's going to give us a play button on there. But again, we can control who sees that play button and can click on it. If we're using protected branches and protected environments, we can actually delineate a group of users, we can delineate an individual user, or we can delineate a role. We could say anybody who's...
So in this particular case, if CI_PIPELINE_SOURCE is set to merge_request_event or schedule, the job is executed; it'll execute on any pipeline where those criteria are met.
So let's look at some more rules examples: when for delaying a job. Remember that we had that start_in: 3 hours, and when is delayed. So the when clause is delayed, and then we have to use this start_in keyword to tell GitLab how long to wait before that job can start. And then we've got allow_failure set to true on this job, so this job can fail.
Now, if no rule is met, a.k.a. that final when: on_success wasn't there, what would happen? Remember that the default for when is on_success. So if you don't put any rules in a job at all, it's going to be eligible to run as long as every job that preceded it didn't fail.
Here, I'm going to let one of my peers answer that, but there are options; just know there are options for covering this, and you can define it. This is a really neat feature that's built into GitLab: you have the ability to use jobs that are defined in an external project. As long as the user who's running the pipeline has read rights to that external project, they have the ability to run that pipeline, and you can do this with include statements in your .gitlab-ci.yml.
Well, we didn't cover that one, I'm sorry. Yes, we do. So in this particular case, this is using the changes keyword that you see here. We're evaluating whether or not some variable that's been predefined equals some value, and then we're looking to see if there have been changes in the Dockerfile or in the Docker scripts, if either one of those has changes in it, and you can see the two different forms here.
There's also an exists keyword that we can use. It's very similar to changes, but it's called exists. What we're seeing here for changes is whether the file changed; exists could be looking to make sure that a file exists somewhere, if you needed it to, and I think that it has a syntax that will allow you to look for the existence of a folder.
All right, so let's talk about variable processing, only because you have a lot of places you can define variables. Variables can be defined in jobs; they can be defined in the global section of your pipeline, if you want to take that route; they can be set up in the project itself; they can be inherited from the group the project sits below; and then they can also be specified manually.
If you run a manual pipeline, use the API, or schedule a pipeline, they can be specified in that process, and so it's important to know what the precedence is. By the way, the precedence that you're seeing here gets higher as we go up this list: the things at the very bottom have the lowest precedence and the things at the very top have the highest precedence. So trigger variables, scheduled pipeline variables, and manual pipeline-run
variables have got the highest precedence, which means that somebody could potentially override a variable that you define in a pipeline file if they needed to, for some reason. Project-level variables, or protected variables, have the next precedence level; below that are group-level variables, which can be inherited from the group that the project is underneath; and then there are instance-level variables. If you're self-hosted, it's actually possible to set variables at the instance level,
if you need to do that. And then there are inherited environment variables, which is a subject for another session, but below that are the YAML-defined job-level variables and the YAML-defined global variables.
So now we're going to talk about using this allow_failure keyword, the ability to make sure that a job doesn't stop your pipeline. So, what if our code quality job has been failing? Add that new line to the script to make it fail; we're going to induce that failure in our code quality job.
So what we're doing is setting up a rule that makes sure this job is only going to run if it's a commit directly to the default branch, and it's going to be allowed to fail.
And you get a chance to check your work again there. Let's go ahead and click Commit changes.
So again, this is not going to stop your pipeline.
Okay, let's get back into it; we've got a lot to cover here. And by the way, I apologize, we've been going very, very quickly today, and I hope everybody's able to stay caught up. But in case you're not: again, you're going to get a link to the recording tomorrow, and you'll get a link to the slides, so you can review them again if you need to. So let's get back into it.
So, after you've fixed up your pipeline to run smoothly again, an exec stops by to check on the progress. They want to make sure that they are taking full advantage of all the features GitLab is offering, like security scanning and artifacts, and they ask if you can demo this in a pipeline during the next stand-up.
So, to configure SAST manually: this is actually really super easy to do. You can see this include keyword being used at the bottom. This would be not in the job itself, but out in your global pipeline area. You can put this include statement in, and when you use the keyword template, that means a template that ships with GitLab. Now, it would be easy to think that "template" would also apply to your own projects, but there's different syntax for that.
So you can actually define these pipelines if you want to, and let people just include those files from their own .gitlab-ci.yml files, so that they can consume them in downstream projects and don't have to maintain pipelines if they just don't need to. It's the way that GitLab engineering provides capabilities via templates. So again, that's the template keyword. Templates are not magical; they're open source, and you can actually go look at them.
Open core in this particular case, but they're publicly viewable at gitlab.com, and you can go ahead and read them. You know, this is just using GitLab's CI/CD capability; there's nothing magic behind the curtains there. So: templates are always executed in a CI/CD pipeline through an include statement in the project's .gitlab-ci.yml file, and template jobs are created in your CI/CD pipeline based on their defined stage and any applicable rules they might have. So, the four types of includes, and this is an important one to talk about again.
We
include
template
that
we
see
up
on
the
upper
left
here.
That's
that's
content,
that's
provided
by
gitlab
and
if
you're
self-hosted,
it's
shipped
with
gitlab,
so
this
is
actually
built
into
gitlab
when
we
use
this
template
keyword,
that's
what
we're
talking
about
include
file
which
I
think
of
as
include
project
and
file
as
a
reference
to
a
wire
Mill
file.
It's
located
in
a
project
in
the
same
group
here
is
your
projects.
project is contained in. But again, you would give a path to the project itself that has the pipeline file defined, and then you would delineate the file that you want to be able to load. So this gives you the ability to create independent pipelines, if you need to, for different teams' uses, and lets those teams specify the one that they need in their pipelines. include: local has a use case where, you know, maybe you have several different pipelines defined,
you need to be able to run them all in different circumstances, and you want to make maintenance easier, so you've defined them in independent files; this gives you a way to pull files in from your main pipeline file. And then include: remote is kind of a weird one, but let me explain it real quickly: include: remote is going to go to a publicly visible file somewhere. These are actually available on gitlab.com,
if you need to take that route, but there can't be any authentication; you can't put in a username or password, and it's got to be absolutely publicly viewable.
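The four include forms side by side; the project path, file names, and URL below are hypothetical placeholders:

```yaml
include:
  # Shipped with GitLab (built in on self-hosted instances).
  - template: Security/SAST.gitlab-ci.yml
  # A YAML file in another project the pipeline's user can read.
  - project: my-group/pipeline-library
    file: /templates/build.yml
  # A file in this repository, useful for splitting a large pipeline.
  - local: /ci/deploy.yml
  # A publicly reachable file; no authentication is possible here.
  - remote: https://example.com/public-pipeline.yml
```

All four merge their job definitions into the pipeline as if they were written in the project's own .gitlab-ci.yml.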
So let's talk about some options for customizing job behaviors. Template jobs can be extended using key-value pairs, and variables specified in the job in the local .gitlab-ci.yml file can be used to replace the default behavior; you can see some examples of that right here. We're setting up variables that modify the behavior of the particular scanner in use here, but also note that environment variables can be used to change behaviors based on their value.
So, understanding the included SAST jobs' default behavior: let's look into GitLab's SAST documentation to understand what variables are available for overriding, and note that this is absolutely a link, so when this deck gets distributed, you'll be able to access that. You can also look at the SAST template itself to see how the job is defined, which I do a lot, and I find it very illuminating, because it helps me understand what that job is doing and how I can interact with it.
So, the following are Docker-image-related CI/CD variables. You can see some listed on the left, and then again we're using the variables on the right to modify how the job is going to run.
So, if we're going to configure the SAST language scanner for Node.js: by default, SAST uses pattern matching to decide which language scanner to execute. In choosing pattern matching, it's looking for things in your code, and it's going to make the very best decision that it can based on what it's able to identify.
A
We know our app is Node.js, so let's just tell the job it only needs to use Node.js by excluding all the rest. The SAST template defines what language scanners to avoid using in SAST_EXCLUDED_ANALYZERS, which is a variable that you can put in there, and I can set the exact scanners to exclude by defining this variable in my .gitlab-ci.yml sast job.
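A sketch of that exclusion (the analyzer names listed are examples; check the SAST documentation for the current analyzer list shipped with your GitLab version):

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml

sast:
  variables:
    # Skip analyzers that don't apply to a Node.js app.
    # These names are illustrative; see the SAST docs for the full list.
    SAST_EXCLUDED_ANALYZERS: "brakeman,flawfinder,spotbugs"
```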
A
So the implication here is that we've included the job: we've used the include keyword with the template keyword, and we've included this SAST scanner. But we can now modify the job if we want to by redeclaring the job's name, and in this case, on the lower right, you can see that defined: sast is the job name, and then we can override.
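As a sketch, redeclaring the included job by name lets us override individual keys; here we move it into a security stage (the stage names are whatever your own pipeline defines):

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml

stages:
  - build
  - security
  - test

# Redeclaring the job by name layers our keys over the template's.
sast:
  stage: security   # override the stage the template assigned
```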
A
Okay, we're going to have to go quickly, forgive me. Now let's talk for just a minute about artifact downloads in GitLab's UI. On the pipelines page, you have the ability to download all the artifacts from a pipeline. It's going to give you an archive file that has everything that was left behind as an artifact by any of the jobs in your pipeline. Then there's the jobs page: you're looking at a pipeline and you click on Jobs at the top.
A
Each individual job has the ability to do the same thing, and again you're going to get an archive file; the job might have multiple artifacts defined, so they're all included in that same file. And then, when you go to a specific job's page, so you click on a job and you go to that job's page, you're going to have the same ability with the download button there. But you also have this browse button here, and this is important to note: using the browse button, you can look through the individual artifact files.
A
These count toward your storage limits on gitlab.com, so defining artifacts such that they expire gives us the ability to just get them out of our concern, because going through and individually deleting artifacts can be a real pain.
A
So you can see that we're using the expire_in keyword underneath the artifacts keyword in this particular job in the upper left, and it's going to expire that artifact in an hour. Maybe you want to set that for a day, something along those lines. Or maybe it's something that you really just don't need to worry about keeping at all, and you can expire it in an even shorter period of time. But you can expire it in longer periods of time too, and then, if you find that you've got an artifact you really need to keep, you have the option on the job page to keep it.
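A sketch of that artifacts block (the job name, script, and paths here are just examples):

```yaml
build-job:
  stage: build
  script:
    - npm run build        # example build step
  artifacts:
    paths:
      - dist/              # example artifact path
    expire_in: 1 hour      # also accepts values like "30 minutes" or "2 weeks"
```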
A
This tree icon up here at the very top will let you take a look at all the files that are being pulled into your particular definition, and if we want to, we can actually click on one and it'll show us the file, which is very convenient. It's a great way to be able to go in and actually look at what a file does.
A
So again, we're just declaring sast as a job again, but as soon as we do that, we have the full object that was imported from the yml file, and we can move it to the security stage if we want to. And by the way, all those other jobs that we saw defined when we looked at the merged yml, those two other jobs, they extend this sast job.
A
Bear with me, let's go take a quick look at the build artifact. It's still building, so we may not be able to get to this real quickly. Oh yeah, we're not going to have time to wait for it, but if you go to the job log, you'll have a download option if you want to download it, and you'll also have the option to explore it.
A
In fact, let's just pull them up real quickly. But first let's talk real quickly about transferring projects; sorry, I'm getting a little bit ahead of myself trying to get through things quickly here.
A
If you want to transfer this project, you can transfer it to your personal namespace, or you can transfer it to another namespace or group in GitLab if you want to take that route. But the important thing to realize is that this is an Ultimate group that we've provisioned for you, and you're the owner of it right now. So if you transfer that project out into a space that's Premium, or has no license like your personal namespace, you're going to lose the Ultimate features, or even the Premium features.
A
We have two optional pieces of content. One is security and compliance; we'd really like you to investigate this and take a further look. It follows the same format that we just did in this particular project, but it's going to give you another project to work in. Again, the instructions for what to do with it are in the project itself.
A
So once you open that project, you can fork it to this provisioned subgroup of yours, and you can also follow the instructions that are in the issues on that project. And the other one is for complex and multiple workflows. If you've got a scenario where you need to be able to create multiple independent workflows for your pipelines, this is a good project just to review and get a sense of.
A
You know, how that works, but it also gives you a template for moving forward with, if you want to use it to define multiple independent pipelines very, very quickly. And the way it does it is by dispensing with job logic altogether: the jobs just do nothing but spit out their variables, so that you can see them and know what you're working with. But that's just in case you need it for some reason.
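A minimal sketch of that idea, assuming rules keyed on a pipeline variable select which independent workflow runs, and the jobs only echo their variables (the PIPELINE_KIND variable and job names are made up for illustration):

```yaml
# Two independent "pipelines" in one file, selected by PIPELINE_KIND.
# PIPELINE_KIND, its values, and the job names are hypothetical examples.
release-job:
  rules:
    - if: '$PIPELINE_KIND == "release"'
  script:
    - echo "PIPELINE_KIND=$PIPELINE_KIND"   # jobs just print their variables

nightly-job:
  rules:
    - if: '$PIPELINE_KIND == "nightly"'
  script:
    - echo "PIPELINE_KIND=$PIPELINE_KIND"
```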
A
So I brought us in about two minutes late, and I apologize for that. We're discussing potentially extending these workshops to be a little bit longer, but we thank you for taking the time to join us today.
A
And we look forward to your interaction, or the opportunities we get to interact with your teams going forward. Thank you, everybody.