Description
Watch this webinar recording to:
1. Gain a comprehensive understanding of CI pipelines in GitLab.
2. Learn how to overcome common challenges with effective strategies.
3. Discover best practices for configuring and optimizing CI pipeline workflows.
4. Explore advanced features and integrations in GitLab.
5. Review the engaging Q&A that was hosted live with our expert panel.
All right, let's go ahead and jump in. Thank you, everyone, for joining us today. We're excited to have you with us. Before we jump into the content, I just wanted to go through a couple of housekeeping items. First off, this webinar is being recorded, so you can look for that recording, as well as the deck, to be sent to your email in the next day or so. If you have any questions that come up throughout the session, feel free to put those in the Q&A in Zoom.
Thank you, Taylor. I'm really excited to get a chance to share this content with everyone today. My name is Steve Graham, and I'm a customer success engineer on GitLab's scale team. I've been with GitLab for close to four years now, and to be real frank, GitLab CI/CD was one of the primary motivations for me joining GitLab. I wanted the opportunity to really deep dive into everything tooling-oriented from GitLab, and this role gave me that opportunity. We're going to be covering GitLab CI concepts with a very large list of advanced topics today, just for awareness.
That means we're going to have to go at a fairly good pace, so please do be looking out for the email from Taylor following the session. It will have a link to the recording for you to watch, and you'll also be able to download this particular presentation — a set of slides which is just absolutely filled with links. So our goal is to help you understand a wide breadth of the potential variations that you may want to use in pipelines. So let's go ahead and get into it.
...potentially including security scanning and compliance enforcement. Now, one of the things we're thinking about doing with this one: most of these hands-on workshops are provisioned for hours, but we're actually thinking about making this one available for a day or two after the workshop, so that you have a little time to play and experiment and try different things you might want to try. You're also going to get an opportunity to engage with a GitLab customer success engineer — myself and my peers that are on the call today. In other words, we can all be consultants to advise you on how to get started, options you may want to consider, and how your pipelines can be governed by settings in your projects. We can also help answer questions that may come up after today's session.
So this is our agenda today. We're going to be covering CI/CD basics, which is just a quick review of what pipelines are, what they look like, and the different elements involved in each. We're going to do a quick overview of the pipeline files, and then we're going to go through pipeline structure and different ways to work with it. Now, there are a lot of different variations for pipeline structure.
You know, using includes to pull separate files into your main pipeline file, and then using extends to extend jobs. For teams that need to get a quick leg up on CI/CD, we do have Professional Services offerings for this, so just be aware that that is one of the things you can do. I'm not going to spend an incredible amount of time on this slide, but just know that we do have PS engagements for teams that need to get moving quickly.
Let's start talking about GitLab CI/CD basics. This is just a review of some of the concepts and what they look like in GitLab. Let's start with GitLab Flow. GitLab Flow is the recommended workflow for branching and merging things back into a protected branch — your default branch: main, master, whatever that might be for you and your team. Variations on this are Git flow and GitHub flow, all of which, by the way, are supported within GitLab.
Once an issue has been assigned to somebody to work on, they create a merge request and then immediately start iterating on that merge request, or the branch, right? So when they create that merge request in GitLab, it gives them the option to create a branch automatically, and it'll give it a very random name — one that will make sense to you when you're looking at it, but random enough that it doesn't conflict with any other branches in the repository. Normally, when you're done working on a branch, you'd create a merge request to bring it back in. Well, in this particular case, we create the merge request in advance. The branch is there, and then we start to iterate on the code, and during the iteration of that code we're going to have these pipelines running in this section right here.
They're going to automate the build and the tests, scan for vulnerabilities, and we're going to collaborate and review. It may be necessary at that point for us to go back, make additional changes to the code, and then rerun those same pipelines, so that we can go through that same process again. When we feel like we've got a satisfactory result, we might want to launch a review app so that people can go in and do user acceptance.
So this is kind of a picture of what GitLab CI/CD is conceptually. We've got the code to commit and any related code that goes along with it. We've got the pipeline in there that's going to do the build, the unit tests, the integration tests, and scans for vulnerabilities — things along those lines. We might want to create a package there, if we've got something that's released as a package, and then we can launch a review app so that we can go look at it. This is a specialized...
We might have build, test, and deploy, and then jobs — or scripts — that perform individual tasks. This can be `mvn test`, `mvn install`, things along those lines, and they might include things like deploying apps. Where we deploy, this could be to an environment: a job can include an environment as a keyword inside the job itself.
We might deploy to test, review, staging, canary, prod — just whatever makes sense for us. Now, I just want to make everybody aware that this is a list of the security scanners that are available in a Premium subscription. So if you have Premium, these are some scans that you should definitely be leaning into and enabling in your GitLab CI/CD.
This is a longer list, of course, of what's available in Ultimate. It essentially comprises a complete list of the security scans that you can use against your code. Also notice that each one of these is a link you can follow to our documentation, so just remember that when you download the slides. So let's talk real quickly about GitLab CI/CD — the pipeline file itself.
B
Gitlab's
robust
support
for
cicd
over
an
extended
period
of
time,
has
resulted
in
very,
very
mature
pipelines
that
are
well
equipped
well
equipped
to
handle
an
extraordinarily
diverse
range
of
scenarios.
Getting
started
with
pipelines
is
easy,
they're
effort
they
could
be
effortlessly,
expanded
and
customized
to
meet
your
specific
needs
and
get
lab
offers
the
flexibility
to
Define
multiple
pipelines
per
project
that
can
be
mutually
exclusive.
If
you
choose
enabling
you
to
address
any
scenario
that
requires
an
independent
pipeline
in
your
project.
B
So
if
we
look
at
a
gitlab,
get
you
know,
dot,
gitlab.ci.ymo
file
or
whatever
you
choose
to
name
that
file
in
your
project.
The
first
thing
it's
going
to
start
with
is
the
section
that
we
call
globals,
you
know
gitlab,
we
have
all
files
are
declared
if
you're
not
taking
on
a
new
programming
language,
to
learn
these.
It's
just
it's
like
filling
in
an
ini
file
that
has
arrays
and
everything
else
attached
to
it
with
keywords.
It
uses
the
global
keywords
in
the
global
section
image.
B
Now
that
could
also
be
put
into
the
default
section
here.
If
we
chose
to
so
globals
are
going
to
Define
defaults
for
jobs,
which
is
what
this
default
section
is
here.
I'm
sorry
went
too
far.
It
allows
us
to
set
variables
and
you
can
see
one
being
set
there.
We
can
declare
stages
and
we
really
should
if
we
want
to
have
control
over
what
those
stages
are.
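Put together, the globals and defaults described above might look like this minimal sketch (the image, variable name, and stage names are illustrative, not taken from the slides):

```yaml
image: alpine:latest          # global keyword: default image for every job

default:                      # defaults for jobs can also live here
  before_script:
    - echo "runs before every job's own script"

variables:                    # pipeline-level variables
  DEPLOY_TARGET: "staging"

stages:                       # declare stages to control their names and order
  - build
  - test
  - deploy
```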
So this is an overview of GitLab jobs, right — jobs in a pipeline. Jobs can be hidden, and that just means they start with a period. Hidden jobs are used as templates for downstream jobs; they're never instantiated as a job themselves if they're hidden. The script section on a job — this section right here — is absolutely required, so you have to have that, or your YAML won't be valid.
B
Now
I'm
not
going
to
spend
a
lot
of
time
on
this,
because
we've
got
a
slide
that
covers
this,
but
there's
a
lot
of
different
ways
that
you
can
include
files
from
your
own
project.
You
know
if
it's,
if
that
local
up
there
is
a
keyword
that
means
you
know
to
look
for
this
particular
this
particular
name
to
file.
In
the
repository
that
you're
running
on,
you
can
use
template
files
which
are
shipped
with
Git
lab
and
come
in
a
whole
wide
variety
of
capabilities.
B
Like
all
of
the
scanning
capabilities
that
we
can
do,
you
can
include
files
from
other
repositories
in
gitlab.
You
can
even
include
files
from
HTTP
or
https
URLs
if
they
don't
require
authentication,
that's
so
important
because
there's
no
way
to
authenticate-
and
these
can
use
rules,
as
you
can
see,
I'm
using
rules
on
this
particular
one
here
to
today.
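The include variants just described can be sketched like this (the file paths and project name are hypothetical; the SAST template path is one GitLab ships):

```yaml
include:
  - local: 'ci/build.yml'                        # file in the same repository
  - template: 'Jobs/SAST.gitlab-ci.yml'          # template shipped with GitLab
  - project: 'my-group/ci-templates'             # file from another GitLab project
    file: '/templates/deploy.yml'
  - remote: 'https://example.com/shared-ci.yml'  # public URL; no auth possible
    rules:                                       # includes can carry rules too
      - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
```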
For your pipeline to be able to get all the way to the end, we have dynamic child pipelines, which give you the ability, if you need to, to create some decision criteria in your job itself and, based on what you're seeing in the code, create custom YAML that runs jobs according to whatever rule set you want to apply inside your job. And then we have multi-project pipelines, which are a very popular feature with large monorepos that have been busted up into their components.
B
So
you
can
actually
spawn
a
pipeline
in
another
project
if
you
need
to
now
basic
Pipelines.
You
know
these
are
the
simplest
pipeline
in
git,
lab
jobs
run
independently,
sometimes
on
different
Runners.
All
jobs
in
the
stage
must
complace
successfully
before
proceeding
to
the
next
stage,
and
you
can
control
pipelines
with
pipeline
auctions.
So it doesn't give you the error message that would be there if you had `allow_failure` set to false. It just gives you a way to see the test results without having to fail the pipeline, for whatever reason you need that. And then notice that the deploy job has a play button on it.
B
This
is
using
a
when
Clause
of
manual
and
what
that
means
is
that
you
can
control
win
that
pipe
when
that
particular
job
runs
by
going
in
and
just
clicking
play
on
it
and
if
you're
using
protective
branches,
you
can
also
determine
who's
allowed
to
click
play
on
it.
The
same
is
true
protected
environment.
So,
just
be
aware,
those
things
are
possible
to
do
and
then
some
of
the
other
win
options
that
are
popular
is
delayed.
We
can
use
a
a
when
delayed
and
we
can
launch
it
in
15
minutes.
B
We
can
launch
it
in
three
hours
whenever
we
choose
to
when
on
failure
is
used
for
conditions
where
something's
gone
wrong
with
the
pipeline.
Some
things
failed
now
it's
time
to
go
back
in
and
need
to
do.
Some
cleanup
remove
some
things
from
different
places
in
git
lab
that
are
appropriate.
You
know
just
whatever
you
need
to
do
when
a
pipeline
fails,
it
might
even
be
just
notify
people
that
a
pipeline
failed
and
then,
when
always,
is
going
to
run
unconditionally,
no
matter
what,
whenever
it
comes
up.
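The `when` options discussed above can be sketched like this (the script names are placeholders):

```yaml
deploy:
  stage: deploy
  script: ./deploy.sh
  when: manual            # waits for someone to click the play button

delayed-rollout:
  stage: deploy
  script: ./rollout.sh
  when: delayed
  start_in: 15 minutes    # or "3 hours" — whatever increment you choose

cleanup:
  stage: .post
  script: ./cleanup.sh
  when: on_failure        # only runs if an earlier job failed

notify:
  stage: .post
  script: ./notify.sh
  when: always            # runs unconditionally
```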
So next, let's talk about directed acyclic graphs, and I think most of you are aware of what this is, but this is just a way for us to create dependencies and relationships between jobs. It defines job dependencies to optimize your pipeline flow. Jobs still run independently, sometimes on different runners, but dependent jobs can proceed to the next stage without waiting for all the jobs in the previous stage to finish. So if you've got jobs in test, and some test job finishes, and you've got jobs listed in the next stage — as soon as the job finishes that is a requirement, or dependency, for the job you've got in that next stage, that next job can fire up without waiting for all the rest of the previous jobs. Now, here's a real good example of this.
B
You
know
in
this
particular
project,
there's
a
build
for
Android
to
build
for
iOS,
but
no
reason
for
those
two
tests
to
have
to
wait
for
both.
To
finish,
and
in
this
particular
case,
you
can
see
that
tests
for
Android
is
dependent
upon
build
for
Android
and
as
soon
as
build
for
Android
is
done.
It
doesn't
wait
for
build
iOS
to
finish.
B
It
just
goes
ahead
and
fires
up,
so
the
job
still
runs
in
stages,
but
the
needs
keyword
overrides
the
the
need
for
the
previous
stage
to
complete
entirely
it's
eligible
and
run
as
soon
as
the
previous
job
that
it's
dependent
upon
is
is
complete.
Now
the
other
thing
that's
kind
of
cool
about
this.
Is
you
get
it
directed
a
SQL
graph?
And
if
you
go
to
the
C
CI
CD
pipelines
page-
and
you
pick
a
pipeline
to
to
examine
so
you
can
see
it's
jobs
list
in
its
stages.
B
One
of
the
options
at
the
very
top
is
you've
got
a
needs
Tab
and
if
you
go
to
the
needs,
tab
you're
going
to
get
this
directed
to
cyclic
graph,
and
this
can
show
the
relationships
between
all
jobs.
So
this
is
a
this
is
Handy
for
making
sure
that
you've
defined
all
the
appropriate
dependencies
that
you
need
to.
B
So
you
can
rent
child
pipelines
independent
from
each
other.
It
separates
the
entire
pipeline
configuration
into
multiple
files
to
keep
things.
You
know
simple
for
you,
which
is
to
say
that
you
can
have
independently
defined
dot,
yml
pipeline
files
that
are
just
for
these
child
pipelines
and
don't
ever
get
included
in
the
main
gitlab.
You
know
dot
gitlab.ci.yml
file
and
it's
useful
to
Branch
out
things
like
long-running
tasks
and
separate
Pipelines.
B
Now,
if
we
look
at
a
pipeline
that
has
this
parent
child
on
it,
what
we're
going
to
see
is
we've
got
these
Downstream
pipelines
that
are
happening
in
clicking
on
any
one
of
these
will
expose.
You
know
either
the
status
of
the
downstream
pipeline
or
the
independent
jobs,
depending
on
how
you
define
it.
B
We
can
see
that
we,
this
build
one
job
right
here,
is
going
to
trigger
an
independent
pipeline,
in
this
particular
case,
we're
using
only
instead
of
rules
and
it's
triggered
on
changes.
So
it's
going
to
look
for
changes
in
this
particular
path
right
here
and
if
it
finds
any
of
them
in
the
commit
that
we're
running
on
it's
going
to
go
ahead
and
launch
this
independent
pipeline,
and
then
this
trigger
is
the
keyword
that
that
lets
you
launch
a
downstream
pipeline,
and,
but
you
can
see
here
for
include-
is
it's
going
to
include
this.
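A parent-child trigger job along the lines described might look like this sketch (the path and file names are hypothetical):

```yaml
build-one:
  stage: build
  only:
    changes:
      - child-app/**/*              # fire only when files under this path change
  trigger:
    include: child-app/.gitlab-ci.yml  # child pipeline defined in its own file
    strategy: depend                   # upstream waits on (and mirrors) the child
```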
B
So,
if
you
use
depend
that's
the
case,
if
you
don't
use
depend
that
pipeline
Downstream
is
completely
independent
and
whether
it
is
successful
in
its
completion
or
not
has
no
impact
on
the
on
the
Upstream
pipeline
now.
The
other
thing
it
does
is
that
strategy
depend
exposes
to
jobs
from
the
downstream
pipeline
in
the
Upstream
pipeline,
so
that
when
you
click
on
that
trigger
job
you'll
be
able
to
see
the
jobs
in
the
downstream
pipeline.
B
So
Dynamic
pipelines
are
a
different
concept
and
the
general
thought
is,
you
know,
you're,
going
to
run
some
tests
or
evaluate
something
that's
happening
in
the
project
in
the
context
of
one
of
your
jobs
and
you're,
going
to
use
that
to
create
an
independent
pipeline.
It's
a
dynamic
child
pipeline
and
it's
going
to
happen
in
a
later
stage
in
an
in
a
in
a
separate
job.
B
Cynamic
pipelines
look
like
this,
and
you
can
see
that
this
setup
job
right
here
is
set
to
leave
an
artifact
and
the
path
is
generated
config.yml
and
then
this
test
stage
is
going
to
declare
that
as
an
artifact
that
it
needs
to
be
able
to
use
to
to
run,
and
this
is
the
trigger
of
a
bridge
job.
So
the
setup
creates
the
the
generated
yaml
file.
Downstream
job
is
to
trigger
job.
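A dynamic child pipeline along those lines can be sketched like this (the generator script is hypothetical):

```yaml
setup:
  stage: build
  script:
    - ./generate-pipeline.sh > generated-config.yml  # any decision logic you like
  artifacts:
    paths:
      - generated-config.yml

run-generated:
  stage: test
  trigger:
    include:
      - artifact: generated-config.yml  # use the generated YAML as the child pipeline
        job: setup                      # the job that produced the artifact
```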
So an important note about this: any user who can trigger a pipeline will need the right role to trigger a pipeline in the downstream project — they've got to be at least a Developer in your project to run a pipeline in that downstream project. And the same thing is true there: they're going to have to be at least a Developer.
B
You
can
specify
a
branch
in
that
Downstream
pipeline.
If
you
need
to,
and
you
can
pass
variables
to
the
downstream
pipeline
and
if
the
downstream
pipeline
fails,
it
will
not
fail
the
Upstream
pipeline
again
that
relies
upon
that
depends,
keyword
and
then
useful,
when
building
employing
large
applications
are
made
of
different
components
and
where
I
see
it.
A
lot
is
in
projects
that
used
to
be
large
model
repos
that
have
been
broken
out
into
the
individual
components.
You can specify the project and the branch right here — see, under the trigger job, we're specifying the project. The path to the project is from the root of your local GitLab instance; if that's gitlab.com, it's going to be some root-level namespace followed by a project. And then you can specify the branch that you want to use in that downstream project, and then again, you can use `strategy: depend` — remember that exposes the jobs in the upstream pipeline.
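A multi-project trigger along those lines might look like this sketch (the project path and branch are placeholders):

```yaml
trigger-downstream:
  stage: deploy
  trigger:
    project: my-group/other-project  # path from the instance root: namespace/project
    branch: main                     # optional; defaults to that project's default branch
    strategy: depend                 # optional; ties the upstream status to the child
```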
B
If
you
need
to
be
able
to
see
that,
but
it
also
creates
a
dependency
between
the
two
so
that
I'm
sorry,
so
that
the
Upstream
pipeline
is
it
considered
successful
unless
a
downstream
pipeline
is
able
to
pass
now
the
job
with
the
triggers.
The
job
with
the
trigger
keyword
is
often
referred
to
as
a
bridge
job,
but
doesn't
have
to
have
this
name.
You
can
have
any
name
you
want
it
to
have,
and
then
the
downstream
pipeline
can
be
anything
you
choose.
So
it's
because
it's
not
running
the
that
Downstream
Pipelines.
B
Specified.Kitlab.Fci.Orml
it
could,
if
you
wanted
to,
but
the
presumption
here
is
that
if
that
Downstream
job
is
going
to
have
some
special
pipeline
definition,
that's
just
for
this
kind
of
use
and
you
can
use
any
any
pipeline
name
that
you
want
to
use
any
pipeline
file
that
you
want
to
use
all
right.
So,
let's
talk
a
little
bit
about
variables.
B
Variables
become
very
important
when
you
start
to
use
rules,
get
lab
predefines
an
extraordinarily
long
list
of
variables
that
you
can
use
to
define
your
rules
with,
learn
things
about
your
pipeline
and
so,
let's
dive
into
what
those
are
and
how
you
can
set
them
up.
You
can
set
your
own
variables
in
gitlab.
So this is under Settings > CI/CD, and then you expand Variables, and once you expand it, you'll see the ability to add a key name and a value right there. Now, there are some other options available there too: you'll be able to hide that, or mask it, so that people can't echo it out, if you don't want that to happen in your pipelines. Now, realize that GitLab has a regex mask that masked values have to match.
B
Variables
can
be
entered
in
the
pipeline
run
page.
So
if
you
go
to
CI
CD
the
default
page,
it's
the
pipelines
page
with
the
list
of
Pipelines
and
at
the
very
top
you'll
see
this
run.
Pipelines
button
run
pipeline
button
and
it's
going
to
going
to
expose
a
form
where
you
can
create
new
variables
right
there.
B
You
can
also
Define
variables
in
your
CI
configuration,
so
this
is
a
global
variable
here.
This
is
outside
of
a
job
and
it
could
just
be
this
keyword
variables
and
you
can
Define
the
names,
the
key
names
and
the
values
you
want
them
to
be,
but
you
can
also
Define
them
inside
of
a
build
a
job
I'm,
sorry
inside
of
a
job.
So
this
is
a
job
called
build
and
it's
defining
variables
and
it's
defining
this
environment
staging.
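Global and job-level variables, as described, can be sketched like this (names and values are illustrative):

```yaml
variables:                  # global: available to every job in the pipeline
  GLOBAL_FLAG: "on"

build:
  stage: build
  variables:                # job-level: applies to this job only
    DEPLOY_TARGET: "staging"
  environment: staging
  script:
    - echo "deploying to $DEPLOY_TARGET"
```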
Jobs can also inherit variables. Now, the idea behind this is that you run a job, you've made some decisions during the course of that job, and you want a downstream job to know about that. You can leave a dotenv file — it can be any name you want; it's just got to be a `.env` file declared as an artifact — and artifacts, by default, get downloaded to all downstream jobs from the point that they're created.
B
So
every
single
job
that
is
Downstream
of
this
job
will
be
able
to
grab
this
build.env
file.
That
has
this
single
value
defined
in
it
single
variable
defined
in
it,
and
you
know
if
we
look
at
this
and
we
Define
it
here
in
this
next
job-
that's
going
to
use
this
build.env
file.
Remember
it's
going
to
pass
to
that
automatically.
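The dotenv handoff described above can be sketched like this (the variable name and value are illustrative):

```yaml
build:
  stage: build
  script:
    - echo "BUILD_VERSION=1.2.3" >> build.env  # decided at runtime inside the job
  artifacts:
    reports:
      dotenv: build.env    # variables in this file become job variables downstream

deploy:
  stage: deploy
  script:
    - echo "deploying $BUILD_VERSION"  # inherited from build's dotenv report
```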
So these artifact dotenv files take precedence, but this is really important: they're not accessible to rules or anything outside the script section of a job. As soon as the script section fires up and evaluates, then at that point you have access to the variables, but they're not going to be accessible to a rule for the downstream job.
B
Now,
if
we
talk
about
variable
precedence,
the
general
order
here
is
from
bottom
to
top
top
winds,
so
the
values
from
git
lab,
which
is
predefined,
variable
stuff
that
we
set
up,
which,
by
the
way,
I
probably
wouldn't
want
to
change.
If
I
review,
because
it's
important
that
you
know
what
those
were,
if
I
needed
variables
to
hold
things,
I
would
tend
to
use
ones
that
I
had
to
find
myself.
Then you have values from previous jobs and previous stages; then values configured in the project or group, for instance; and then, if you're running this manually in the UI from the pipelines page with that Run pipeline button, or via an API request, or a scheduled job — every single one of those allows you to set the variables that you want for that specific run, and those take the highest precedence. And that applies to both the CI engine and the main GitLab server, where rules are evaluated; it also applies to CI jobs.
So let's take a little bit of time and dive into rules. Rules are easily one of the most valuable parts of building GitLab pipelines, and they're relatively straightforward and easy to understand — which is not to say that you won't stub your toes once in a while, or have to hop over something that gets in your way, but they're relatively easy to use. It's important to understand first when pipelines run in GitLab: any new commit, any new branch you create, any new tag you create, and the merge requests that you create. And, by the way, a merge request comes from a source branch, so understand that when you open a merge request, if you have to iterate and make changes to the code, you're going to get new pipelines every single time you put a new commit into that source branch, in the context of that merge request. You can also launch them manually from the pipelines page using the Run pipeline button.
B
On
rules
you
know,
we've
got
a
job
name
here
which,
by
the
way
is
anything
that's
not
a
defined
keyword
in
git
lab.
That's
not
a
defined
Global
keyword
in
gitlab.
It's
an
arbitrary
name.
You
just
set
yourself
whatever
you
need
it
to
be,
and
then
there's
a
keyword
under
jobs
called
rules
and
if
we
use
rules
rules
is
an
array,
and
each
array
element
starts
with
this
Dash
right
here.
Then this script is going to run. Now, the thing to realize about this particular job and this rule is that this rule has an implied `when: on_success`. If you were to type `when:` and some value in the same array element, right below the `if`, you could override the default, and the default is `when: on_success`, which means everything before this job was successful or was allowed to fail. And then this job is only going to run from the web form.
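A rule like the one described — run only when launched from the web form — might look like this sketch:

```yaml
manual-from-ui:
  script: ./run.sh
  rules:
    - if: '$CI_PIPELINE_SOURCE == "web"'  # only when started via the Run pipeline button
      # no `when:` here, so the implied `when: on_success` applies
```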
So let's talk real quickly about some of the various elements of a rule. You have clauses: `if` and `changes` — `changes` is going to look for changes on a path in the repository itself, and if it finds them in the context of the commit or the merge request, then that rule is true. And then `exists` is very similar, except that it's not looking for changes.
Note that these two right here with the tilde are used for regular expressions, and regular expressions in GitLab work exactly like you'd expect them to, given that we use slash delimiters. Everything works exactly like you would expect and like I've been accustomed to in every single past role before coming to GitLab when using regular expressions. And then the two at the bottom are just for joining together a couple of different tests: with the `&&` you can do AND, and the `||` just lets you do exactly what you think it does — this condition or that condition kind of scenario. And then the results can be set by `when`, which has all of its options listed on the right there: always, never, on_success, on_failure, manual, or delayed. Those are the various forms that `when` takes.
B
Allow
failure
can
be
true
or
false
and
again,
if
it's
true
that
job's
allowed
to
fail
and
it
won't
stop
the
pipeline
and
then
results
starting
is
just.
We
want
to
start
15
minutes
from
the
point
that
the
job
is
eligible
to
run
right.
So
this
is
not
from
when
the
pipeline
starts,
but
when
that
job
is
eligible
to
learn
start
in
15
minutes
starting
three
hours,
you
can
pick
whatever
increment
makes
sense
for
you.
If it's a push, then it's going to run this job as well. Now, realize that you can do `if` CI_PIPELINE_SOURCE equals push, do an AND, and then test for a merge request event right after that — and then you would only get a push on a merge request event. So just keep thinking about that. And then in this rules example number two, this pipeline is not going to run if it was triggered from a merge request event. So notice that this has got the `when` clause included in the context of each rule that's been defined here, but it also has a standalone `when` here at the bottom that has no rule at all. So in that particular case, since there's no rule, if the two rules above it don't match, then that is going to match unconditionally.
B
So
first
rule
is
FCI.
Prepping
source
is
a
merge
request
event.
We
don't
want
this
particular
job
to
run.
So
this
is
a
negative
rule
here
and
then
the
next
one
is
if
CI,
pup
and
Source
equals
schedule.
We
don't
want
this
job
to
run
now.
Remember
the
first
match
wins
and
as
soon
as
it
finds
a
match,
it
stops
processing
rules,
but
if
it
doesn't
get,
if
neither
one
of
those
is
is
successful
in
matching
on
the
event,
then
we're
going
to
default
to
win
on
success
and
go
and
unconditionally
there.
So this job in the pipeline is going to be a manual job. We can set it up — in fact, it is set up — with an `if` testing one of the predefined variables against some string value. Maybe you're testing that the branch name is main, or master, or something along those lines. And then, if there are changes in the Dockerfile, or changes in the scripts for the Dockerfile, we're going to have a manual job — and notice that there's nothing else in this rule block.
Okay, a similar scenario here: CI_COMMIT_BRANCH equals main, which is what I was just talking about a minute ago. In this particular case, if this matches, this job is going to be `when: delayed`; it's going to have a start_in of three hours and an allow_failure of true. So this is going to wait three hours to run from the point that this job is eligible to run, and then it's allowed to fail, so it's not going to impact the pipeline status regardless of outcome.
Workflow rules are boolean decisions that you can make at the very beginning of this file to decide whether to run a pipeline at all, and even if you've got rules that match downstream in your jobs, you can decide that you don't want to run a pipeline under certain conditions — or you can decide to run it only under specific conditions. It's just up to you. In this particular one, it's not going to run — and I want you to notice that it's using regular expressions here in this operator — if there's a "-wip" at the very end of the commit message. It's going to be the last part of the commit message — it's actually a space, dash, WIP — and if it finds that, then it's never going to run a pipeline.
B
Remember
that
whenever
means
it's
a
negative
match
and
if
it's
a
CI
commit
tag
right,
so
if
they,
if
somebody
just
created
a
tag,
we're
not
going
to
run
for
that
either.
In
all
other
conditions,
our
default
is
going
to
be
unconditionally,
always
no
rule
just
unconditionally.
If
those
first
two
rules
don't
match
now,
I'm
always
going
to
run
no
matter.
What.
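Those workflow rules can be sketched like this (the "-wip" convention is the one from the example):

```yaml
workflow:
  rules:
    - if: '$CI_COMMIT_MESSAGE =~ /-wip$/'  # commit message ends in "-wip"
      when: never                          # ...so run no pipeline at all
    - if: '$CI_COMMIT_TAG'
      when: never                          # no pipeline for tags either
    - when: always                         # everything else runs
```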
B
Now
these
are
this
is
an
example
of
variables
that
are
depending
on
something
the
variable.
Docker
file
is
going
to
be
set
differently,
based
on
the
rules
and,
as
you
can
see
here,
we're
defining
variables
in
the
context
of
an
if
statement,
one
of
the
things
that
you're
allowed
to
Define
for
a
role
this
is
inside
the
workflow
rules
you
see,
I
commit
ref
name,
has
the
dash
WIP
at
the
very
end,
then
the
variable
is
to
use
this
test.doctor
file
as
your
Docker
file.
B
But
if
the
CI
commit
ref
name
equals
the
CI
default,
Branch,
presumably
main
Master
whatever
that
might
be,
but
this
is
a
cool
way
to
not
have
to
put
code
in
the
name
of
your
branch
that
you're,
using
as
your
default
Branch
you're
just
testing
the
ref
name
against
the
CI
default,
Branch
name,
which
is
in
the
project
settings
in
that
case,
you're,
going
to
use
main.docker
file
and
notice
that
neither
one
of
these
has
a
win
defined.
So
the
win
is
on
success
for
both
of
these
by
default.
B
So
it's
going
to
be
set
differently.
The
dark
files
can
be
set
differently
based
on
the
rules,
and
this
pipeline
is
always
going
to
run.
There's
nothing
in
here.
That's
a
negative
rule.
There's
nothing
in
here.
Stopping
the
pipeline
from
running
the
only
thing
that's
varying
is
that
darker
file
variable
based
on
you
know
whether
or
not
that
Dash
WIP
is
there
or
whether
or
not
it's
coming
into
the
default
branch.
B
All
right
now,
let's
talk
about
artifacts
a
little
bit.
Gitlab
jobs
have
a
have
the
ability
to
generate
these
artifacts,
as
you
would
expect
in
Ci
or
CD,
and
they
have
the
ability
to
generate
artifacts
and
as
long
as
they
declare
them,
gitlab's
going
to
upload
them
back
up
to
gitlab
automatically
for
them
and
then
again
by
default.
Those
are
going
to
pass
to
every
subsequent
job
after
the
job
that
defines
the
the
artifacts.
So here are some examples. This allows for saving of build artifacts and/or the output of a job, and they're available for use by any subsequent job. The default is that they're automatically downloaded to subsequent jobs — and, by the way, that's a tip for potentially increasing the efficiency of your pipelines: there are ways to tell it not to download these files if a job is not dependent on any of these artifacts. You can pull in any combination of paths or files that you need to be uploaded back to GitLab.
B
You
can
use
exclude
to
limit
what's
needed
and
then
in
Downstream
jobs
you
can
use
needs
colon
artifacts,
which
is
the
sub
property
of
the
needs,
a
keyword
or
you
can
use
dependencies
in
either
one
of
those
cases.
If
you
put
it
in
an
empty
array,
nothing
is
going
to
get
downloaded
to
those
subsequent
jobs
at
all,
but
you
can
also
delimit
what
you
want
to
be
downloaded
there
as
well.
You
can
use
when
to
determine
if
what
art
went,
you
know
if
artifacts
will
be
stored
or
not
in
the.
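The artifact controls just listed can be sketched like this (the paths are placeholders):

```yaml
build:
  stage: build
  script: ./build.sh
  artifacts:
    paths:
      - dist/               # any combination of paths or files
    exclude:
      - dist/**/*.map       # trim what gets uploaded
    when: on_success        # or on_failure / always

test:
  stage: test
  script: ./test.sh
  dependencies: []          # empty array: download no artifacts into this job
```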
That's the best thing to do on that channel. So the question is: you mentioned multiple mutually exclusive pipelines in a single project — what's the best practice to achieve this, and is there a good way to label or distinguish the different pipeline types on the pipelines page in the web UI? Yes, there absolutely is. The way to think about this — I'll tell you what I'd like to do; let me just share something with you.
B
This is where I'm testing mutually exclusive pipelines; I just keep this open in case I have to come here for any reason. What you can see is that these two files up here at the top don't do anything that creates a pipeline at all. They define the jobs, and they define the rules the jobs use.
B
They have a default rule that won't run at all, but when they get included in any of these subsequent pipeline files, they get rules automatically that will run, based on these same types of rules that you're seeing here. This is what I personally like to do. And one thing that you can do that's really valuable here: this particular project has one pipeline you can run where you can manually select all the jobs you want to be in that pipeline by pre-populating a specific variable. It's called job selections, and you can put in a space- or comma-delimited list of all the jobs from the all-jobs .yml file that you want to be included in that pipeline. And this way I can put the description up there and tell people what it is.
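That pattern might be sketched like this; the JOB_SELECTIONS variable name and the job name are assumptions based on the description, not the demo project itself:

```yaml
variables:
  JOB_SELECTIONS:
    value: ""
    description: "Space- or comma-delimited list of jobs to include in this run"

unit-tests:
  script: make test
  rules:
    # run only if this job's name appears in the selection list
    - if: '$JOB_SELECTIONS =~ /unit-tests/'
```

Because the variable carries a description, anyone starting a manual pipeline from the web UI sees that explanation next to the pre-populated field.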
B
Sorry, I got really sidetracked there; you got me going on one of my personal pet peeves. All right. So if we want to download artifacts, we can do it very easily from the artifact download capabilities in the UI. If we go to the Pipelines page, which is the default CI/CD page, every single pipeline that's shown there is going to have the ability to download all the artifacts for that pipeline.
B
All right, so artifact administration. For a self-managed GitLab instance, job artifacts can be stored in local block storage, or they can be put in object storage; it's really up to you. A lot of people like to use object storage because it doesn't impact the capacity levels of their local block storage, and they can change that storage in S3, for example, any time they want to; they can expand it.
B
Artifact expiration can be set and configured at the instance level, but it can also be set in the jobs themselves. So just be aware that we can set an expire_in property for artifacts; we can make that three months, for example. That's really good form if you're running on gitlab.com, to help keep things under control.
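A per-job sketch of that setting:

```yaml
build:
  script: make build
  artifacts:
    paths:
      - dist/
    expire_in: 3 months   # artifacts are deleted automatically after this period
```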
B
Now, the last point here is that artifacts fall under GitLab's access control, which means GitLab's authentication is running in front of artifacts. If you make artifacts public for one reason or another, GitLab's not going to try to get in front of them with any kind of authentication. But if you don't, if they're private or internal, or they're part of a private project or an internal project, the authentication is going to be enforced in front.
B
So these are not just static files sitting behind nginx. We have a lot of different places to store artifacts: we have a container registry, we have several package repositories, and we also have a dependency proxy, and that's important to understand. So if you need to pull container images down, GitLab's dependency proxy cache works for you: if you're using the proxy cache, and it's turned on by default on gitlab.com, it's going to cache the latest versions of anything pulled down from that main hub.
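As a sketch, a job can pull its image through the group's dependency proxy using the predefined CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX variable (the image choice here is an assumption):

```yaml
test:
  # alpine:latest is served from the dependency proxy cache instead of
  # being pulled from the upstream hub on every run
  image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/alpine:latest
  script: echo "pulled through the proxy cache"
```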
B
This is almost the last thing, and I'm going to make this as quick as I can. Just know that this is a way to limit artifacts coming into your downstream jobs after an artifact has been created. Notice that test:osx is dependent upon build:osx, which is going to make sure that's the only thing it gets, and test:linux is the same kind of thing.
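The slide being described likely resembles this shape (a sketch with assumed stages and paths):

```yaml
build:osx:
  stage: build
  script: make build:osx
  artifacts:
    paths:
      - binaries/

build:linux:
  stage: build
  script: make build:linux
  artifacts:
    paths:
      - binaries/

test:osx:
  stage: test
  script: make test:osx
  dependencies:
    - build:osx      # only build:osx's artifacts are downloaded

test:linux:
  stage: test
  script: make test:linux
  dependencies:
    - build:linux    # likewise, only build:linux's artifacts
```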
B
Now, this last section that we're going to talk about, and I'm going to make this as quick as I can, is include and extends, which are just ways of reusing code over and over again if you need to. So for include, just be aware you can include from the local project itself, and you can include from another project on the same GitLab instance, whether it's gitlab.com or your self-hosted instance.
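Those include sources can be sketched in one block (the file paths and project names are assumptions; the template name is one of the templates shipped with GitLab):

```yaml
include:
  # from the local project itself
  - local: ci/shared-jobs.yml
  # from another project on the same GitLab instance
  - project: my-group/ci-templates
    ref: main
    file: templates/build.yml
  # a template shipped with GitLab
  - template: Jobs/Code-Quality.gitlab-ci.yml
```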
B
There's also the case of the workflow in our Auto DevOps file, which is just shipped with GitLab. So this keyword, template, here means that they were shipped with GitLab, and there's a whole list of those available for you; one of the scanning pages that I went through at the very beginning has links that will tell you how to go about using these. GitLab also has the ability to extend jobs. One real good example is this dot at the beginning.
B
Here, this is creating what we call a hidden job, which you can think of as an abstract class, or whatever you want to call it, that you can then extend by using this extends keyword. Now, you're not limited to extending hidden jobs; you can also extend normal jobs that you declare and just overwrite the parts of them that you need to, if you need to. Now, this is an alternative to YAML anchors, which, quite frankly, work pretty well but can look fairly complex; extends is a little more flexible, and you have the ability to do multiple extends. One of the things I like to do is put the rules in one place: I'll declare the job that has the rule that I want to have, and then I'll use patterns for the type of job that I'm going to run. You know, if it's a build job, it's going to have some common build commands, things along those lines, and if I need to override anything, I can.
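A sketch of that approach, with hidden jobs acting like the abstract classes described above (the names are assumptions):

```yaml
.mr-rules:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

.build-pattern:
  stage: build
  script: make build

build-app:
  extends:
    - .mr-rules        # multiple extends merge in order
    - .build-pattern
  variables:
    TARGET: release    # override or add whatever you need
```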
B
You can do multiple extends on the same line if you need to. You can also do chained inheritance, if for some reason you need to do that: job one is extended by job two, which is extended by job three, and that's the inheritance we're talking about, so it's not just a case of delineating what we're going to extend. And we can use include and extends together: we can include this included.yml file.
B
It's defining a hidden job called .template, and then we can use the template via this extends statement under use-template. So in the end, that use-template job, which doesn't have its script defined (it's defined in what it's extending, and remember, script is required for jobs), is just going to echo out "hello". The point being that you can combine these two together to get to the best combination of whatever works best for you. I'm so sorry, I've run this all the way to the end.
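The combination being described could look like this (the .template and use-template names come from the talk; the exact file contents are a sketch):

```yaml
# included.yml
.template:
  script:
    - echo "hello"
```

```yaml
# .gitlab-ci.yml
include: included.yml

use-template:
  extends: .template   # script is inherited from the hidden job
```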
A
As we mentioned, we will be following up with an email with some key resources: the deck, the recording, obviously, and then also some opportunities for you to sign up for that hands-on workshop next week, as well as a one-on-one meeting, either with Steve or another member of the customer success engineering team. We're looking forward to that and to engaging with you in other ways. Thanks, everyone, for joining us today.