From YouTube: Advanced CI/CD with GitLab - EMEA Webinar
Description
Expand your CI/CD knowledge while we cover advanced topics that will accelerate your efficiency using GitLab, such as pipelines, variables, rules, artifacts, and more. This session is intended for those who have used CI/CD in the past.
A
Hi everyone, welcome to our webinar session. Today we are going to give people just another minute or so to jump in, and then we'll get started.
A
All
right,
we
will
kick
things
off.
Welcome
everyone
to
today's
webinar
session.
My
name
is
Taylor.
Lund
I
manage
the
scale
CSC
team
here
at
gitlab
and
I'm
joined
by
my
colleague,
Sean
John
Hoyle,
who
is
a
customer
success
manager
within
the
team
as
well.
We're
excited
to
have
you
on
today
for
an
advanced
CI
CD
session
before
I
turn
it
over
to
Sean
I'm,
going
to
go
through
a
couple
of
housekeeping
keeping
items.
First
off
this
webinar
is
being
recorded,
so
you
can
look
for
that.
A
Webinar
recording
as
well
as
the
deck
to
come
through
here
in
the
next
day
or
two.
If
you
have
any
questions
that
come
up
throughout
the
session,
please
feel
free
to
put
those
in
the
Q
a
portion
of
your
Zoom
window,
and
we
have
some
customer
success,
managers
and
customer
success
Engineers.
That
will
be
able
to
help
you
out
with
those
questions
today
and
then
Sean
will
answer
some
questions
at
the
end
as
well
with
that
I
will
turn
it
over
to
Sean
on
to
go
through
the
session.
B
Thanks, Taylor. My name is Sean John Hoyle and I'm a customer success manager for our strategic enterprise accounts, and I'm coming to you from San Antonio, Texas. Before joining GitLab I was head of customer success at a startup, where I created best practices for our product that we would publish, and educated customers on how to utilize the product and go through improvements and things like that with their analytics, so they could have more productive conversations with their counterparts and their project management teams.
B
A couple of things that are out of scope: detailed setup and configuration for CI/CD and runners. We've got a course on that, and you can look it up in our Professional Services catalog, as well as in-depth best practices for project management; we have a course on that as well. And if you're a systems administrator trying to understand how to spin up GitLab, troubleshoot it, upgrade it, etc., we've got our own sysadmin training for that too. So a couple of things are out of scope.
B
So let's start with what we call the GitLab flow. It's basically a branching strategy.
B
You know how Git flow and GitHub flow exist; GitLab has its own as well. Those are all branching strategies, but this is more of a process. I know this is a webinar about CI/CD, but we wanted to start with a bigger picture in mind, because beyond the technology of the pipelines itself, it's a process to ensure that you get the most from your investment in GitLab, as well as to improve collaboration across your engineering team and other personas, like security and change management, as you're actually writing and deploying code.
B
So that's why we wanted to show this first. This is the flow process that we recommend DevOps teams follow while using GitLab capabilities within a concurrent development lifecycle. You start off with defining and managing the requirements: the desired issue that you're working on, whether it's a technical debt issue, a feature or bug, or a security and compliance risk type of issue. That all lives in the backlog.
B
So
as
soon
as
it
starts
to
become
in
scope
and
ready
for
a
Sprint
or
deployment,
you
assign
that
issue
and
then
that's
where
the
fun
begins
of
creating
your
branch
and
then
going
off
and
making
commits
on
that
branch
and
pushing
that
back
to
the
server.
So
we
actually
encourage-
and
we
even
have
prompts
in
the
UI
to
immediately
create
a
merge
request
after
you
create
that
branch.
So
this
works
for
more
than
just
the
gitlab
flow
branching
strategy.
It
works
for
many
branching
strategies
as
it's
kind
of
agnostic.
B
In
that
reason,
and
so
we
do
that
so
that
it
becomes
evident
to
the
rest
of
the
team
that
yes,
I
picked
up
this
issue
and
yes,
I,
have
a
work
in
progress,
merge
request
so
that
you
can
immediately
start
to
bring
the
right
stakeholders.
So
they
can
see
that
when
you
push
code
which
will
automatically
trigger
our
cicd
pipelines
by
default,
unless
you
configure
it
to
do
it
otherwise,
so
that
every
change
is
basically
running,
linting,
static
analysis
and
you're,
getting
a
really
fast
feedback
loop.
B
So
once
it's
good
and
once
it's
looking
good
and
it's
passing
all
these
checks
and
the
appropriate
approvals,
then
separation
of
Duties
would
actually
come
in
at
this
point.
So
if
there's
any
findings
in
those
scans
that
require
an
additional
person
from
a
security
group
to
come
in
and
either
approve
it
or
review
it
and
then
land
in
your
default.
So
that
would
be
once
a
merge
request
is
accepted.
It
then
runs
the
CD
workflow
for
deploying
into
a
release
and
Patrick
packaging
that
up
into
a
release
and
then
deploying
into
your
environment
of
choice.
B
So
now
kind
of
zooming
into
the
cicd
elements
of
what
we
just
saw
on
the
CI
CDI
pipeline.
You
know
that's
where
you're
building
your
application
and
then
behind
the
scenes,
we're
using
gitlab
Runners,
which
we'll
talk
about
in
just
a
second
and
then
you're
running
into
a
live
preview,
so
we're
getting
into
the
review
app
and
you
have
a
live
preview
of
your
development
Branch
before
you
even
know
to
send
it
to
your
default
branch
and
then
before
merging
it
into
a
stable
version
of
your
application.
B
So
you
can
deploy
it
to
multiple
environments
such
as
staging
and
production.
We
also
support
Advanced
features
such
as
Canary
deployments
for
this
incremental
rollouts
that
you
may
have,
and
then
you
can
also
monitor
performance
and
the
status
of
your
application
as
well.
All
with
inside
of
gitlab.
B
So let's look at the terminology that GitLab uses for describing and defining CI/CD pipelines. The pipeline is what organizes all of this together, and a stage is the next piece down: a logical group of jobs that pertain to a certain phase of the pipeline, whose jobs can be run in parallel.
B
So a stage is a logical grouping of jobs that pertain to a phase of the pipeline, and those jobs can run in parallel, which improves the performance of your flow through the pipeline. Stages are things like build, test, and deploy. Inside each stage are the jobs themselves, which are the actual tasks that need to be performed.
B
An
example
of
that
would
be
an
npm
test
or
a
maven
install
bash
grips
and
then
shell
scripts.
So
that's
really
what
makes
up
that
and
what
you're
defining
in
infrastructure
code
using
yaml
files
to
do
the
environments
is
where
you're
going
to
actually
be
deploying.
So
it
points
a
deployment
destination
and
a
file
version,
and
it's
all
stored
in
the
project
that
pertains
to
it
at
its
most
basic
function.
The
last
piece
is
the
gitlab
runner.
B
That's
the
infrastructure
piece
that
executes
everything
you
see
on
the
left
side
and
you
have
as
many
Runners
as
you
do.
You'll
need
to
handle
the
volume
and
load
from
engineering
department.
You
can
even
use
your
PC
if
you're
trying
to
test
certain
things
out
or
if
you
have
any
other.
You
know
specific
use
cases.
So
just
then
keep
in
mind
that
this
means
the
members
of
the
project
will
also
be
using
your
laptop
as
a
runner.
B
So
the
pipeline
architectures
that
we're
going
to
cover
today
as
we're
doing
this
task.
You
can
ask
yourself
how
can
I
use
these
Concepts
to
improve
my
pipeline
performance
and
functionality?
So
we've
got
five
different
flavors
they're,
going
to
start
to
dive
into
the
first.
One
is
just
a
basic
vanilla
pipeline
that
we're
going
to
get
into
directed
acyclical
graphs,
then
parent
trial
pipelines
within
the
same
project
and
dynamic
child
pipelines
and
then
the
fifth
one
we're
going
to
hit
is
multi-project
Pipelines.
B
So,
starting
with
the
simple
basic
pipeline,
this
is
just
a
series
of
jobs
that
run
independently
and
you
could
put
them
on
different
Runners,
but
even
that
would
be
more
of
an
advanced
than
the
basic
setup
that
we're
usually
going
to
be
using,
usually
we'll
all
be
in
the
same
Runner.
And
then
all
the
jobs
in
the
state
must
complete
successfully
before
moving
on
and
proceeding
to
the
next
stage.
So
we'll
go
ahead
and
show
you
what
that
looks
like.
B
Here's a basic build-test-deploy pipeline with all its jobs. You can see whether the build itself is successful, and then you can start to run a series of tests, probably some unit testing and integration testing here. There's also some logic setting allow_failure to true so that the pipeline can proceed even if those tests fail; that's optional and you can configure it. For deployment, it's a common practice around deploying to environments that you have the right permissions, and a lot of requirements for separation of duty say that the person who pushes the code can't deploy the code.
B
So
this
is
how
you
would
handle
scenario
like
that.
So,
when
you're
deploying
into
a
production
or
even
uat
or
a
staging
environment,
it
will
wait
for
somebody
with
the
right
permission
to
click
that
the
manual
intervention
to
deploy.
And
then
you
have
other
options
when
delayed
on
failure
and
it's
always
something
more
to
control
the
flow
of
a
pipeline.
If
you've
got
like
hard
requirements
to
always
run
something
to
basically
run
it
on
a
failure
or
delaying
running
a
job.
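As a rough sketch of the kind of basic pipeline being described, a minimal .gitlab-ci.yml might look like this; the job names and scripts are illustrative, not from the webinar slides:

```yaml
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - echo "Compiling the application..."

unit-test-job:
  stage: test
  script:
    - echo "Running unit tests..."

integration-test-job:
  stage: test
  allow_failure: true       # pipeline proceeds even if this job fails
  script:
    - echo "Running integration tests..."

deploy-prod:
  stage: deploy
  when: manual              # waits for someone with permission to click deploy
  environment: production
  script:
    - echo "Deploying to production..."
```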
B
The second one I wanted to mention is a big architecture improvement that we made a few years ago, and it's a very foundational piece of GitLab CI: the idea of a needs clause, which lets you define the pipeline as a directed acyclic graph, a DAG. This is how you start to define the flow of the pipeline so that you can run jobs in parallel and even move on to the next stage when certain jobs haven't finished.
B
This is roughly what it looks like as an example of using the needs keyword. You can define the job relationships, and if GitLab knows the relationships between the jobs, it can run everything as fast as possible and even skip into subsequent stages when possible. In this example, you have an application that deploys to both Android and iOS, and those aren't dependent on each other, so if you create a needs keyword here, you can speed up the total pipeline.
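A minimal sketch of the needs keyword, with hypothetical job names for the Android/iOS case described above:

```yaml
build-android:
  stage: build
  script: echo "Building Android app..."

build-ios:
  stage: build
  script: echo "Building iOS app..."

test-android:
  stage: test
  needs: ["build-android"]   # starts as soon as build-android finishes, without waiting for build-ios
  script: echo "Testing Android app..."

test-ios:
  stage: test
  needs: ["build-ios"]
  script: echo "Testing iOS app..."
```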
B
So
this
is
a
visualization
and
I
know
it's
a
little
bit
complicated,
but
you
can
see
from
the
beginning
the
flow
logic
by
using
our
our
UI.
If
you
look
at
the
relationship
between
jobs,
so
if
you
Mouse
over
and
highlight
the
dependency
paths
involved,
you
can
also
click
on
multiple
paths
and
it's
interactive.
So
if
you're
trying
to
figure
out,
if
you've
inherited
a
project,
it's
it's
a
little
complicated
and
basically
you're
trying
to
just
figure
out
what's
triggering
so
this
is
a
visual
way
of
doing
so.
B
So
the
third
architecture
we're
going
to
take
a
look
at
is
the
parent-child
Pipelines
So
within
the
same
projects.
This
is
very
useful
for
running
non-dependent,
long-running
jobs,
so
things
like
code
scans
or
building
and
deploying
the
front
end
to
back-end
Services
separately.
That's
when
this
would
start
to
become
very
useful.
B
It
combines
if
you
combine
this
with
the
direct
ascyclical
graph
pipelines,
you
get
the
benefits
of
both,
so
it's
best
to
to
use
every
tool
at
your
disposal
so
that
you
can
improve
the
efficiency
of
these
pipelines
and
it's
useful
to
Branch
out
long-running
tasks
into
those
separate
pipelines.
So
that's
why
the
parent
child
architecture
is
very
useful.
B
So,
along
the
lines
of
Simplicity,
this
feature
allows
you
to
follow
their
yaml
files
from
the
same
project
so
that
you
can
solve
for
those
issues
that
you
may
be
facing,
such
as
the
stage
structure
of
a
pipeline
where
all
the
steps
in
a
stage
must
be
completed.
First,
before
the
job
moves
on
to
the
next
stage
and
begins
that
what
happens
is
it
causes
arbitrary
weights
and
slows
things
down?
So
this
also
solves
for
the
configuration
for
the
single
Global
pipeline.
B
A single global pipeline definition can become very long, and if you've got one really long global pipeline to define, it can be very difficult to manage. So you can start to modularize by using more than one YAML file within that same project, with a parent CI YAML that calls those other YAML files; it just becomes cleaner and easier to manage and to read, so readability really improves there as well. Also, imports with the include keyword can increase the complexity of the configuration and create the potential for namespace collisions, where jobs are unintentionally duplicated, so this can solve for that as well. What it really helps with is the pipeline user experience: with so many jobs and stages to work with, it really adds more on the readability side for everyone.
B
This is sort of the behind-the-scenes of how to write the file for this parent-child pipeline, for, let's say, a monorepo that's deploying microservices, like I was just showing in the example. What it'll do is run the job only if there are changes in certain files: for this stage in particular, you can define it to run only if there are changes in that project's directory, in any one of the files within it, and you can include configuration from elsewhere within the project.
B
So
you
don't
have
to
worry
about,
including
it
here.
You
can
include
it
other
sides
into
the
project
and
what
that's
going
to
do
is
it's
going
to
include
the
appropriate
Downstream
yaml
code
to
run
that
project
correctly
and
then.
Finally,
the
strategy
to
depend
means
on
this
pipeline
until
the
other
pipelines
have
finished.
So
that's
sort
of
what
the
code
is
behind
the
visual
that
we're
showing
here
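A rough sketch of what such a parent pipeline file might look like, assuming a hypothetical service-a/ directory with its child config stored at service-a/.gitlab-ci.yml:

```yaml
# parent .gitlab-ci.yml
trigger-service-a:
  rules:
    - changes:
        - service-a/**/*                    # only run when files under service-a changed
  trigger:
    include: service-a/.gitlab-ci.yml       # child pipeline config in the same project
    strategy: depend                        # parent waits for, and mirrors, the child's status
```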
Number four is the dynamic child pipeline.
B
This is going to generate the pipeline configuration at build time, and it uses that generated configuration at a later stage to run as the child pipeline.
B
This technique can be very powerful for generating pipelines targeting only the content that changed, or, let's say, for building a matrix of targets and architectures dynamically. You can see this test .gitlab-ci.yml file being generated dynamically: it places the generated YAML file in the job artifacts store and then references it later to actually run the pipeline that was generated dynamically.
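A minimal sketch of that pattern, with hypothetical job and file names; the generator script here just writes a trivial config, whereas in practice it would emit jobs based on what changed or on a target matrix:

```yaml
generate-config:
  stage: build
  script:
    # write a child pipeline definition at build time
    - |
      cat > generated-config.yml << 'EOF'
      generated-job:
        script: echo "running a dynamically generated job"
      EOF
  artifacts:
    paths:
      - generated-config.yml

trigger-child:
  stage: test
  trigger:
    include:
      - artifact: generated-config.yml   # the YAML produced above
        job: generate-config             # the job whose artifact contains it
```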
B
So
there's
three
main
use
cases
for
this
and
we're
going
to
talk
about.
But
you
can
specify
specific
branches.
You
can
pass
variables
to
Downstream
pipelines
and
then,
if
a
downstream
pipeline
fails,
it
will
not
fail
the
Upstream
pipeline.
So
there's
a
benefit
to
that
and
it's
really
useful
when
you're
building
and
deploying
large
applications
there
that
they're
made
up
of
different
components
to
have
their
own
project
and
build
pipeline.
B
It's basically a simple A-to-B configuration for the multi-project case, and then the other pattern is an orchestrator project that manages the build and deploy of multiple other apps, where the parent project houses the control logic. That's orchestrating large deploys across many subsequent repositories.
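A minimal sketch of the simple A-to-B case, assuming a hypothetical downstream project path my-group/project-b:

```yaml
# in project A's .gitlab-ci.yml
trigger-project-b:
  stage: deploy
  variables:
    UPSTREAM_SHA: $CI_COMMIT_SHA     # variables can be passed downstream
  trigger:
    project: my-group/project-b      # downstream (multi-project) pipeline
    branch: main                     # which branch of project B to run
```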
B
Now, in terms of using variables and injecting them into your pipeline, you can do it a number of ways: through the UI at the project level, the group level, and the instance level, and you can also set variables in the configuration itself. We'll talk in a second about the hierarchy of them, and which one takes precedence if a variable is defined more than once. So you can see that within the project, you can add variables.
B
Then, with the CI/CD configuration, of course, you can define those variables, both custom and predefined. This slide shows inherited variables: a variable is added to an artifact in the build job's script, and then that artifact is handed down to the other jobs as inherited variables. These take precedence over variables that were defined in the YAML file previously.
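One common way to do this, sketched below with hypothetical job names, is a dotenv report artifact; the variable written to the file in the build job becomes available in later jobs:

```yaml
build:
  stage: build
  script:
    - echo "BUILD_VERSION=1.2.3" >> build.env    # variable written during the build
  artifacts:
    reports:
      dotenv: build.env                          # hand the variable down as an artifact

deploy:
  stage: deploy
  script:
    - echo "Deploying version $BUILD_VERSION"    # inherited from the build job
```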
B
So
then,
in
terms
of
processing
the
sort
of
order
of
operations
which
takes
one
precedence
so
going
from
bottom
to
top,
the
top
being,
what
takes
the
highest
precedence
and
if
you've
defined
a
variable
manually
in
the
UI
or
if
you've
made
an
API
request
for
that
given
pipeline,
that's
been
triggered.
That's
going
to
take
the
highest
precedence
so
going
down
to
values,
configure
at
the
project
level,
in
instance,
with
a
group
coming
first
and
then
the
second
instance.
Third.
B
So
for
Define
for
defining
the
the
flow
logic
you
you
really
need
rules
and,
if
you're
thinking
in
terms
of
Jenkins,
like
let's
say,
groovy
scripts,
to
manage,
you
know
how
and
when
the
pipelines
are
triggered
and
when
they're
run.
So
that's
sort
of
the
equivalent
here.
So
I'll
talk
about
the
syntax
of
the
rules,
but
starting
with
the
way
that
you
can
trigger
a
pipeline
to
run
it
could
be
from
new
commits.
It
could
be
from.
B
If you need to run something regularly, say on a 24- to 48-hour schedule, you can also do that, and the variable for this is CI_PIPELINE_SOURCE. We'll show you some examples of where you can use it in the pipeline: for instance, if you want to disable running on merge requests, you'd use that CI_PIPELINE_SOURCE variable. It shows how you can control which pipelines run and when.
B
So
the
basics
of
the
rule
and
sort
of
the
building
blocks
here
is
that
you
would
have
the
job
itself
being
defined,
and
you
know
what
you
have
on
that.
Where,
on
that
run
right,
would
you
want
this
job
to
be
kicked
off?
Then
you
would
need
to
create
the
rules
block
with
the
rules
keyword
and
then
you
can
Define
if
your
statements
to
reference
variables,
including
predefined
ones,
as
in
this
case,
like
the
CI
pipeline
Source,
it's
is
web,
so
if
you're
using
the
web
IDE,
then
you
can
trigger
the
job
to
be
run.
B
It's just going to echo a string that says this is a job that will run when the pipeline is triggered from the web form. It's very simple, but you can start to see the construct of a job's rules block: an if statement, and then a subsequent script or action is run.
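Reconstructing that simple example (the job name is illustrative):

```yaml
web-triggered-job:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "web"'   # only when run from the web form
  script:
    - echo "This is a job that will run when the pipeline is triggered from the web form"
```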
B
A quick reference: this is a great slide if you want to save it away for later. It lists the clauses you can choose from, such as the if statement. You can also use changes, so that only if changes are made to certain files do you run the subsequent script; that's very useful for improving the performance of your pipelines. Then there are the operators, which are pretty standard for writing any kind of script, the results for when that operation is true, and then what happens as a result.
B
So here are some tips and tricks for speeding up complex pipelines. Earlier, when we looked at the directed acyclic graph, that was a great way of doing it, but let's take a look at some of the other ways using what I've covered on rules and variables. The first one is setting rules; this is one of the favorites from the team.
B
Basically,
it
allows
you
to
say
that
these
files
didn't
change
so
I,
don't
want
to
run
this
job
or
I
only
want
to
run
on
certain
branches,
and
this
allows
you
to
say
what
jobs
run.
So
you
can
say
time,
especially
as
you
get
a
live
ciml
file.
It
can
be
over
a
thousand
lines
sometimes,
and
you
certainly
don't
want
that
running
every
single
time,
so
setting
up
the
cache
the
cache
setup.
B
If
you
know
that
files
live
on
the
runners
and
it
enables
the
use
of
existing
build
item
without
rebuilding
it,
between
pipelines
between
pipeline
runs
and
tags,
are
especially
useful
as
you
can
ensure
the
correct
Runner
is
being
utilized
for
the
right
deployment.
This
also
allows
other
teams
to
see
what
the
what
they
need
to
be
running
and
what
other
teams
are
running
as
well
and
parallel
testing.
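A small sketch combining those three ideas; the cache key, tag name, and parallel count are illustrative:

```yaml
test:
  stage: test
  tags:
    - docker-runner             # route this job to runners registered with this tag
  cache:
    key: "$CI_COMMIT_REF_SLUG"  # reuse dependencies between pipeline runs on the same branch
    paths:
      - node_modules/
  parallel: 4                   # split the job into 4 parallel instances
  script:
    - npm ci
    - npm test
```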
B
So
this
one's
probably
self-explanatory,
but
I
wanted
to
make
a
point
to
mention
that,
as
you
bump
up
a
number
of
tests
that
can
be
run
in
parallel,
assuming
you've
already
sized
your
Runners
to
handle
that
it's
going
to
improve
the
performance
and
then
the
performance
before
script.
If
you
have
a
lot
of
say,
like
container
preparation,
build
up
in
a
previous
script,
it
might
be
a
sign
that
you
need
to
convert
that
section
into
a
Docker
file
and
maybe
a
new
repo
and
have
your
own
builds
container.
B
But
if
there's
any
other
type
of
scripts,
that
is
just
checking
for
certain
settings.
This
can
dramatically
accelerate
builds
where
there
are
a
lot
of
build
dependencies
to
wear
on
before
running
our
code,
and
one
more
thing
I
want
to
say
about
this
is
trying
it
both
ways.
It
allows
you
to
compare
and
contrast,
which
Rays
run
fastest
for
you
and
I.
B
Think
it's
just
a
good
rule
of
thumb
so
that
you
can
get
used
to
using
more
complex
Concepts,
but
then
you're
very
pragmatic
and
you're
not
really
going
to
use
it
because
it's
new
and
cool,
but
rather
it's
more
about
improving
the
performance
that
you
can
measure
and
see
it
between
different
pipeline
builds.
So
I
would
encourage
you
to
take
a
look
at
that.
B
So
let's
look
at
a
couple
of
examples
of
rules
and
actions.
So,
if
you're
trying
to
control
when
a
merge
request
pipeline
runs,
if
you
don't
want
to
run
your
CI
pipeline
every
time,
a
merge
request,
event
happens
and
you
can
set
that
rule.
If
you
want
it
to
run,
then
you
can
use
the
CI
pipeline
Source
on
a
merger
request,
event
or
SEI
pipeline
source
as
a
way
to
push
to
that
Branch.
So
that
would
then
trigger
it
unless
you
say
otherwise,
you
don't
have
to
use
that.
So
in
this
example,
it's
pretty
simple.
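A hedged sketch of that pattern (the job name is illustrative):

```yaml
build:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'   # run for merge request pipelines
    - if: '$CI_PIPELINE_SOURCE == "push"'                   # and for pushes to a branch
  script:
    - echo "Building..."
```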
B
And
here's
another
example
where,
if
you
don't
want
to
run
a
pipeline,
if
it
was
scheduled
versus
triggered
automatically
so
using
the
CI
pipeline
source
for
this
and
then
saying
on
a
merger,
Quest
events,
you
don't
want
it
to
run
and
then
on
a
CI
pipeline,
Source
schedule.
You
also
don't
want
it
to
run.
Otherwise
this
job
will
run
if
the
previous
stage
was
successful.
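A sketch of that rule set, assuming it belongs to an otherwise ordinary job:

```yaml
some-job:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: never                    # skip on merge request events
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: never                    # skip on scheduled pipelines
    - when: on_success               # otherwise run if the previous stage succeeded
  script:
    - echo "Running..."
```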
B
So if you've got a Dockerfile, and scripts for dockerizing your application, that have changed, then you're going to want to run container scanning and infrastructure-as-code scanning. Otherwise, I may not need to run IaC scanning, because everything that changed was Java code and not infrastructure code. So I can dictate that if those files aren't changed, then I can skip and save time on my pipeline, and that's why I think this one is a really good example.
B
It
saves
time
and
speeds
up
pipelines
here
and
there,
and
what
this
one
is
saying
is
that
it
needs
a
manual
intervention
to
run
this.
If
it
senses
a
change
to
a
Docker
file
or
any
files
in
those
Docker
scripts,
then
it
would
be
a
manual
job.
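A sketch of that rule; the dockerfiles/ path is hypothetical:

```yaml
container_scanning:
  rules:
    - changes:
        - Dockerfile
        - dockerfiles/**/*      # any of the Docker-related scripts
      when: manual              # run, but only after a manual approval
  script:
    - echo "Scanning container image..."
```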
B
Let's say you have a long-running script, or a custom, long-running integration test, something that you expect to take a while but where the feedback isn't really needed immediately. Then you can start the run delayed, say in three hours. What that does is run it at night, or at a time when you know it's not going to block progress during the day while you're working on this branch.
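A sketch of a delayed job using the when: delayed and start_in keywords (the test script name is hypothetical):

```yaml
long-integration-test:
  stage: test
  when: delayed
  start_in: 3 hours             # queue the job now, execute it three hours later
  script:
    - ./run-long-integration-tests.sh
```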
B
So
this
one,
the
workflow
rules
they
control
when
the
entire
pipeline
will
run
and
they're
outside
of
the
job
definitions.
So
we've
been
mainly
talking
about
up
to
this
point
where,
if
we
see
the
commit
message
contains
whip
Dash
whip,
then
it
won't
run
the
pipeline
and
if
a
tag
was
applied
then
it
also
won't
run
that
either.
Otherwise
it
will
so
you
could
control
that.
So
what
I'm
saying
is
is
if
this
says
whip
I
just
want
to
check
it
in
and
maybe
have
somebody
on
my
team
take
a
look
at
it.
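A hedged reconstruction of that kind of workflow block; the exact expressions are illustrative:

```yaml
workflow:
  rules:
    - if: '$CI_COMMIT_MESSAGE =~ /-wip/'   # work-in-progress commits: skip the whole pipeline
      when: never
    - if: '$CI_COMMIT_TAG'                 # don't run on tag pushes either
      when: never
    - when: always                         # otherwise, run the pipeline
```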
B
Another example for your workflow rules is for variables: the variable in this example, for the Dockerfile, will be set differently based on the rules, and you can see here that the pipeline will always run, by using the keyword when: always.
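A small sketch of workflow rules that set a variable differently depending on the branch; the variable name and values are hypothetical:

```yaml
workflow:
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      variables:
        DOCKERFILE: Dockerfile.prod    # production image definition on main
    - when: always                     # any other case: the pipeline still runs
      variables:
        DOCKERFILE: Dockerfile.dev
```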
B
So
the
second
to
last
topic
I
wanted
to
get
to
is
artifacts.
So,
as
you
know,
you
can
run
pipelines,
especially
CI
you're,
building
and
Publishing
files,
those
generate
files,
binaries
and
packages
that
you'll
use
for
deploying
your
application.
It
also
generates
artifacts
for
reviewing
test
results,
so
we're
going
to
talk
about
managing
those
artifacts
for
a
minute.
B
Gitlab
allows
for
saving
artifacts
and
local
or
object
storage,
and
then
you
can
use
them
in
subsequent
jobs,
and
you
can
also
use
the
rules
of
logic
to
exclude,
depends
and
when
to
control
what
is
added
and
that
allows
you
to
determine
if
an
artifact
is
stored
or
not.
So
you
may
not
need
to
store
it.
It
may
be
something
that
you
just
need
in
that
job
to
run
in
your
tests
and
that's
good
enough.
B
So
some
of
the
keywords
around
using
exclude
to
limit
what
is
added
and
then
using
depends
to
limit
what
gets
downloaded
on
subsequent
jobs.
That
will
allow
you
to
determine
if
artifacts
will
be
stored
or
not,
and
then
the
expire
and
to
determine
when
artifacts
would
be
destroyed.
If
you
don't
want
to
keep
them
around,
which
is
very
good
to
have,
especially
for
stuff
like
test
results,
I
think
those
are
only
stored
for
48
to
72
hours
by
default.
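A sketch of a job using those artifact keywords; the paths are illustrative:

```yaml
build:
  stage: build
  script:
    - make build
  artifacts:
    paths:
      - dist/                  # what to keep
    exclude:
      - dist/**/*.o            # limit what gets added
    expire_in: 3 days          # when to destroy the artifact
```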
B
As far as administration goes, in a self-managed GitLab instance, job artifacts can be stored either locally or in object storage, artifact expiration can be configured at the instance level, and artifact downloads fall under GitLab access control. It's pretty small on the slide, but I'll just tell you right now: downloading and browsing job artifacts spans the roles from guest through reporter, developer, maintainer, and owner, so that they can browse and then download job artifacts.
B
So
it's
good
to
understand
the
permission
structure
of
who
has
access,
and
that
is
assuming
that
you
they
already
have
access
to
their
project.
But
basically
you
just
want
to
make
sure
that
you
have
this
set
up
so
that
no
one
can
just
you
know,
do
whatever
they
want
and
that
they
have
the
actual
permissions
they
need
to
do
their
job.
B
So
here
are
some
of
the
container
and
language
specific
package
Registries
that
get
Live
support,
container
of
use
dependency
proxy,
and
then
we
have
some
language
specific
package,
Registries,
all
the
common
ones.
We
have
npm
nuget
go,
you
can
see
here
and
we
also
have
a
docs
page.
So
if
you
go
to
our
sites
and
you
look
under
packages,
you
can
find
the
full
list
and
the
details.
If
you
want
any
more
information,
just
to
dig
around
a
little
bit.
B
So,
lastly,
a
quick
example
of
a
dock
by
default.
All
artifacts
from
previous
stages
are
passed
on
to
the
next
stage,
but
you
can
use
a
dependencies
parameter
to
define
a
limited
list
of
jobs
or
no
jobs
to
fetch
artifacts
from
so.
You
can
use
this
feature
to
Define
dependencies
and
context
of
the
job,
and
then
you
can
also
pass
a
list
of
all
those
previous
jobs
from
which
that
artifact
should
be
downloaded.
B
So
you
can
see
here
in
this
example
that
we've
got
a
build,
OS,
X
and
build
Linux
stages
and
in
order
to
run
those
test
scripts
and
then
what
we've
got
to
do
is
make
sure
that
we're
using
the
right
artifact.
So
that's
what
the
dependencies
of
build
OS
X
come
in
to
play.
So
for
the
test
stage.
That's
what
you
would
write
for
as
a
dependency
and
then
likewise
for
your
Linux
testing
to
take
place
and
then
make
sure
you're
dependent
on
that
build
Linux
artifact
there.
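A condensed sketch of the build/test example being described; the script contents are illustrative:

```yaml
build:osx:
  stage: build
  script: make build:osx
  artifacts:
    paths:
      - binaries/

build:linux:
  stage: build
  script: make build:linux
  artifacts:
    paths:
      - binaries/

test:osx:
  stage: test
  script: make test:osx
  dependencies:
    - build:osx          # only fetch the OS X build artifact

test:linux:
  stage: test
  script: make test:linux
  dependencies:
    - build:linux        # only fetch the Linux build artifact
```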
B
Includes are a must, and that's why we're hitting on them here towards the end, building upon the concepts we've covered previously. An include statement is really how you bring external YAML files into your GitLab CI configuration. It's helpful because it allows you to extract common components and improve readability, and it can save you a ton of time if you utilize the templates that GitLab comes with; it has many templates as part of your install.
B
So
all
of
our
out
of
the
box
tests,
like
SAS
secret
detection,
you
can
use
an
include
statement
and
that's
all
you
have
to
do
to
pick
up
the
appropriate
language
of
your
code
that
was
written
in
and
it
scans
for
all
of
the
SAS
rules.
In
that
example,
secret
detection.
It
does
the
same
thing
and
that
it's
not
language
dependent,
but
you
can
just
add
the
include
statement
for
Secrets
detection,
and
then
you
know,
basically,
it's
going
to
take
care
of
the
rest.
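The two includes being described look roughly like this, using the template paths GitLab ships with:

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml               # static application security testing
  - template: Security/Secret-Detection.gitlab-ci.yml   # secret detection, language independent
```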
B
I
think
it's
two
lines
of
code
and
that's
why
it's
so
powerful.
It
saves
a
lot
of
time,
makes
it
all
more
efficient
and
there's
a
couple
different
methods
for
doing
this.
That
I
talked
about
in
the
templates
which
are
provided
by
gitlab
at
the
very
bottom.
But
you
can
also
know
include
a
file
from
your
local
project
repository
and
so
that's
sort
of
the
apparent
child
example.
I
gave
in
The
Architects
up
there
there's
five
different
architectures
of
writing
a
pipeline.
B
So
that's
the
local
include
and
if
you're
just
trying
to
find
it
within
your
own
project
file,
you
want
to
include
a
file
from
a
different
project
repository
and
then
remote
if
you
want
to
include
a
file
from
a
remote
URL.
So
if
there's
something
that's
publicly
out
there,
like
a
publicitlab.com
that
you
want
to
pipe
in,
it
needs
to
be
public
visibility
or
have
public
visibility
first,
but
you
can
do
that
and
then
here's
the
examples
on
the
Syntax
for
writing.
Those
includes.
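A sketch of the three include methods mentioned; the project names, paths, and URL are placeholders:

```yaml
include:
  # local: a file from this project's repository
  - local: ci/build.gitlab-ci.yml
  # file: a file from a different project's repository
  - project: my-group/shared-ci
    ref: main
    file: templates/deploy.gitlab-ci.yml
  # remote: a file fetched from a publicly reachable URL
  - remote: https://example.com/pipelines/common.gitlab-ci.yml
```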
B
So
extends
is
another
way
to
improve
efficiency
and
eliminate
needs
for
rewriting
or
writing
link
decode
in
your
yaml
file.
So
what
it
does
is
it
extends
as
a
similar
to
git
lab
anchors
or
yaml
Yankers.
It's
a
little
bit
more
flexible
and
readable,
so
I
did
want
to
touch
on
this
a
little
bit
and
what
it'll
allows
you
to
do
is
enhance
and
reuse
configuration
sections.
B
So
if
you
know
about
yaml
anchors,
you
know
that
it's
used
to
duplicate
or
inherit
across
your
grandma
file,
but
they're
really
only
valid
in
the
file
they
were
defined.
So
that's
where
the
extend
comes
in
and
you
can
inherit
up
to
11
levels
but
I
believe
in
our
documentation.
We
say
no
more
than
just
three
for
performance
reasons.
So
if
you
start
to
see
problems,
that's
probably
why
and
that's
why
we
recommend
three
instead
of
11.
B
and
then
what
it
really
does.
Is
it
merges
the
configurations
from
the
respective
jobs
into
your
current
one,
and
then
that's
really
where
it's
going
to
save
you
time.
So
that's
kind
of
the
example
you
see
here
with
the
dot
tests
and
our
spec
becoming
one
single
job
and
then
that's
been
extended
from
the
template
job
and
this
really
works
across
configuration
files
when
you
use
and
include
with
an
extend
so
in
this
example,
you
can
include
an
external
include
file
and
then
on
a
nice
little
script.
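A sketch of the .tests / rspec pattern described; the script contents are illustrative:

```yaml
.tests:                   # hidden template job: never runs on its own
  stage: test
  before_script:
    - bundle install

rspec:
  extends: .tests         # merges the .tests configuration into this job
  script:
    - bundle exec rspec
```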
B
That's pretty much it, but it's a powerful feature and I wanted to make sure we called it out here. I think we have about 10 more minutes, and I know we've covered a lot of ground, specifically about components and reusability, but I just wanted to kick it over to Taylor for some questions.
A
Awesome, thanks Sean John. Before I jump into the questions, just an FYI: I just opened up a feedback poll, so if everyone could take a quick second to fill out those couple of questions, that would be great; we'd love to get your feedback. With that, I'll ask a couple of questions that have come through. The first is: what is the use of include in GitLab?
B
Sure
so,
for
includes
forget,
lab
you
can
use
and
include
to
include
external
yaml
files
in
your
cicd
jobs.
It
gives
you
a
couple
options.
You
can
include
a
single
configuration
file,
an
array
of
files,
a
default
config
file,
and
then
you
can
also
overwrite
included
configuration
values
and
arrays,
or
you
can
also
use
nested
includes,
so
those
will
probably
be
the
top
uses
for
including
gitlab.
A
Awesome. Next one here: when should I use a child pipeline?
B
You
would
want
to
use
a
a
child
pipeline
when
you
have
a
gitlab
parent
child
pipeline
under
the
same
gitlab
project.
So
I
guess
the
main
thing
would
be
useful
for
is
when
you
want
to
run
your
pipeline
under
multiple
conditions,
and
you
may
want
to
run
your
pipeline
on
multiple
merge
requests,
issue
events
and
push
files.
So
if
that's
the
example
that
you
have
and
that
would
win,
that
would
be
a
good
use
of
a
child
pipeline.
A
Great. Last one I'm seeing here: why do we use pipelines in GitLab?
B
The
reason
why
you
want
to
do
that
is
teams
can
configure,
builds
and
tests
as
a
team.
Basically,
they
can
write
that
pipeline
as
a
code,
and
then
you
can
track
that
and
store
it
in
a
central
repo,
so
that
allows
teams
to
basically
store
all
their
info
together
and
then
they
can
use
a
yaml
file
to
approach
any
kind
of
language
that
they
have
from
Jenkins,
or
you
know
anything
like
that.
But
the
premise
Remains,
the
Same,
that
that
team
can
all
work
together
and
all
their
work
is
saved
together.
A
Awesome
thanks
Sean
John,
with
that
we'll
wrap
up
today
thanks
everyone
for
joining
us
appreciate
you
taking
some
time
to
learn.
Gitlab
with
us
and,
like
I,
said,
we'll,
be
sending
out
the
the
recording
in
the
deck
here
the
next
day
or
so
hope.
Everybody
has
a
good
day
thanks.