From YouTube: Advanced CI/CD with GitLab Webinar - EMEA
Description
Expand your CI/CD knowledge while we cover advanced topics that will accelerate your efficiency using GitLab, such as pipelines, variables, rules, artifacts, and more. This session is intended for those who have used CI/CD in the past.
Thank you. Thank you for your time and for joining us today. As you know, here we will cover advanced GitLab CI/CD. I'm very glad to also have my colleague Karina, a Solutions Architect, who will help us with the Q&A section at the end. So, welcome.
My name is Mariana. I'm a Customer Success Manager at GitLab for almost nine months, covering the AMER region and working with different customers, and today I will present advanced GitLab CI/CD to you.
So we can start with some basic considerations for today. Since this is an advanced workshop, we assume that most of the participants already have some basic knowledge of CI/CD, and we will build on that knowledge. What we'll cover today starts with the CI/CD basics, a quick review, before going more into the other advanced topics.
Then we will go to the pipeline: some important features that you can use to speed up your pipeline. Going further with the pipeline components, we'll cover variables and how you can use them, also to optimize your pipelines; how you can control your jobs using rules, the famous rules; then how you can manage your artifacts; and finally the include and extends functionality, which will also optimize your pipeline.
All right. You probably know these concepts already, but just to give you a summary, a basic overview of the basics of CI/CD: here we have the workflow that GitLab recommends for the whole DevOps process. Of course, it is recommended, not mandatory, but to get all the value that the platform is here to provide you, we recommend this flow.
So basically you have the default branch, which is commonly known as the production branch. There are other specifics, but you can think of the default branch as your production branch. Then we have the feature branch, which is where the developers mostly work when there is a new change — think of a bug fix or a new feature for the application. So basically, what would happen?
The developer would be assigned to an issue or ticket to work on this new bug — let's say there is a bug. Once they have worked on the bug, they would create a merge request with the commit containing the change. After this commit, the CI pipeline starts running, and that is where we have the security reviews and the whole process of building and testing your new commit.
After that, you would also give other team members the possibility to review what was done. In case there is a need for a new change, you can go back and add those changes as new commits in that merge request. Then, once everything is okay, you can merge the change into the main branch, which would go to production — you can deploy it to production.
Now going into the GitLab CI/CD part, going deeper into this part: we have the code, and we have the related code, which can be any code that you are pulling from other projects or other repositories, and all together they will create a new commit. After that commit, what we have here is the CI pipeline, which would basically build and test — unit tests, integration tests.

Then you can review that and deploy, doing continuous deployment or continuous delivery through the CD — that is the CD pipeline.
And finally, in this first section, we can also cover a little bit of the anatomy of CI/CD. As you know, the pipeline, as we already talked about, can be organized: the pipeline consists of stages, and the stages are groups of jobs.
The stages are basically how you tell your pipeline to run. You can have build, test, and deploy, which are the most common stages, and inside those stages you have the jobs, which basically contain all the actions, the tasks — what this pipeline will run. And the one who does all this running is the runner, the GitLab Runner.
The runner, in this case, can be your PC, your computer, for example, or a server — there are different options for the runner. The runner performs the tasks that are inside those jobs, so the script that you define inside the job, the runner will execute. And then we also have the environments, which can be a test environment for review, a canary environment, and production.
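To make that anatomy concrete, here is a minimal sketch of a .gitlab-ci.yml with the three common stages — the job names and echo commands are just illustrative placeholders:

stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - echo "Compiling the application..."   # the runner executes this script

test-job:
  stage: test
  script:
    - echo "Running unit tests..."

deploy-job:
  stage: deploy
  script:
    - echo "Deploying..."
  environment: production                   # ties the job to an environment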
Just — I'm sorry, I forgot to mention this at the beginning of the webinar: we have the Q&A section here on the tab, so please feel free to write down all your questions, even during the webinar. We will go through them later, but if you would like to already start sending your questions, please feel free to do it. All right.
So we talked about the basics of CI/CD, the basics of the pipeline. But now let's see what the options are.

These are the architectures around pipelines that you can use, and some options that will help you speed up and add more efficiency to your pipeline. Each architecture somehow aligns with a different development process, let's say. We have the basic one, which is very straightforward — the simplest and the easiest to maintain; we'll cover that. Then we will cover the directed acyclic graph, which is good for complex and large projects.
Then we have the parent and child pipelines, which are good for monorepos — monorepos are getting more traction right now; a lot of users are moving to them, and this is a very good option for that. Then we have the dynamic pipelines, which are a sort of deeper dive into parent-child and also a good option for multiple repositories. And then we have the multi-project pipelines.
Those are for when you want to build large projects that require cross-project interdependencies. So we'll cover all those options now, starting with the basic one. As I mentioned, the basic pipeline is the simplest pipeline in GitLab: we have the jobs grouped into stages, as I mentioned, and the jobs that are inside the same stage run concurrently — so you have the build and the test, for example.
The jobs in each stage only start after all the jobs in the previous stage have completed. It is also the easiest pipeline to maintain.
All right, so here we have the structure of the basic pipeline. Again, considering the most common stages, we have the build, the test, and the deploy, and in this case we also have some options that we'll cover under rules — for example, when: delayed and when: on_failure.
These are options for when you want a job to run. For example, when: delayed delays the execution of the job for a specific duration, and when: on_failure runs the job only when at least one earlier job has failed. It's a very good option here, and it will definitely save you some time.
Then we have the icons, let's say: we have the green check, which marks a successful job, and then, for example, we have the test B job with the exclamation icon, which relates to allow_failure: true. And then this gear icon here is for a manual job, which is another option we'll cover — it is a rule you can set so that the job runs manually.
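As a small sketch of how those when options look in the YAML — the job names and scripts are placeholders:

notify-failure:
  stage: test
  script:
    - echo "An earlier job failed, sending a notification..."
  when: on_failure             # runs only if at least one earlier job failed

deploy-production:
  stage: deploy
  script:
    - echo "Deploying to production..."
  when: manual                 # shows up with the gear icon, started by hand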
Now going into some more complex architectures, here we have needs, or the directed acyclic graph — DAG, as some call it. So basically, we will now start covering the features to optimize your pipeline.
So imagine that the time to complete a whole stage can sometimes be a bottleneck and add more time than is needed. What the directed acyclic graph allows you to do is run the jobs out of order.
In case you don't need to follow a specific order for some specific change, you can skip that, and jobs can start immediately after their dependent jobs complete, even if some jobs in the previous stages are still running.
So basically, let's see that in more detail, because I guess that will be clearer. In this example here we have the build, the test, and the deploy stages. What happens in the basic pipeline is that the test stage will only start running its jobs once all the jobs in the build stage have completed. With a directed acyclic graph?
We don't need to do that. So in this case, for example, in the build stage we have the Android job and the iOS job. For the test stage to start the iOS tests, for example, they don't need to wait for the Android build to be completed, or vice versa. You can set that up, and that gives you more parallelism in your pipeline, especially for long pipelines — it will speed things up and save you some time. So, basically, here is how you do it.
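A minimal sketch of the needs keyword, following the Android/iOS example from the slide (the build and test commands are hypothetical placeholders):

build-android:
  stage: build
  script:
    - ./build-android.sh       # hypothetical build command

build-ios:
  stage: build
  script:
    - ./build-ios.sh           # hypothetical build command

test-android:
  stage: test
  needs: ["build-android"]     # starts as soon as build-android finishes
  script:
    - ./test-android.sh

test-ios:
  stage: test
  needs: ["build-ios"]         # does not wait for build-android
  script:
    - ./test-ios.sh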
And we have here the graphical visualization of the needs. I understand that for a more complex pipeline or a more complex project it would be a little bit more difficult to visualize, but if you select a job you can see the path — the relationships between all the jobs. You can access that under CI/CD > Pipelines, on the Needs tab, and see this graph, which is actually another way to have a visualization of your pipelines.
Now we'll go over the parent and child pipelines, which basically split a complex pipeline into multiple pipelines with a parent-child relationship. This definitely improves performance and allows them to run concurrently. Again, as I mentioned, this is very, very useful for monorepos, especially for those monorepos with a large number of projects — a repository hosting a large number of projects, where a single pipeline finishing triggers different automated processes.
All right, so here we have an example of how it looks. As you can see here, we have the downstream pipeline, which is the child pipeline. So what happens here?
The parent pipeline triggers YAML files within the same project. As an example, we have a monorepo here that deploys individual microservices — A, B, and C — and each one comes with its own child pipeline that differs based on the microservice. So, for example, the test stage will trigger a child pipeline, and we will trigger that when we run this pipeline.
A
So,
as
you
can
see
here
the
trigger
job
and
will
trigger
another
EML
file
in
another
project
that
it's
the
child
pipeline.
We now have an example of what the YAML file looks like when we are triggering a child pipeline. What we need to take into consideration is the trigger: include section, which defines the child configuration — the YAML file of the child pipeline that will run — and then the parent pipeline continues running after triggering, so both run together.
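A rough sketch of that parent-side configuration, assuming a child configuration file checked in at a hypothetical path:

microservice-a:
  trigger:
    include: microservice_a/.gitlab-ci.yml   # child pipeline config in the same project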
You can use local, remote, or template configuration files, up to a maximum of three child pipelines. And in this example here, in the build Linux job, you can see that there is the rules: changes option. We'll cover rules in more depth later, but you can already see that with this option you can tell the parent pipeline to trigger a child only under certain conditions.
So in this example here, we can see that the child pipeline will only be triggered when changes are made to files under the cpp_app path. This is another way to optimize your pipeline: you don't need to run it every time, only when there is a change, so it will save you some CI minutes.

For example, if you are using the shared GitLab runners, it will save you some CI minutes and, of course, add more efficiency to your pipeline.
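A hedged sketch of a trigger job gated by rules: changes, with a hypothetical path pattern based on the cpp_app example:

build-cpp-app:
  trigger:
    include: cpp_app/.gitlab-ci.yml   # child pipeline for this component
  rules:
    - changes:
        - cpp_app/**/*                # trigger only when these files change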
So here is an example. Instead of running a child pipeline from a static YAML file, you can define a job that runs your own script to generate a YAML file, which is then used to trigger a child pipeline. Again, this can be very powerful for generating pipelines targeting only the content that changed, or for building a matrix of targets or architectures.
So here we have this example, where we have the .gitlab-ci.yml file and then the generated YAML file: you place the generated YAML file in the job artifacts storage and then reference it later to actually run the pipeline. You can see here that the generated file is mentioned first in the artifact paths and then in the included artifact.
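A minimal sketch of that dynamic child pipeline pattern — the generator script name is a hypothetical placeholder:

generate-config:
  stage: build
  script:
    - ./generate-ci.sh > generated-config.yml   # hypothetical script that writes YAML
  artifacts:
    paths:
      - generated-config.yml                    # store the generated file as an artifact

trigger-generated:
  stage: test
  trigger:
    include:
      - artifact: generated-config.yml          # reference the artifact...
        job: generate-config                    # ...from the job that produced it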
Now we have the multi-project pipelines, where — as a good example here — your application, your build, is composed of different projects, so different parts of your application are hosted in different projects. The multi-project pipelines are a very useful option here to speed up your pipeline.
It still follows the same process: every project defines its own pipeline, and then these pipelines can be chained together to create what is essentially a much bigger pipeline — so a very, very useful feature for this purpose. Then we can see here the component project pipeline.
So basically, we have a component of this application, and then we have the full project pipeline, and here we are considering an upstream and a downstream. As you can also see on the pipeline status here, there is the downstream pipeline being triggered. It's also a nice way to check the status and all the dependencies.
Here in this basic example, let's say that this first component changed, and then the project pipelines should run. For that to happen, you again just need to use the trigger keyword, and that would trigger the full pipeline to run.
You can also use strategy: depend: if the earlier jobs in the pipeline are successful, a final job triggers a pipeline on a different project. This is also very, very useful for this purpose.
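A hedged sketch of a multi-project trigger — the downstream project path and branch are hypothetical:

trigger-downstream:
  stage: deploy
  trigger:
    project: my-group/full-project   # hypothetical downstream project
    branch: main
    strategy: depend                 # mirror the downstream pipeline's status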
Okay — now, sorry, that's me, oops. I'm just going further.
Sorry, sorry — my mouse just started clicking. Our next topic is variables. Now that we have covered all the functionality of pipelines, or rather the architectures of the pipelines, we can go more into the components of those pipelines — what we can actually use as another way to add more efficiency to each pipeline, always thinking about the best way to optimize and add more efficiency to your pipeline. So let's cover variables in a nutshell.
You can see here that you can have predefined CI/CD variables; you can also define those variables in a YAML file; and those variables can also be defined at the project, group, or instance settings level, which is very nice.
So, instead of going into each YAML file, for example, you can create those variables directly in the project settings, the group settings, or the instance settings. So what are those variables? They are values you can assign — dynamic values assigned to environments — allowing all your teams to customize the pipeline jobs.
You store values that you can then reuse without the need to hard-code them, which is very, very nice: instead of keeping your pipeline with a lot of hard-coded values, you can use variables for that. And then, as mentioned, you will see next that we have the option to set those variables in the project settings, at the project level, and in the YAML file. So basically, here is what we have, on the left.
We have the project settings — a very nice, very useful option to set variables at the project level, because you can mask those variables. So if it is a password, or some sensitive data that you of course don't want to have in your YAML file, you can mask those variables in the project settings. You can also set those variables to be used only on protected branches, for example, which adds more security to that data. And finally, you can also configure variables according to the environment.
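For the YAML side, a minimal sketch of defining and using a variable — the variable name and values are illustrative:

variables:
  DEPLOY_ENVIRONMENT: staging        # global default, available to all jobs

deploy-job:
  variables:
    DEPLOY_ENVIRONMENT: production   # job-level value overrides the global one
  script:
    - echo "Deploying to $DEPLOY_ENVIRONMENT"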
Then, inside variables, we have the inherited variables, which we can pass from one job to another job through dotenv inheritance. Those variables cannot be used as CI/CD variables to configure a pipeline, but they can be used in job scripts. The variable here is added to an artifact in the build job script.
So you can see here that there is a variable in the script of the build job, and it is saved to a .env file. Then this .env file is saved as an artifact, via artifacts: reports: dotenv, and it takes precedence; you can then call that variable's value from the deploy job, which runs afterwards.
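A minimal sketch of that dotenv inheritance pattern, with illustrative names:

build-job:
  stage: build
  script:
    - echo "BUILD_VERSION=1.0.0" >> build.env   # write the variable to a .env file
  artifacts:
    reports:
      dotenv: build.env                         # expose it to later jobs

deploy-job:
  stage: deploy
  script:
    - echo "Deploying version $BUILD_VERSION"   # inherited from build-job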
We then have pre-filled variables — a very, very useful feature to make it easier for your developers when there is the need to run a pipeline manually. The Run pipeline form will generate pre-filled variables for your pipeline, based on the variable definitions in your .gitlab-ci.yml file.
So basically, it's useful when overriding a variable or manually running a pipeline. And what is the main point it solves? When, for example, you need to run a pipeline manually, you would have to go to Run pipeline and pick from the dropdown menu with all the variables — and you would probably forget the variables and need to check some documentation.
You would need to go back and see which variables you need to add and what is included in this pipeline. What this feature does is pre-fill the variables with the values that were set in the YAML file beforehand. So it is something very useful to save time and remove some complexity from this process, which can sometimes be very, very complex.
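A short sketch of how a pre-filled variable is defined — the name, value, and description are illustrative:

variables:
  DEPLOY_TARGET:
    value: "staging"                  # pre-filled in the Run pipeline form
    description: "Environment to deploy to; set to 'production' for a release."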
Finally, we have how GitLab processes those variables: first the trigger variables and scheduled or manual pipeline run variables; then the project- and group-level inherited CI/CD variables; then the job-level YAML-defined variables; then the global YAML-defined variables; and then the deployment variables and predefined CI/CD variables. That is how the system processes those variables.
All right, now that we covered those components, we can jump to the rules. We already saw some rules before. The rules are options where you can actually tell your jobs, for example, when to run. You can also tell your jobs, for example, to be skipped, or that even if a job fails the pipeline can skip over it.
If a job is not that important, this is very helpful — very helpful for different scenarios, different cases — and it can also save you some time, as we have been talking about a lot in this webinar. So, to start, here is what we have.
We have a full overview — a basic overview — of when a pipeline will run, and when those rules come into action. So basically, a pipeline can run on a new commit, a new branch, a new tag, a new merge request, a manual run or API call, and you can also schedule a pipeline. So let's go over those rules a little bit more. Here we have a full overview of how they would be configured inside the YAML file.
The rules go inside the job block, and then we have the rules block, and you have different options. You can maybe take a screenshot here or save it, because it is one of the most useful things you have: you have the rules clause with if, changes, operators, exists, and the when option — all of this can be configured to tell your jobs what to do.
All right, so our next one: it is our first example, a basic example. In this one, the job will run for merge request pipelines and for both branch pipelines and tag pipelines. So when you use if: $CI_PIPELINE_SOURCE == "merge_request_event", you are telling it to run for merge request pipelines, and when you use if: $CI_PIPELINE_SOURCE == "push", you are saying to also run for both branch pipelines and tag pipelines.
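As a sketch, that first example would look roughly like this:

my-job:
  script:
    - echo "Running..."        # placeholder script
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"   # merge request pipelines
    - if: $CI_PIPELINE_SOURCE == "push"                  # branch and tag pipelines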
In our second example here, we actually want to prevent our job from running for merge request or scheduled pipelines. We achieve that by simply adding the when rule with never as its value. So you can see here the rules: if the pipeline source equals merge_request_event, or the pipeline source equals schedule, then when: never — saying: never run the job in those cases.
And finally, in the same example, the when: on_success rule tells the job to run if the previous stage was successful — so it will actually only run if the previous stage was successful, and never for the merge request and scheduled pipelines.
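A sketch of that second example — the exact pipeline sources follow the spoken description (merge request and scheduled), so treat them as an assumption:

my-job:
  script:
    - echo "Running..."        # placeholder script
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: never              # never run for merge request pipelines
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: never              # never run for scheduled pipelines
    - when: on_success         # otherwise, run if earlier stages succeeded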
Our third example: in this job, we are set up to create a container image.
However, as you can see here, I'm setting this job to run manually if the variable is equal to a specific string and the following files were changed. That example can be very useful for scanning tests, when changes to specific files do not require some security tests to run. So under those conditions the job becomes manual, and again, that speeds up your pipeline a little bit.
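A hedged sketch of that third example — the variable name, string, and file paths are hypothetical stand-ins:

build-image:
  script:
    - docker build -t my-image .          # hypothetical image build
  rules:
    - if: $BUILD_IMAGE == "true"          # hypothetical variable check
      changes:
        - Dockerfile                      # only when these files changed
      when: manual                        # run it by hand in that case
      allow_failure: true                 # don't block the pipeline while waiting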
Our last example of this rules series: in this case, our rule is conditioned on a commit to the master branch — so if: $CI_COMMIT_BRANCH == "master" — and the job would be delayed by three hours. And also, as we saw in some previous examples, we have allow_failure: true here, which means that the pipeline would not be blocked by this specific job.
So you can tell the next stage, or the whole pipeline, to continue running even if this job is failing — without blocking the whole pipeline.
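As a sketch, that last rules example would look roughly like this:

deploy-job:
  script:
    - echo "Deploying..."      # placeholder script
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
      when: delayed            # don't start immediately...
      start_in: 3 hours        # ...wait three hours
      allow_failure: true      # a failure here won't block the pipeline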
We now have the workflow rules, which control when the entire pipeline will run, and they sit outside of any job definition. So if the commit message contains "WIP", then it won't run the pipeline; if a tag was applied, then it also won't run; otherwise, it will run.
And finally, for workflow, we have the rules depending on variables. In this other option here, for the DOCKERFILE variable we've set different values in the rules, and then this pipeline will always run.
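A minimal sketch of workflow rules along the lines of the first example — matching the commit title on "WIP" is an assumption about how the slide's check was written:

workflow:
  rules:
    - if: $CI_COMMIT_TITLE =~ /WIP/   # skip work-in-progress commits
      when: never
    - if: $CI_COMMIT_TAG              # skip tag pipelines
      when: never
    - when: always                    # otherwise, run the pipeline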
Moving forward now to artifacts. Artifacts are what comes out of your build: the build and publish stages will always, as you know, generate files that you use for deploying your application, or that are the results of tests — this is all regarding the artifacts. So here you have again a full visualization of how it's configured: GitLab allows for saving of build artifacts, which are the output of any jobs.
You can also use those artifacts in subsequent jobs: subsequent jobs can pull in any combination of paths or files, and on a self-managed instance you can store them on local or object storage. You can then use rules within artifacts: exclude, for example, to limit what is added; dependencies to limit what gets downloaded by subsequent jobs; when to determine whether artifacts will be stored or not; and expire_in to determine when the artifacts will be destroyed.
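A short sketch of those artifact options in a job — the paths and commands are illustrative:

build-job:
  stage: build
  script:
    - ./build.sh               # hypothetical build command
  artifacts:
    paths:
      - dist/                  # what to save
    exclude:
      - dist/**/*.tmp          # what to leave out
    when: on_success           # store only if the job succeeds
    expire_in: 1 week          # destroy the artifacts after a week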
When it comes to downloading, you can download artifacts directly on the Pipelines page, you can download them directly on the Jobs page, and if you go to a specific job, you can download them from the artifact browser — and also through the GitLab package registry, if you are using it.
The artifact administration is now also available for SaaS users, which is something very useful: you can set it at the project level. So for SaaS, per namespace, you can set at the project level whether to keep artifacts from the most recent successful jobs or not. This will save you some storage space for sure.
So if you disable this option, the artifacts from the most recent jobs will not be kept. To change that project setting — to configure those options for the namespace — the person has to have at least Maintainer permission, or be the owner of the project, to change it. And for self-managed, at the instance level, you can be even more specific: this option is more customizable; you can set the default artifact expiration, for example.
Here you can see that it was seven days — also very useful, especially to save some space and to have something cleaner on your side.
GitLab offers a series of registries, as you can see: the package registry with the language-specific registries — npm, NuGet, PyPI — the dependency proxy, and the Docker container registry. And of course, for that I would definitely recommend you go to our documentation; there you will find more details and more information about it if you are willing to use the GitLab package registry.
Dependencies for artifacts are very, very useful to reduce the number of artifacts passed from previous jobs and to improve job performance; an empty array will skip downloading the artifacts entirely, and artifacts are only passed from jobs in previous stages. So by default, artifacts from previous stages are passed to the next stage, but you can control it by using the dependencies keyword: it defines a list of jobs to fetch artifacts from, and you can also set a job to download no artifacts at all.

So this is for when you want to keep things cleaner and more efficient.
You can also set a job to not download any artifacts at all. You define dependencies in the context of the job and pass a list of all the previous jobs from which the artifacts should be downloaded — only jobs from stages that are executed before the current one, the one where you are using the dependencies keyword. And when we define dependencies with an empty array, it will skip downloading any artifacts for that job.
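A minimal sketch of the dependencies keyword — the job names and scripts are illustrative:

test-job:
  stage: test
  dependencies:
    - build-job                # fetch artifacts only from build-job
  script:
    - ./run-tests.sh           # hypothetical test command

lint-job:
  stage: test
  dependencies: []             # empty array: download no artifacts at all
  script:
    - ./run-lint.sh            # hypothetical lint command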
And finally, to conclude our webinar today, we have include and extends, which are among the most useful features here.
Of all the webinar content today, this is definitely very, very useful: it breaks the CI/CD configuration into multiple files and increases readability for long configuration files, and it also helps to avoid duplicated configuration — for example, global default variables for a project. So, very, very useful in general: it allows the inclusion of external YAML files, you can split one long YAML file into multiple files to increase readability, as I mentioned, and you can also store template files in a central repository and include them into the projects.
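A short sketch of the include methods — the project and file paths are hypothetical:

include:
  - local: /templates/build.yml              # a file in the same project
  - project: my-group/ci-templates           # hypothetical central template repository
    file: /templates/deploy.yml
  - template: Security/SAST.gitlab-ci.yml    # a template provided by GitLab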
So here you have the methods to do it, including the template includes, which are provided by GitLab itself.
We then have extends, which is similar to YAML anchors, but a little bit more flexible and readable. It will allow you to enhance and reuse configuration sections.
So
if
you
know
about
emo
are
already
a
little
bit
about
yemo
anchors.
You
know
that
is
used
to
duplicate
or
an
inherit
content
across
your
emo
files,
but
they
are
only
valid
in
the
file.
They
were
defining
that
that
is
where
extends
comes
in.
You can inherit up to eleven levels, but here on our side we recommend no more than three. And what it does is merge the configuration from .tests with the respective job.
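A minimal sketch of extends merging a hidden .tests template into a job — the setup and test commands are illustrative:

.tests:                        # hidden job, used only as a template
  stage: test
  before_script:
    - ./setup.sh               # hypothetical shared setup step

rspec-job:
  extends: .tests              # merges the .tests configuration into this job
  script:
    - bundle exec rspec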
So it's very, very useful, and then, in the end, you can also combine both: you can combine the includes and extends together.
This works across configuration files: one uses an include together with an extends. So in this example here, you have an external included file with a nice little script, and you can include that in your .gitlab-ci.yml. So now, instead of having to copy and paste a complicated .gitlab-ci.yml file, you can create a job that extends it.
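A hedged sketch of that combination — the template repository path and deploy script are hypothetical:

# templates/common.yml, stored in a central repository
.deploy-template:
  stage: deploy
  script:
    - ./deploy.sh              # hypothetical deploy script

# .gitlab-ci.yml in your project
include:
  - project: my-group/ci-templates
    file: /templates/common.yml

deploy-staging:
  extends: .deploy-template    # reuse the included template
  environment: staging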
So it is also a very, very helpful feature here.
So again, I would like to thank you. We'll open now the section for Q&A. I see here that we already have some Q&As.
Let's see — yeah, I guess all of them were answered, right, Karina? We have a lot of questions here. Yes, thank you. Thank you again, Karina, and thank you, everybody, for joining today. If you have any questions, you can always contact a GitLab team member, and you can also check the next webinars on our web page — there will be more coming in the next months. Thank you.