From YouTube: Advanced CI/CD with GitLab - EMEA Webinar
Description
Expand your CI/CD knowledge while we cover advanced topics that will accelerate your efficiency using GitLab, such as pipelines, variables, rules, artifacts, and more. This session is intended for those who have used CI/CD in the past.
Good morning, everyone, and thank you for joining today. I'm just waiting a minute or so for other people to join, and then we can start. You can grab some water or some coffee, and in one minute we will begin.
All right, I think we can start. Welcome, everybody, to today's Advanced CI/CD webinar. Helping me today are Osnet and Hikaru, my colleagues from the CSM team. Let me introduce myself again for those who do not know me: my name is Mariana, I'm a Customer Success Manager at GitLab covering the DACH region, and I'm based in Hamburg. Osnet and Hikaru will be helping me with the questions.
Before we start: at the bottom of the presentation you will see the Q&A section, so you can drop your questions there, and Osnet and Hikaru will be more than glad to support you during the session. If there are any remaining questions by the end of the webinar, we will cover them then as well. All right.
Now for today's agenda. Before covering it, it is important to mention that this webinar is intended for those who already have some knowledge of GitLab, especially of pipelines and the CI configuration, and who are already working with them. So not all of this information will be new to you, but it should build on the knowledge you already have. I also want to highlight that in this session we do not plan to cover runner configuration or the CD part. Basically, we will go through the basics of CI/CD, pipeline structure and architecture, variables, how you can control your jobs using rules, a little bit about artifacts, and finally some components, which includes include and extends. All right.
So I know this may be a familiar graphic for you, but it's very important to mention it and to go through the process we recommend here at GitLab. It doesn't mean that you cannot follow your own approach, or something your team is already used to; this is simply what we recommend in order to get the best value from GitLab across the whole DevOps cycle. Basically, we start by creating an issue.
Let's say you and your team are discussing implementing a new feature or changing something in your code. You would create an issue, and this issue would be assigned to the team. Once you create a merge request, a new branch is created; in this case the new branch is the feature branch, where your team will be working. So you have the default branch, which you can consider your production branch, and then you have the feature branch.
Then, once the pipeline has completed, you can discuss the results and any findings. If any vulnerabilities were found, you can discuss them with your team, and if everything is okay you can approve the merge request. You can also assign approvals to specific people, for example managers, so you can designate this step to some dedicated people before the merge request is merged.
Then we have the other part of the process, which is the CD pipeline. It runs the other security scans that take place after the deployments, followed by the update of the security dashboards.
So you have your code in a project and you want to commit it, and if necessary add other related code, which is also possible, for example within the same repository. Once it is committed, the CI pipeline is triggered. It builds your application using GitLab Runners, which you will see in the next slide, and runs unit and integration tests to check that the code is valid.
Now that we have discussed CI/CD, here is a brief explanation of the anatomy of a CI/CD build. First we have the pipeline. Then we have the stages, which are groups of jobs that define when to run the jobs; for example, a stage that runs tests after a stage that compiles the code. Then we have the jobs, which define what the pipeline will do in each specific part; for example, jobs that compile or test code.
A
In
this
case,
the
runners,
the
kitty
lab
Runners,
execute
the
actions
declared
on
the
jobs
you
can
have
as
many
Runners
as
as
you
want
and
yeah
multiple
import.
Also
here,
multiple
jobs
in
the
same
stage
are
executed
in
parallel.
If
there
are
enough
concurrent
runners
and
then
finally,
we
have
the
environments,
it's
where
you
would
deploy-
and
you
have
some
example
here-
test
review,
staging
Canary
deployments
as
I
mentioned,
and
production.
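The anatomy described above can be sketched in a minimal .gitlab-ci.yml. This is only an illustration; the stage and job names are placeholders, not from the slides:

```yaml
# A minimal pipeline: three stages, with jobs grouped under them.
stages:
  - build
  - test
  - deploy

compile:
  stage: build              # jobs in "build" run first
  script:
    - echo "Compiling the code..."

unit-test:
  stage: test               # runs after all "build" jobs succeed
  script:
    - echo "Running unit tests..."

deploy-prod:
  stage: deploy
  environment: production   # ties the job to an environment
  script:
    - echo "Deploying..."
```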
Now we will talk a little bit more about pipeline architectures. The main idea, the goal here, is to apply some features, some different architectures, in order to add more efficiency to your pipelines and your whole process; maybe saving some time, making it faster and better for your development. Today we will cover these five architecture types: the basic pipeline; the directed acyclic graph, DAG, or "needs" as it is sometimes called; the parent and child pipelines, which is one of the best-known architectures; the dynamic child pipelines; and multi-project pipelines, which is also one of the best known among these architectures.
Starting with basic pipelines, it is important to mention that this is the simplest pipeline in GitLab. The jobs run independently, sometimes on different runners, and all jobs in a stage must complete successfully before proceeding to the next stage.
Here is how you visualize it: this is the pipeline graph, in this case for the basic type. We have the stages, build, test, and deploy, and under those stages you can see the jobs that were defined in the pipeline. The icons have a meaning, which will also depend on how you configured your pipeline and which type of conditions you are using. For example, for the test job here you can see that we set the job to allow failure. We will cover those keywords and rules later, but it is one possibility, and it actually adds some efficiency to your pipeline: if the job is not so important and is allowed to fail, you are not blocking your pipeline on it. In the deploy stage, the deploy job has this gear icon, which means that the job needs to be run manually, so you need to actually go and click the play button to run it.
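Those two behaviors roughly map to the allow_failure and when keywords. A small sketch, with placeholder job names:

```yaml
optional-test:
  stage: test
  script:
    - ./run-optional-checks.sh   # illustrative command
  allow_failure: true            # pipeline continues even if this job fails

deploy:
  stage: deploy
  script:
    - ./deploy.sh                # illustrative command
  when: manual                   # shown with a play button; runs only when clicked
```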
Now we go into a more specific, more detailed architecture here: needs. It is a good fit if you have long stages in your pipeline that are taking too long to complete; needs, or DAG as I like to call it, will speed up the process. Basically, with it you can set dependencies between jobs of different stages, allowing jobs to run out of order.
We saw before that in the basic architecture the jobs follow an order: a job in the next stage will only start if all the previous stages completed successfully. With the needs keyword we change this behavior, telling jobs that they can start running in a different order, which again is another way to save time in your pipeline. So if you're looking to reduce a pipeline's runtime, needs, the DAG, is actually a very good option.
Here we have an example of implementing needs. First we have the visualization of the pipeline, and then our YAML file. We have builds of Android, iOS, and web apps in a multi-stage pipeline. Normally the test stage would run once all jobs of the build stage are completed; however, here we configured some dependencies, so the iOS test job can start while the Android builds are still running. It does not need to wait for the Android job to complete; it can already start, and all we added was the needs keyword. Just an important note: be aware that jobs cannot depend on artifacts from previous stages that are not listed as dependent jobs. This is one thing you need to be aware of.
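A hedged sketch of that setup; the job names follow the slide's description, and the build commands are assumptions:

```yaml
stages:
  - build
  - test

build_android:
  stage: build
  script:
    - ./gradlew assemble       # placeholder Android build

build_ios:
  stage: build
  script:
    - xcodebuild build         # placeholder iOS build

test_ios:
  stage: test
  needs: [build_ios]           # starts as soon as build_ios finishes,
  script:                      # even if build_android is still running
    - xcodebuild test
```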
Here is a graphical visualization of the DAG. You can access it under CI/CD, Pipelines; at the top you have the Needs tab, and once you select a path you will see all the dependencies, how one job depends on the others. It can also be a good way to understand if there is something else you can do, or whether the setup matches your goals.
A
Configuration
for
the
single
Global
pipeline
because
becomes
hard
to
manage,
especially
in
here
for
mono,
Rebels
hosting
large
numbers
of
projects
and
a
single
pipeline
definition
is
used
to
trigger
different
automated
process
for
different
components
and
the
pipeline
user.
Experience
has
too
many
jobs
and
Stage
to
work
with.
So
with
the
child
pipelines,
you
can
split
complex
pipelines
into
multiple
pipelines,
allowing
child
pipelines
to
run
concurrently.
Here we have an example, and again the visualization, which is a little bit different from the previous one, because we can see here that we have the downstream pipeline, and the other one is the upstream. So what does that mean, what is actually happening here? The parent pipeline triggers other YAML files within the same project, and when it does, another pipeline starts running, which is the downstream one.
In this specific example, we have a monorepo that is deploying microservices A, B, and C. You cannot see the labels, but it is A, B, and C, and each one has its own pipeline that differs based on the microservice's technology and needs. So you can see in the downstream that each one has different jobs; we have different child pipelines that start running.
Here is another example of the architecture, again for a monorepo. Here we can see the YAML file configured when we use child pipelines; specifically, the parent pipeline triggers the child pipelines, so the two jobs, build windows and build linux, are triggering other YAML files. The parent pipeline continues running after it has triggered the children. We will cover rules soon, as I mentioned before, but just to give you a little context, since we are seeing some rules here: you can use them to trigger a child pipeline under certain conditions as well. The child pipeline here only triggers when changes are made to files in the cpp_app folder. So this is also a very good option for you: if the child pipeline only needs to run when there is such a change, you can use rules, as we added here, and also combine both.
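A hedged sketch of a parent job triggering a child pipeline only on changes to a subfolder; the paths and file names are illustrative, not taken from the slide:

```yaml
# Parent pipeline (.gitlab-ci.yml)
build_linux:
  trigger:
    include: cpp_app/linux-pipeline.yml   # child pipeline config in the same repo
  rules:
    - changes:
        - cpp_app/**/*                    # only trigger when these files change
```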
We now have the dynamic pipelines. Sometimes we refer to them as dynamic child pipelines, because they are a kind of further evolution of the child pipelines. With them, you dynamically generate the child configuration file from the pipeline itself, so it is a little bit different, because we are dynamically generating the YAML files. It allows you to generate configuration for your applications, pass variables to those files, and much more. This is also a great addition for passing variables, and one of the main benefits is that it keeps repositories clean of scattered YAML files.
We have an example of how the dynamic pipelines work. Instead of running a child pipeline from a static YAML file, you can define a job that runs your own script to generate a YAML file, which is then used to trigger a child pipeline. So in our example here, the setup job executes a script that produces the child pipeline configuration and stores it as an artifact. Then the test job reads the stored artifact and uses it as the configuration for the child pipeline.
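A minimal sketch of that pattern; the generator script and file names are assumptions:

```yaml
setup:
  stage: build
  script:
    - ./generate-ci.sh > generated-pipeline.yml   # your script emits YAML
  artifacts:
    paths:
      - generated-pipeline.yml

test:
  stage: test
  trigger:
    include:
      - artifact: generated-pipeline.yml   # use the generated file...
        job: setup                         # ...from the setup job's artifacts
```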
Next, multi-project pipelines. Basically, a pipeline in one project can trigger downstream pipelines in another project, and can also pass variables to the downstream pipeline. So here we are talking about different projects, not the same repository. This is very useful when building and deploying larger applications that are made up of different components, each with its own project and build pipeline. So if you don't want to use monorepos, for example, and your application is split across different projects, this is also a very useful feature for you.
Again we have an example of what the YAML file looks like, and also the visualization of this pipeline after it completed. To simplify, let's consider a simple case where we ask another project to run a service for our pipeline. You can imagine an app that is divided into multiple repositories, each hosting an independent component of the app. When one of the components changes, that project's pipeline runs; if the early jobs in the pipeline are successful, a final job triggers a pipeline on a different project, which is the project responsible for building, running smoke tests, and deploying the whole app. Note that the trigger strategy keyword forces the trigger job, deploy, to wait for the downstream pipeline to complete before it is marked as a success. If the component pipeline fails because of a bug, the process is interrupted and there is no need to trigger a pipeline for the main app project.
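A hedged sketch of such a cross-project trigger; the project path and variable name are placeholders:

```yaml
deploy:
  stage: deploy
  variables:
    COMPONENT_SHA: $CI_COMMIT_SHA   # passed down to the triggered pipeline
  trigger:
    project: my-group/main-app      # a different project, not this repository
    strategy: depend                # wait for the downstream result before success
```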
Now we will cover variables.
Variables are also a very useful feature, one that you are probably already used to if you are already using CI/CD, applying it in different processes and parts of your development. So let's cover some important information about them. Just as basic information: CI/CD variables are a type of environment variable, and you can use them to control the behavior of jobs and pipelines, store values that you want to reuse later, and avoid hard-coding values in your YAML file.
Here is an example of how you can define those variables: you create a variable in your YAML file, defining it with the variables keyword. Please note that variables saved in the YAML file are visible to all users with access to the repository and should store no sensitive project configuration; so no access tokens, no passwords, nothing at that level of sensitivity.
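A minimal sketch of the variables keyword; the names and values are illustrative, and deliberately non-sensitive:

```yaml
variables:
  DEPLOY_ENVIRONMENT: staging   # fine to keep in YAML: not a secret
  APP_VERSION: "1.4.2"

show-version:
  script:
    - echo "Deploying $APP_VERSION to $DEPLOY_ENVIRONMENT"
```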
For those sensitive values, you can configure them directly in the project settings, because that is another feature: you can hide the value, as you see here in the UI project settings, and only some specific roles have access to it.
In this specific example, the build job's script saves the variable in a .env file, then saves the .env file as an artifact, and jobs in later stages can then use those variables in their scripts. As a note here, you can also pass the dotenv variables down to a downstream pipeline; in the multi-project architecture that we mentioned, variables can be passed through to other projects' pipelines. Here is an example.
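A hedged sketch of the dotenv pattern; the variable name and file name are assumptions:

```yaml
build:
  stage: build
  script:
    - echo "BUILD_VERSION=1.4.2" >> build.env   # write variables to a dotenv file
  artifacts:
    reports:
      dotenv: build.env   # exposes BUILD_VERSION to jobs in later stages

deploy:
  stage: deploy
  script:
    - echo "Deploying version $BUILD_VERSION"   # available automatically
```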
We also have the prefilled variables, a very useful feature and option, useful when overriding a variable while manually running a pipeline. How do they help? When you are, for example, running a pipeline manually, users do not need to select all the required variables from a dropdown menu and pick which ones they need; the form will generate prefilled variables for your pipeline, based on the variable definitions in your YAML file.
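A small sketch of a variable that shows up prefilled in the manual-run form; the name, value, and description text are placeholders:

```yaml
variables:
  DEPLOY_TARGET:
    value: "staging"                          # default prefilled in the form
    description: "Environment to deploy to"   # help text shown next to the field
```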
Very quickly, the order of precedence of variables: since we saw that there are different types and so many options with variables, I recommend you check our documentation, but just to clarify, there is a defined order of precedence, starting from the most important one.
Now the rules. We have talked about rules already and I gave you some examples, but we will cover more examples and how you can actually apply rules to both your jobs and your pipelines. I usually like to say that rules are a good option for deciding when your jobs run and when your pipelines run. So for those who are looking to add more efficiency to their pipelines, the rules are for sure very important here and should be considered as an option.
First, when do pipelines run? They run on a new commit, a new branch, a new tag, a new merge request, manually, through an API call, or on a schedule. Those are the basic options for when a pipeline will run.
This is the common structure for rules under a job. When you use rules for a job, the if clause is one of the most used ones, and it takes a configured or predefined variable as an input. I have also separated here the different rule options you have: the clauses, operators, results, and when options, so you can use them depending on what makes sense for your case, or the need that you have.
Starting with the rules for jobs: with rules you can determine when a job should run or when it should be excluded. It saves a lot of time, especially when heavy jobs do not need to run every time. In this slide we have the list of predefined CI/CD variables that you can use, and the pipeline types that the variables can control: branch pipelines, which run for a git push to a branch, like new commits or tags;
A
The
tag
pipelines
that
run
only
when
a
new
GitHub
is
positioned
to
a
branch.
The
merge
requires
pipelines
that
run
for
change
to
a
merge
requests
like
new,
commits
or
scheduling
the
Run
pipeline,
and
they
schedule
Pipelines
in
this
example
here.
So
we
can
see
that
the
job
one
will
run
for
merge,
requests,
pipelines
and
the
schedule
request,
pipeline
schedule
pipelines,
but
not
for
branch
or
tag
pipeline.
A
So
we
are
using
the
CI
pipeline
underlying
source
and
with
the
the
value
assigned
to
the
skew,
to
this
variable
will
then
tell
to
this
job
when
it
should
run.
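A hedged sketch of that job, assuming a placeholder job name:

```yaml
job1:
  script:
    - echo "Running..."
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"   # merge request pipelines
    - if: $CI_PIPELINE_SOURCE == "schedule"              # scheduled pipelines
    # no rule matches branch or tag pushes, so the job is excluded there
```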
And finally, the when: on_success rule tells the job to run if the previous stage was successful.
In our second example, the job is set to create a container image; however, it will only be manually triggered if the variable is equal to the specific string and the listed files were changed. This example can be very useful for scanning tests, when changes to certain files do not require some security tests to run again. This can optimize your pipeline and speed up your development process.
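A sketch of combining an if clause, a changes clause, and a manual gate in one rule; the condition, file path, and job name are assumptions:

```yaml
build-image:
  script:
    - docker build -t my-image .        # illustrative build step
  rules:
    - if: $CI_COMMIT_BRANCH == "main"   # placeholder condition
      changes:
        - Dockerfile                    # only when these files changed
      when: manual                      # and even then, wait for a click
```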
The other part that we would like to cover with rules: as I said before, we can use rules to tell jobs when to run, and also to tell pipelines when to run. Now it is the turn of workflow rules, with which we tell the pipeline when it should run. In our slide here we have the common if clauses for workflow rules, the examples and details of how they can be applied, and our YAML file example in the box.
Here we prevent pipelines for schedules and for push (branch and tag) pipelines, and then the final rule, when: always, runs all other pipeline types, including merge request pipelines.
A
We
have
for
the
workflows
we
have
variables
depending
on
which
is
in
this
case
it's
our
last
case,
and
it
shows
that
if
the
commit
message
contains
this
week,
then
it
won't
run
the
pipeline
and
if
attack
is
applied
then
it
also
won't
run.
Otherwise
it
will
go.
Our next section is artifacts. Again, I'm sure you are used to them, but let's just go over some basic concepts and how you can manage them in your GitLab platform. Artifacts are files stored on the GitLab server after a job is executed. Subsequent jobs will then download the artifacts before script execution; so, for example, a build stage will generate files that the deploy stage will use to deploy your application.
What GitLab allows here is saving the artifacts in your local or object storage, and you can then use them in the same pipeline in subsequent jobs. As before, you can also apply rules here; almost everything we are talking about can be combined somehow, and the idea is to combine them, get the best value from GitLab, and, of course, add some efficiency to your pipeline.
For the rules, I have added here the options that you have, exclude, dependencies, when, and expire_in, and how they behave in this case for the artifacts.
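A hedged sketch of those artifact options together; the paths and periods are illustrative:

```yaml
build:
  script:
    - make build          # illustrative build step
  artifacts:
    paths:
      - dist/             # what to keep
    exclude:
      - dist/**/*.tmp     # what to leave out
    when: on_success      # only keep artifacts if the job succeeds
    expire_in: 1 week     # auto-delete after this period
```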
A
Where
you
can
download
in
the
gitlab
UI,
so
you
can
download
it
on
the
pipelines
page
on
the
jobs
page
all
in
a
specific
job,
and
in
this
case,
if
you'd
like
to
consider
the
artifact
browser.
It's
only
if
you
are
using
a
ticket
lab
package
registry.
Now the administration: there are two types of options here, depending on whether you are on SaaS or on self-managed. On SaaS, the setting is present at the project level, here on the left, so you can decide whether to keep the artifacts for the most recent successful jobs.
We also have the packages and registries, and as mentioned there are some definitions and dependencies. In our table we list the supported packages: the package registry, a private or public registry for a variety of common package managers; the container registry, a secure and private registry for container images, built on the Docker registry; the infrastructure registry, which currently supports Terraform; and the dependency proxy, a local proxy for frequently used upstream Docker images.
And now, with dependencies, you can control the artifact download behavior in your pipeline jobs. It is again a way to control whether other jobs will download those artifacts or not, because by default, jobs in later stages automatically download all the artifacts created by jobs in earlier stages.
You can also set a job to download no artifacts at all, if that is what you need. In our example here, we define two jobs with artifacts, build osx and build linux. So when test osx is executed, the artifacts from build osx are downloaded and extracted.
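A hedged sketch of that behavior; the commands are placeholders:

```yaml
build osx:
  stage: build
  script:
    - make build-osx     # illustrative build step
  artifacts:
    paths:
      - binaries/

test osx:
  stage: test
  script:
    - make test-osx
  dependencies:
    - build osx          # download only this job's artifacts

lint:
  stage: test
  script:
    - make lint
  dependencies: []       # download no artifacts at all
```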
Two additional details here: the job status does not matter, so if a job fails, or if it's a manual job that wasn't triggered, no error occurs. And if the artifacts of a dependent job are expired or deleted, then the job fails.
Our last topic for today is the include and extends keywords, with some examples and explanation of how you can take advantage of those options in your pipeline, also by combining them. We already saw include before in our child pipelines, but you can also use it in other cases and other scenarios, and that is what we will cover today. There is a lot of information in this slide, but I thought it would be important to mention it.
As I mentioned, we saw include before, but you can also use it in other ways. What include actually does is pull external YAML into your configuration, shaping the pipeline's behavior.
So here we have some options. include:local is used to include a file that is in the same repository as the configuration file containing the include keyword.
A
The
second
one
is
the
project
and
the
file
so
to
include
files
from
another
private
project.
On
the
same
git
lab
instance,
you
can
use
both
the
included
project
and
included
file
you
can
use.
They
include
remote
with
a
full,
your
url,
to
include
a
file
from
a
different
location,
and,
finally,
you
can
use
the
included
template
to
include
gitlab
emo
files
templates.
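A sketch of the four include variants in one file; the paths, project name, and URL are placeholders:

```yaml
include:
  - local: ci/common.yml                       # same repository
  - project: my-group/shared-pipelines         # another project on this instance
    file: templates/deploy.yml
  - remote: https://example.com/ci/extra.yml   # any reachable URL
  - template: Security/SAST.gitlab-ci.yml      # a template shipped with GitLab
```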
A
This
is
a
very
if,
if
some
of
you
are
already
using
security
Futures,
you
would
see
that
our
security
most
of
our
security
scanners,
you
can
use
them
by
adding
the
template
to
your
emo
file.
Now extends. So include is used for pipeline behavior, and extends is used to configure jobs. It is an alternative to YAML anchors, but more flexible and more readable.
With anchors, it is easy to duplicate content across your document or inherit properties; extends will allow you to enhance and reuse configuration sections. In this example, the rspec job uses the configuration from the .tests template. Because it has the dot in front, it is considered a hidden job; hidden jobs are only used for inheritance and don't run on their own.
So the extends keyword means: inherit the referenced block and override any hashes that are listed here. What happens in the end? When GitLab creates the pipeline, it performs a reverse deep merge based on the keys, merging the .tests content with the rspec job; note that it doesn't merge the values of array keys. You can see the configuration before and what it becomes afterwards.
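A hedged sketch of a hidden job and extends; the variable and commands are illustrative:

```yaml
.tests:                    # hidden job: never runs on its own
  stage: test
  variables:
    RAILS_ENV: test

rspec:
  extends: .tests          # inherit everything from .tests...
  variables:
    RAILS_ENV: rspec       # ...and override this single key
  script:
    - bundle exec rspec
```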
Our last slide for today's session is the option of using include and extends together; we can use them across configuration files. So in this example, you have a short script in the included YAML file, and now, instead of copy-pasting and complicating your YAML file, you can create a job that extends that .template job, only saying what image to run it on. So you can also see here that the file includes template.yml and the job extends .template.
A
It
is,
and
is
what
will
simplify
your
emo
file.
Now we have time for more questions. I will wait a little bit more; if you have any other questions, you can drop them here in the Q&A section. I see there is maybe one more outstanding one, which my colleague Hikaru has just answered, and I will give just a few minutes more if you want to add more questions. You will receive an email with the content, the slide deck, and the recording of the session.
So if you have any questions, or would like to go back and watch it again, you will receive that in a couple of days. I really appreciate everybody's time here today. Thank you very much, and please stay tuned for the next webinars that we will host. Next week we have the DevSecOps Compliance Workshop, so I invite everyone interested in this topic to join us, and next month we will have more. Thank you very much, and I wish you a great day.