From YouTube: Advanced CI/CD with GitLab
Description
Expand your CI/CD knowledge while we cover advanced topics that will accelerate your efficiency using GitLab, such as pipelines, variables, rules, artifacts, and more. This session is intended for those who have used CI/CD in the past.
A
Hi everyone, welcome to our webinar session. Today we're going to give people just another minute or so to jump in, and then we'll get started.
A
All right, let's kick it off! Thank you, everyone, for joining us today. We're excited to be going through our Advanced CI/CD content with you. Before I kick it over to Conley, our presenter, I just wanted to go through a couple of housekeeping items. First off, this webinar is being recorded, so you can plan to receive that in your inboxes in the next couple of days; you can look forward to that. In addition, if you have any questions that come up throughout the session, please put those in the Q&A portion of your Zoom window. We'll have time to answer those throughout, and Conley will have some time at the end to answer some of them live as well. And with that, I will kick it over to Conley.
B
All right, we got you, so yeah, thanks. I appreciate that, Taylor, and thanks so much, everyone, for joining us today. I wanted to say, first and foremost, that I want to start with the most important thing here, which is you all: this presentation, and the work that Taylor's team does, is all in hopes of providing you more value for your GitLab subscription. So that means, if you've got a question, definitely don't hesitate to put that in the chat. I'm going to save some time, about 10 minutes or so, at the end for Q&A, so we'll address some of these questions live, and we also have a couple of folks from GitLab that are going to be answering them throughout. So don't be shy. My name is Conley Rogers; I'm a senior technical account manager for our strategic enterprise accounts, and I'm coming to you from Atlanta, Georgia.
B
Variables are super important and foundational, so we're going to hit those first, so that you can understand rules and how to control your pipelines better. Then we'll look at controlling when your jobs run; how to deal with artifacts, since as you're building code you're storing, in artifact storage, the binaries that you're going to need for testing and deploying your code; and then assembling pipelines from components, kind of modularizing your CI/CD jobs to be most effective and save a lot of time.
B
A couple of things that are out of scope: detailed setup and configuration for CI/CD and runners. We've got a course just on that, so you can look that up in our Professional Services catalog, as well as in-depth usage best practices for project management; we've got a course just on that too. And then, if you are a systems administrator and you are just trying to understand how to spin up GitLab, troubleshoot it, upgrade it, etc., we've got our own sysadmin training for that as well.
B
So those are a couple of things that are out of scope today; I just wanted to get that disclaimer in before we dive right in. All right, let's get into it. Let's start with what we call the GitLab flow. It probably sounds like a branching strategy (GitHub flow, GitLab flow, and Git flow are all branching strategies), but this is more of a process. I know this is a webinar about CI/CD, but I wanted to start with the bigger picture in mind, because, beyond the technology of the pipelines itself, it's a process to ensure that you get the most from your investment in GitLab, as well as to improve collaboration across your engineering team and other personas, like security and change management, as you actually are writing and deploying code. So that's why we wanted to show this first.
B
So this is the flow process that we recommend for DevOps teams to follow while using GitLab capabilities within a concurrent development lifecycle. You start with defining and managing the requirements, the desired issue that you're working on, whether it's technical debt, features and bugs, or a security, risk, or compliance type of issue.
B
Instead of creating your branch and then just going off, making commits on that branch, and pushing it back to the server, we actually encourage, and even have prompts in the UI, to immediately create a merge request after you create that branch. This works for more than just the GitLab flow branching strategy; it works for many branching strategies, it's kind of agnostic of that.
B
The reason that we do that is so that it becomes evident to the rest of the team that, yes, I picked this issue up, and yes, I also have a work-in-progress merge request. So you can immediately start to bring in the right stakeholders, and they can see when you push code, which will automatically trigger our CI/CD pipelines by default, unless you configure it otherwise, so that every change is running basic linting and static analysis and you're getting really fast feedback loops.
B
That's just from the developer perspective. Once it's ready to be reviewed by, say, a senior developer that has maintainer privileges, then they can do so and be brought in with a single @-mention, and they can see the full history of commits and pushes that have taken place right there in the MR.
B
Then it'll spin up a live, ephemeral instance in a non-production environment, if that's the type of application that you're developing and it can do such, so that we can perform dynamic security testing on a running instance and test for cross-site scripting and dynamic attacks like that. Once it's looking good and passing all those checks, the appropriate approvals and separation of duties would actually come in at this point. So if there are any findings in those scans, it may require an additional person from a security group to come in and either approve it or review it, and then you land in your default branch. That would be once that merge request is accepted.
B
It runs the CD workflow, packaging that up into a release and then deploying it into your environment of choice, whether that's staging, then to production, what have you. So I wanted to start there just to give the bigger picture, and now let's zoom in on the CI/CD elements of what we just saw. The CI pipeline is where you're building your application. Behind the scenes we're using GitLab Runners, which I'll talk about in just a second. Then you're running unit and integration tests on that package to make sure that everything is looking valid.
B
You can also look at a live preview, so that's getting into the review app: a live preview of your development branch before you even send it to your default branch, and before merging it into a stable version of your application. Then you can deploy to multiple environments, like staging and production.
The environments are where you're going to be actually deploying, so each one points to a deployment destination. It's all in one file, versioned and stored in the project that it pertains to, at its most basic function. The last piece is the GitLab Runner. That's the infrastructure piece that executes everything you see on the left side. You can have as many runners as you like, as many as you need to handle the volume and load from your engineering department.
B
You can even use your PC if you're trying to test certain things out or you have very niche use cases. Just keep in mind that means members of the project will be using your laptop as a runner, so it's not made for production. But if you're just trying to get a certification and get very familiar with setting up GitLab CI, it's a great option so you don't have to go provision something in the cloud or with your ops teams.
B
This is what that would look like: a basic build-test-deploy pipeline here, with the jobs defined. You can see that if the build itself is successful, you then start to run a series of tests, probably some unit testing and integration testing. There's some logic here, allow_failure set to true, so that the pipeline can proceed even if it failed those tests; that is optional, and you can configure it. And then there's the deployment.
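A minimal sketch of the kind of configuration being described; the job names and echo scripts are illustrative, not taken from the slides:

```yaml
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - echo "Compiling the application..."

unit-test-job:
  stage: test
  script:
    - echo "Running unit tests..."

lint-job:
  stage: test
  allow_failure: true   # the pipeline proceeds even if this job fails
  script:
    - echo "Running lint checks..."
```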
B
It
could
be
very
it's
a
common
practice
around
deploying
to
environments
that
you
have
to
have
the
right
permissions.
A
lot
of
requirements
for
separation
of
duties
say
that
the
person
who
pushes
the
code
can't
deploy
the
code.
So
this
is
how
you
would
handle
a
scenario
like
that,
so
that,
when
you
are
deploying
into
a
production
or
even
uat
or
staging
environment,
that
it
waits
for
somebody
with
the
right
permission
to
click
that
manual
intervention
to
deploy
it
and
then
other
options.
B
So this is how you start to define the pipeline flow, so that you can run parallel jobs, and you can even move on to the next stage when certain jobs haven't finished. The more that you can run concurrently, the faster that pipeline can finish as a whole.
B
So this is great for moving on to non-dependent jobs, so that you can keep the flow moving and you don't get hung up waiting for every job in a stage to finish before moving forward. That's what the needs keyword allows you to do: it defines those dependent jobs. And this is kind of what that looks like, just an example of using the needs keyword, where you can define the job relationships.
B
So in this example, you have an application deploying to both Android and iOS, and these are not dependent on each other running. So if you use the needs keyword here, you can speed up the total pipeline.
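A hedged sketch of what that relationship might look like in YAML; the job names are illustrative, not from the slide:

```yaml
build-android:
  stage: build
  script: [ "echo building Android" ]

build-ios:
  stage: build
  script: [ "echo building iOS" ]

test-android:
  stage: test
  needs: ["build-android"]   # starts as soon as build-android finishes,
  script: [ "echo testing Android" ]

test-ios:
  stage: test
  needs: ["build-ios"]       # without waiting for the other platform's build
  script: [ "echo testing iOS" ]
```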
B
This is a visualization. It's pretty complicated, not gonna lie, but you can start to see the flow logic by using our UI to look at the relationships between jobs. As you mouse over it, it highlights the dependency paths involved, and you can also click on multiple paths; it's interactive. So if you inherited a project that's very complicated, and you're trying to figure out what's triggering what, this is a visual way of doing so.
B
So, along the lines of simplicity, this feature allows you to call other YAML files from within the same project. That can solve issues you may be facing, like the staged structure of a pipeline, where all jobs in a stage must be completed before the first job in the next stage begins; that causes arbitrary waits and slows things down. It also solves for the configuration of a single global pipeline that could become very long. If you've got one really long global pipeline defined, it can be hard to manage, so you can start to modularize by using more than one CI YAML file within that same project and then have a kind of parent CI YAML that calls those other YAML files. It just becomes cleaner to manage and to read, so readability improves there as well. Also, imports with the include keyword can increase the complexity of the configuration and create the potential for namespace collisions, where jobs are unintentionally duplicated; this can help with that too. And even the pipeline user experience can become unwieldy with so many jobs and stages to work with.
B
So that's more on the readability side. You can see here in the example that we've got build, test, and deploy, as well as downstream pipelines, all within the same project. You may be deploying multiple microservices, and you need a different language and a different pipeline to handle each microservice.
B
So this is sort of the behind-the-scenes: how to write the file for this parent-child pipeline for, say, a monorepo that's deploying those microservices, like I was just showing in that example.
B
So it'll run the job if there are changes in those files. You can define, for this stage in particular, to only run it if there are changes in project-one, any file within that project. Then you can include files from elsewhere within the project, so that it pulls in the appropriate downstream YAML code to run that project correctly. And finally, the strategy: depend setting means that it holds this pipeline until the other pipelines have finished.
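A hedged sketch of one job in such a parent pipeline; the directory and file paths are illustrative:

```yaml
trigger-project-one:
  rules:
    - changes:
        - project-one/**/*   # only run when files under project-one change
  trigger:
    include: project-one/.gitlab-ci.yml   # child pipeline config in this same repo
    strategy: depend   # hold the parent until the child pipeline finishes
```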
B
Number four of five is the dynamic pipeline. This is generating pipeline configuration at build time; it uses the generated configuration at a later stage to run as a child pipeline. It's useful for having a single pipeline configuration with different settings to support a matrix of targets and architectures.
B
This technique can be very powerful for generating pipelines targeting content that changed, or to build a matrix of targets and architectures dynamically. You can see this test .gitlab-ci.yml file being generated dynamically: it places a generated YAML file in the job artifact store, then references it later to actually run the pipeline that it has generated.
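A minimal sketch of that pattern; the generator script and the generated file name are illustrative assumptions:

```yaml
generate-config:
  stage: build
  script:
    - ./generate-ci.sh > generated-pipeline.yml   # emit YAML at build time
  artifacts:
    paths:
      - generated-pipeline.yml   # store it in the job artifact store

run-child:
  stage: test
  trigger:
    include:
      - artifact: generated-pipeline.yml   # reference the generated config
        job: generate-config
    strategy: depend
```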
B
And the fifth architecture we're going to cover today is called the multi-project pipeline. This is when you have a project that sits sort of on top of other projects and can then trigger pipelines across them, because everything up to this point, we've been talking within the context of a single project, a single code repository.
B
So there are three main use cases for using this that we're going to talk about. You can specify specific branches, and you can pass variables to downstream pipelines. If the downstream pipeline fails, it will not fail the upstream pipeline, so that's a benefit, and it's useful when building and deploying large applications that are made up of different components that have their own project and build pipeline.
B
As
the
title
offers
you
can
set
up,
get
live
ci
cd
across
multiple
projects,
so
that
a
pipeline
in
one
project
can
trigger
a
pipeline
in
another
project.
You
can
visualize
the
entire
pipeline
in
one
place,
including
all
cross-project
interdependencies
when
you
set
this
up
within
the
gitlab
ui
itself.
B
So
we
we
have
a
job
called
a
bridge
underneath
the
test
stage.
Okay
and
what
it
does
is
triggers
a
pipeline
in
the
project
path
from
branch
main
and
simple
as
that.
But
but
there
are
some
other
great
use.
Cases
for
this
sort
of
the
simple
a
to
b
is
the
first
of
the
three
where
a
project
that
just
triggers
the
pipeline
in
another
project
to
run
such
as
a
code
project
triggering
a
rebuild
of
its
documentation.
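A hedged sketch of such a bridge job; the project path is an illustrative placeholder:

```yaml
trigger-docs-rebuild:
  stage: test
  trigger:
    project: my-group/my-docs-project   # downstream project whose pipeline runs
    branch: main
```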
B
The
other
one
is
a
like
an
orchestrator
project
that
manages
the
build
and
deploy
of
multiple
other
apps,
and
it's
sort
of
that
parents
project
that
that
houses,
the
control
logic.
So
that's
orchestrating,
you
know
large,
deploy
across
many
subsequent
repositories
and
then
the
third
one
is
called
versioning.
But
it's
really,
you
know
any
variable
that
you
need
to
pass
to
downstream
projects.
B
This
is
a
great
reason
to
use
the
multi-project
pipeline,
so
you
know
say
you
need
to
pass
the
version
number
to
the
downstream
project.
You
can
do
that
with
a
multi-project
pipeline.
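Passing a variable downstream might look like this; the variable value and project path are illustrative:

```yaml
deploy-component:
  variables:
    VERSION: "1.2.3"   # forwarded to the downstream pipeline
  trigger:
    project: my-group/component-project
```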
B
So you can read more examples, more YAML code that you can copy, paste, and start to play around with. But now I want to get into the anatomy and components, starting with variables. CI/CD variables are a type of environment variable. You can use them to control the behavior of jobs and pipelines, store values that you want to reuse, and avoid hard-coding values in your .gitlab-ci.yml file.
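For instance, a sketch of defining a value once and reusing it instead of hard-coding; the names are illustrative:

```yaml
variables:
  DEPLOY_ENV: "staging"   # defined once, reused across jobs

deploy-job:
  script:
    - echo "Deploying to $DEPLOY_ENV"
```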
B
In terms of using these and injecting them into your pipeline, you can do it a number of ways. Through the UI, at the project level, group level, and instance level, you can set variables, and we'll talk in a second about the hierarchy of which ones take precedence if a variable is defined more than once. You can see that within the project you can add variables.
B
And then, in terms of processing, there's an order of operations for which ones take precedence. Going from bottom to top, the top being what takes the highest precedence: that's a variable you've defined manually in the UI, or in an API request, for the given pipeline that's been triggered.
B
So for defining the flow logic, you need rules. If you're thinking in terms of Jenkins, this is like the Groovy scripts used to manage how and when the pipelines are triggered and run.
B
So
that's
sort
of
the
equivalent
here
so
I'll
talk
about
the
syntax
of
the
rules,
but
starting
with
the
ways
that
you
can
trigger
a
pipeline
to
run,
it
could
be
from
new
commits
to
a
merge
request.
It
could
be
on
a
branch.
It
could
be
on
a
new
tag.
B
So, if you need to run something regularly, on a 24-hour or 48-hour schedule, you can also do that. The variable for this setting is CI_PIPELINE_SOURCE, and we'll show some examples where you can change that pipeline source, or you can disable running on merge requests if you need to. Using that variable, CI_PIPELINE_SOURCE, is how you would control when pipelines are run.
B
Then
you
need
to
create
the
rules
block
with
the
rules
keyword
and
then
you
can
define
your
if
statements
to
reference
variables,
including
predefined
ones,
as
in
this
case
like
the
ci
pipeline
source,
is
web.
So,
if
you're
using
the
web
ide,
then
you
can
trigger
this
job
to
be
run,
otherwise
it
won't
be
and
then,
as
it
claims,
this
job
will
only
run
when
the
pipeline
is
kicked
off
from
the
web
form.
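A hedged sketch of that rule; the job name and script are illustrative:

```yaml
web-only-job:
  script:
    - echo "Triggered from the web form"
  rules:
    - if: $CI_PIPELINE_SOURCE == "web"   # run only for pipelines started from the UI
```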
B
Sorry, I think my headphones cut off. Taylor, can you hear me? (Yep, we still got you.) Okay, thank you so much, sometimes it just does that. So this is a great slide to screenshot; save it away for reference. These are the clauses that you can choose from. You saw the if statement; you can also say that only if changes are made to certain files do you run the subsequent script, which is very useful for improving the performance of your pipelines. Then there are the operators, which are pretty standard for writing any kind of script, and the results for when that operation is true, meaning what happens as a result.
B
So I threw in some tips and tricks for speeding up complex pipelines. Earlier we saw that the directed acyclic graph is a great way of doing that, but let's take a look at some other ways now that I've covered rules and variables. The first one is setting run rules.
B
This is one of my favorites right now, because it allows you to say: these files didn't change, so I don't want to run this job, or I only want to run on certain branches. It allows you to say what jobs run when, so that you can save time, especially as a .gitlab-ci.yml file can be over a thousand lines; you certainly don't want all of that running every time. The next is setting up the cache.
B
So
if
you
have
a
lot
of
say,
container
preparation
build
up
in
a
before
script,
it
might
be
a
sign
that
you
need
to
convert
that
section
into
like
a
docker
file
and
a
new
repo
and
have
your
own
build
container.
But
if
there's
other
types
of
scripts-
and
it's
just
checking
for
certain
settings,
this
can
can
dramatically
accelerate
builds
where
there
are
a
lot
of
build
dependencies
to
wear
on
before
running
your
code.
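A hedged sketch of caching build dependencies, here assuming a Node project; the paths and cache key are illustrative:

```yaml
build-job:
  cache:
    key:
      files:
        - package-lock.json   # reuse the cache until the lockfile changes
    paths:
      - node_modules/         # dependencies restored instead of re-downloaded
  script:
    - npm ci
    - npm run build
```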
B
So one last thing I would say is: try it both ways, and compare and contrast to see which way runs fastest for you. I think that's just a great rule of thumb, so that you can get used to using more complex concepts while staying very pragmatic. You aren't just trying to use something because it's new and cool, but rather because it's actually improving performance.
B
Okay, let's go through a couple of examples of rules in action. Say you're trying to control when merge request pipelines run. If you don't want to run your CI pipeline every time a merge request event happens, you can set that rule. If you do want it to run, then you can use CI_PIPELINE_SOURCE equal to merge_request_event, or CI_PIPELINE_SOURCE as a push to that branch. Those would then trigger it, unless you say otherwise that you don't want to use that.
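A hedged sketch of those two rules together; the job name is illustrative:

```yaml
my-job:
  script:
    - echo "Running..."
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"   # run for MR pipelines
    - if: $CI_PIPELINE_SOURCE == "push"                  # and for branch pushes
```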
B
All right, another example here, where you don't want to run a job if it was scheduled versus triggered automatically. You use CI_PIPELINE_SOURCE for this, saying that on a merge request event you don't want it to run, and on a schedule you also don't want it to run; otherwise, this job will run if the previous stage was successful. So this can, again, save time, or even control the types of tests and jobs that won't run if it's a scheduled job, or if, say, you wanted to kick it off from the UI.
B
You don't need it to run a whole slate, a whole block of code, just to save time; say you're only trying to scan a binary that you're pulling in as a dependency, and you just need to run a dependency scan. You don't need to do the full SAST, and you don't need to do secret detection, etc.
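A hedged sketch of that second example; the job name and script are illustrative:

```yaml
dependency-scan-only:
  script:
    - echo "Scanning dependencies..."
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: never        # skip on merge request events
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: never        # skip on scheduled runs
    - when: on_success   # otherwise, run if the previous stage succeeded
```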
So I wanted to share that as a second example. And then, as the third example, if you're trying to control when scans are run based on the contents of that commit, or of that merge request, this is how you can go about doing it. If I've got Dockerfiles, and scripts for dockerizing my application, that have changed, then okay, I'm going to want to run my container scanning and my infrastructure-as-code (IaC) scanning.
B
Otherwise, I may not need to run IaC scanning, because it's all Java code that was changed and it's not infrastructure code. So I can dictate that if these files aren't changed, then I can skip it and save time in my pipeline. That's why I think this changes clause is a really good one for saving time and speeding up pipelines. What this one is saying is that it needs a manual intervention to run if it senses a change to the Dockerfile or any files in the docker scripts directory.
B
The fourth example is a sort of delayed job. This job is going to run three hours after it's triggered and will be allowed to fail. This works if you've got, say, a long-running custom script, or long-running integration tests, something that you know is going to take a while, but where that feedback isn't needed immediately.
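A hedged sketch of a delayed job; the job name and script are illustrative:

```yaml
long-integration-tests:
  script:
    - echo "Running long integration tests..."
  rules:
    - when: delayed
      start_in: 3 hours   # begins three hours after being triggered
  allow_failure: true     # a failure here won't block the pipeline
```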
B
The
workflow
rules
control
when
the
entire
pipeline
will
run
and
they're
outside
of
the
job
definitions.
We've
mainly
been
talking
about
up
to
this
point,
so
here
we
see.
If
the
commit
message
contains
whip
dash
whip,
then
it
won't
run
the
pipeline
and
if
a
tag
was
applied
then
then
it
also
won't
run.
B
Otherwise
it
will
so
you
can
control.
You
know
if
I'm
just
saying
hey
this.
This
is
whip.
I
just
wanted
to
check
it
in
and
maybe
have
somebody
on
my
team
look
at
it,
but
I
don't
actually
need
to
run
the
the
pipeline
on
it.
Then
this
is
a
technique
that
you
can
use
to
kind
of
control.
The
workflow.
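A hedged sketch of those workflow rules:

```yaml
workflow:
  rules:
    - if: $CI_COMMIT_MESSAGE =~ /-wip/
      when: never    # skip the whole pipeline for work-in-progress commits
    - if: $CI_COMMIT_TAG
      when: never    # skip when a tag was applied
    - when: always   # otherwise, run the pipeline
```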
B
And
then
another
example
for
for
workflow
you
know
for
variables,
is
that
the
variable
in
this
example
docker
file
will
be
set
differently
based
on
the
rules,
so
you
can
see
that
here
and
it
will
always
run.
B
The
second
and
last
topic
I
wanted
to
get
to
is
artifacts
right,
so
you
know,
as
you
run
pipelines,
especially
ci,
you're
building
and
publishing
files,
those
generate
files,
binaries
packages
that
you'll
use
for
deploying
your
application.
It
also
generates
artifacts
for
reviewing
test
results.
So
we're
going
to
talk
about
managing
those
artifacts
for
a
minute.
B
Gitlab
allows
for
saving
artifacts
in
local
or
object
storage.
You
can
then
use
them
in
subsequent
jobs,
and
then
you
can
use
the
rules.
Logic
of
exclude
depends
and
when
to
control
what
is
added
and
when
to
determine
if
an
artifact
is
stored
or
not,
you
may
not
need
to
store.
It
may
be
something
that
you
just
need
in
the
job
to
run
your
tests
on
and
and
that's
good
enough.
B
So
that's
you
know
some
of
the
key
words
around
like
using
exclude
to
limit
what
is
added
using
depends
to
limit
what
gets
downloaded
on
subsequent
jobs
using
when
to
determine
if
artifacts
will
be
stored
or
not,
and
then
the
expire
in
to
determine
when
artifacts
would
be
destroyed
very
good
to
to
have
especially
for
stuff
like
test
results.
I
think
those
are
only
stored
for
48
to
72
hours
by
default.
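A hedged sketch putting those keywords together; the paths and durations are illustrative:

```yaml
build-job:
  script:
    - make build
  artifacts:
    paths:
      - build/
    exclude:
      - build/**/*.o    # don't upload intermediate object files
    when: on_success    # only store artifacts if the job succeeds
    expire_in: 3 days   # destroy the artifacts after three days
```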
B
Where
to
find
and
download
artifacts
in
the
gitlab
ui,
so
within
the
pi
points
page,
you
can
see
that
you
can
download
the
artifacts
whether
that
is
the
compiled
code.
Whether
that
is
the
test
results,
you
could
do
it
on
the
jobs
page
within
a
specific
job
and
then
the
artifact
browser.
If
you're
using
our
our
package
registry,
you
can
you
can
download
it
from
there
if
you're
using
something
external,
then
you
would
just
be
sending
it
off
to
something
like
artifactory,
and
so
it
wouldn't
be
in
our
artifact
browser.
B
As
far
as
administration
goes
in
a
self-managed
gitlab
instance,
job
artifacts
can
be
sorted,
local
or
object.
Artifact
expiration
times
can
be
configured
at
the
instance
level
and
then
artifact
demos
fall
under
our
get
lab
access
control
and
it's
probably
pretty
small
but
I'll.
Just
tell
you
right
now
download
and
browse
job
artifacts
guests
all
the
way
through
reporter
developer,
maintainer
owner
they
can
do
that
they
can
browse
and
and
download
job
artifacts.
B
So
good
to
know
so
that
you
understand
who
has
access?
That's
assuming
they
already
have
access
to
the
project
and
group.
So
it's
not
just
anyone
willy-nilly,
but
that's
the
the
permissions
that
can
do
this.
B
Here
is
some
of
the
container
and
language
specific
package
registries
they
get
live,
supports
container,
of
course,
dependency
proxy
and
then
some
language,
specific
package
registries,
all
the
common
ones,
npm
new
get
go
proxy,
so
you
can
see
that
and
we've
got
a
docs
page.
B
So
by
doc,
by
default,
all
artifacts
from
all
previous
stages
are
passed
on
to
the
next
stage,
but
you
can
use
the
dependencies
parameter
to
define
a
limited
list
of
jobs
or
no
jobs
to
fetch
artifacts
from
so
to
use.
This
feature
define
dependencies
in
context
of
the
job
and
pass
a
list
of
all
previous
jobs
from
which
the
artifact
should
be
downloaded.
B
You can see here in this example that we've got build osx and build linux jobs, so in order to run the test scripts, we've got to make sure that we're using the right artifact. That's where the dependency on build osx comes into play: for the test stage, that's what you would write as a dependency. And likewise, for your Linux testing to take place, you're dependent on the build linux artifact there. So this is just an example of how you can choose when and where to pass artifacts through, and you can have multiple artifacts generated and passed to specific jobs in the next stage.
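A hedged sketch of that pattern; the job names follow the spoken example, and the scripts and paths are illustrative:

```yaml
build osx:
  stage: build
  script: [ "make build:osx" ]
  artifacts:
    paths: [ "binaries/" ]

build linux:
  stage: build
  script: [ "make build:linux" ]
  artifacts:
    paths: [ "binaries/" ]

test osx:
  stage: test
  dependencies: [ "build osx" ]     # only fetch the macOS build's artifacts
  script: [ "make test:osx" ]

test linux:
  stage: test
  dependencies: [ "build linux" ]   # only fetch the Linux build's artifacts
  script: [ "make test:linux" ]
```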
B
The
last
but
super
crucial
topic
is
on,
includes
and
extends
super
powerful.
If
you
take
away
anything
from
this,
I
would
say
this
is
a
big
one,
just
in
terms
of
scaling,
gitlab
ci
cd,
this
is
a
must-have.
So
that's
why
we're
hitting
it
here
towards
the
end
and
it's
building
upon
the
concepts
that
we've
covered.
An
include
statement
is
how
you
bring
in
external
yaml
files
to
your
gitlab
ci
configuration
it's
helpful
because
it
allows
you
to
extract
common
components
and
improve
readability.
B
If
you
use
an
include
statement,
that's
all
you
have
to
do
it's
going
to
pick
up
the
appropriate
language
your
code
was
written
in
and
it
scans
for
all
of
the
sas
rules.
In
that
example,
secret
detection
same
thing:
it's
not
language
dependent,
but
you
just
add
the
include
statement
for
secrets,
detection
and
boom.
It's
going
to
take
care
of
the
rest.
It's
like
two
lines
of
code.
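Those "two lines of code" might look like this; a hedged sketch, since the exact template paths can vary across GitLab versions:

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml               # static analysis, language auto-detected
  - template: Security/Secret-Detection.gitlab-ci.yml   # secret detection
```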
B
So
that's
why
it's
so
powerful
and
it
saves
a
lot
of
time,
there's
different
methods
for
doing
this.
I
talked
about
the
the
templates
right,
which
are
provided
by
gitlab
at
the
very
bottom
here,
but
you
can
also,
you
know,
include
a
file
from
your
local
project
repository,
that's
sort
of
the
parent
child
example.
I
gave
in
the
in
the
architects
five
different
architectures
of
writing
a
pipeline.
So
that's
the
local
include,
if
you're,
just
trying
to
find
it
within
your
your
own
project
file.
B
If
you
want
to
include
a
file
from
a
different
project
repository
and
then
remote,
if
you
want
to
include
a
file
from
a
remote
url.
So
if
there's
something
just
publicly
out
there
on
like
publicgatelab.com
that
you
want
to
pipe
in,
it
needs
to
be
public
visibility,
but
you
can
do
that
and
here's
the
examples
on
the
syntax.
For
writing.
Those
includes.
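A hedged sketch of the four include methods side by side; the paths and URL are illustrative placeholders:

```yaml
include:
  - local: ci/build.yml                               # a file in this repository
  - project: my-group/shared-ci                       # a file in another project
    file: templates/deploy.yml
  - remote: https://example.com/public-pipeline.yml   # a publicly reachable URL
  - template: Security/SAST.gitlab-ci.yml             # a template shipped with GitLab
```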
B
Extense
is
another
way
to
improve
efficiency
and
you
know
eliminate
needs
for
rewriting
or
writing
lengthy
code
in
your
yaml
file,
so
extends
is
similar
to
gamble
anchors,
but
it's
a
little
bit
more
flexible
and
readable.
So
I
did
want
to
touch
on
this.
It'll.
Allow
you
to
enhance
and
reuse
configuration
sections.
B
So
if
you
know
about
yaml
anchors,
you
know
that
it's
used
to
duplicate
or
inherit
content
across
your
gamble
file,
but
they're
only
valid
in
the
file
they
were
defined
in
so
that's
where
extend
comes
in,
you
can
inherit
up
to
11
levels.
B
We
say
you
know
no
more
than
three
just
for
performance
reasons
and
then
what
it
does
is
it
merges
the
configurations
and
from
the
you
know,
respective
job
into
your
current
one
and
that's
going
to
save
some
time.
So
that's
kind
of
the
example
you
see
here
with
the
dot
tests
in
our
spec
becoming
one
single
job.
B
This
also
works
across
configuration
files
when
you
use
and
include
with
an
extend
so
in
this
example,
you
can
include
an
external
include
file
with
a
nice
little
script,
so
you
can
include
that
in
your
gitlab
ci
ammo.
So
now,
instead
of
having
to
copy
paste
and
complicate
your
fciemo,
you
can
create
a
job
that
extends
that
dot
template
job
with
the
scripts,
echo
hello
and
just
say
what
image
to
run
it
on
so
simple
example
but
powerful
feature.
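A hedged sketch of that include-plus-extends pattern; the file name, job names, and image are illustrative:

```yaml
# included.yml (a file in the same project)
.template:
  script:
    - echo "Hello!"
```

```yaml
# .gitlab-ci.yml
include:
  - local: included.yml

use-template:
  image: alpine
  extends: .template   # inherits the script from the included file
```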
B
I
wanted
to
just
make
sure
to
call
out
there
and
I
think,
we're
kind
of
spot
on
we've
got
about
10
minutes
here.
We
have
indeed
covered
a
lot
of
grounds,
and
I
love
that
section
about
components
and
reusability.
It's
a
great
place
to
leave
on
so
we're
going
to
pause
now
and
I'm
going
to
kick
it
over
to
taylor,
because
we
have
a
poll
and
we're
also
going
to
open
the
floor
up
for
some
questions.
Taylor.
A
Great, thanks, Conley. Yeah, I just opened up that poll, just a couple of quick questions; we'd love to get your feedback on today's session. And with that, there have been a couple of questions come through, Conley, that I'll pose to you to answer.
B
Sounds good.
A
The first one here: what's the difference between Jenkins and GitLab CI?
B
Yeah
this
is,
you,
know,
kind
of
the
incumbent,
one
of
the
original
best
ways
to
run
automation
for
for
your
projects
was
with
jenkins,
and
so
fundamentally,
we
approach
it
different
than
jenkins,
which
is
a
plug-in
based
model
where
you're
trying
to
maintain
install
upgrade
all
the
appropriate
plugins
to
kind
of
extend
the
features
of
jenkins,
and
so
that's
got
inherent
risks
and
overhead
that
you
know
you
have
to
keep
upgrading
you
have
to
maintain,
which
could
be
very
intense,
especially
at
scale.
B
B
With GitLab, the pipeline configuration lives in your repositories, and you run all the same linting and testing on it that you would on your actual code, so it just facilitates more reusability and collaboration across the enterprise. They're very fundamentally different approaches that came about in different eras as well, so those are probably the biggest differences I would highlight. There are also some similarities.
B
Just
in
terms
of
you
do
have
to
have
get
loud
runners,
which
are
very
similar
to
get
live
agents,
which
is
the
infra
to
run
those
pipelines
on.
So
there
is
some,
you
know
some
knowledge
transfer
that
will
serve
you
well.
A
Great, thank you. The next one that came through here: how can I run this across my full list of applications in an enterprise?
B
So we didn't go into it too much in this talk, but there is a whole section of GitLab just focused on compliance and making sure that a set number of jobs, scans, you name it, runs across your full suite of applications. So I'd encourage you to look into the compliance framework and the compliance pipelines.
B
Those
are
two
ways
that
you
can
apply
a
label
to
your
project
systematically
and
then
that
will
pipe
in
a
gitlab,
ci
ammo
that
runs
ahead
of
the
local
project,
so
you
can
put
in
there
anything
you
want,
and
the
developers
can't
change
that
unless
you
want
them
to,
but
you
probably
don't
and
so
there's
ways
to
make
sure
that
they
can't
change
that.
But
then
they
have
the
freedom
to
run
anything
they
need
to
around
building
testing
to
pull
in
their
application.
A
Awesome, great. I think there's just one more that I'm seeing here: what's a good training path to become an expert?
B
If
this
is
your,
you
know
day-to-day
job
and
you're.
Like
a
devops
engineer,
you
know
say
trying
to
write
those
templates
keep
them
up
to
date,
sort
of
that
golden
ci
image.
If
you
will
like
there
are
there's
definitely
training
resources
that
gitlab
provides.
We've
got
our
own
certification
path
that
you
can
take
for
becoming
gitlas,
ci,
intermediate
and
professional
as
well.
As
you
know,
just
getting
the
the
gist
of
best
practices
in
devops.
B
I
highly
recommend
the
aws
devops
engineer
professional
course
different
tooling.
It
may
even
make
you
appreciate
gitlab
more.
It
certainly
did
for
me,
but
it's
very
valuable
knowledge
to
have
here
trying
to
take
this
as
a
profession
and
go
deeper
and
deeper.
Then
you
know
I'd
recommend
those
those
different
options,
starting
with
gitlab's
own
training.
So
you
get
all
the
nuances,
but
you
know
broader
speaking,
there's
there's
great
content
and
other
tools
and
platforms
too.
A
Awesome, thank you. Well, I think we're getting just about to time here, so I appreciate everyone's time today. Thank you, Conley, for the great presentation, and thanks, everyone, for the great questions that came through; I think this was a really engaging session. So with that, we will wrap up and, like we said, we'll be sending out the slides and the recording in the next day or so. Have a good rest of your day, everybody.