Description
Watch the playback of this hands-on GitLab CI workshop to learn how it can fit into your organization!
We will kick things off by going over the differences between CI/CD in Jenkins and GitLab, syntax requirements, advantages to using GitLab, and how you can achieve the same outcomes in GitLab. Getting started with CI/CD in GitLab will take a lot less time than tends to be required for Jenkins, and your users can stay in a single platform.
We will then dive into how to build simple GitLab pipelines and work up to more advanced pipeline structures and workflows, including security scanning and compliance enforcement.
All right, welcome in, everybody. Thank you for joining our workshop today. We'll give folks just another minute or so to let everybody get joined in and get signed up with everything that you need. To get signed up, I'm actually going to walk you through the lab setup process. I've got a QR code and some lab setup instructions here through a short link.
If you're just joining us today, welcome in. We're going to be doing a hands-on workshop on GitLab CI for Jenkins users, so I appreciate all of you joining today, and, like I said, we'll give folks just another minute or so to get joined in. If this is your first hands-on workshop with GitLab, you want to make sure that you have a working gitlab.com account, which is on our SaaS platform.
This is different from an account that you might use for GitLab on a self-managed instance, if your company or organization uses GitLab outside of our SaaS platform. So if you don't have an existing gitlab.com account yet, I've got a link there to sign up for a new gitlab.com account on our SaaS platform, along with the lab setup instructions, which my colleague Steve has kindly shared in the chat. I'll also make sure the instructions get continually shared out to our team here and to everyone that's joined our workshop today.
All right, we'll give folks maybe around another 10 or so seconds and then we'll get started, so feel free to grab that link there for our lab setup instructions, and also make sure that you have an existing gitlab.com account on our SaaS platform. If you don't, you can always just sign up using that link, and I'm also going to put our lab setup instructions here in the chat.
Yes, the session is being recorded, and you'll be getting a copy of today's recording tomorrow, once we've completed today's workshop. Thanks, Steve. Yeah, Steve, I think you'll probably have to take the lead on grabbing my snippet and periodically sharing it out as you get new attendees joining in. For some reason my Zoom client is not allowing me to input anything into the chat, but as long as you...
All good to go? All right. So for today's workshop, we're first going to highlight some of the key advantages of using GitLab CI/CD when compared to Jenkins, and then we're going to jump into the hands-on workshop, actually logging into the gitlab.com SaaS platform and working with a sample pipeline directly.
This will give you some firsthand experience working with the GitLab CI YAML (.gitlab-ci.yml) file and setting up a GitLab CI pipeline based on concepts that we'll be sharing with you today. I'm also joined by my colleague Steve Graham. He's a customer success engineer based in the Los Angeles area, and he's going to be keeping an eye on the chat and the Q&A function for any questions. I'd definitely recommend you use the Q&A function to submit your questions, but you're more than welcome to utilize the chat as well if that's a little bit easier for you. So, a little introduction on myself: my name is Chris Guarte. I'm a senior customer success engineer here at GitLab, also based out of Los Angeles.
After today's session, please feel free to connect with me on LinkedIn; I've got a QR code here that you can scan if you want to connect. I usually take the time to share a post highlighting some of the key topics I'm discussing with my customers on a weekly basis, as well as new features and capabilities that I think are important based on my regular interactions with customers in the field, just like you. I'm also sharing my GitLab profile,
so you can see some of my latest contributions here at GitLab. And, like I said, you can scan that QR code to go directly to my LinkedIn profile to add me and start following my posts. So, a note on today's resources and follow-up (don't mind my great meme there): tomorrow you'll be getting an email with a link to the slide presentation and a recording of this workshop.
That way you can review and share the workshop materials with your peers. If your account qualifies for a customer success engagement (a meeting with someone like myself or Steve, a customer success engineer, or a regular cadence of calls with a customer success manager), you should expect a team member to contact you within the next week to ask about any questions that you might have, set up potential enablement sessions for your team, and suggest best-practice guidance that might help you move from Jenkins to GitLab CI/CD more successfully. So to begin, I want to share some of the key technical advantages of GitLab CI/CD, as well as a mapping of terms and capabilities to help you start converting your existing Jenkins pipelines into GitLab, or start creating new projects with GitLab CI/CD in mind.
The first GitLab platform advantage is reduced tool switching. This is fully realized by your developers and DevOps engineers, who may be working with your Jenkins instance and Jenkins pipelines in a separate tool. When you move all of that to GitLab, you have one single DevOps platform to run all of your pipeline automation for your project, so pipelines run in the same project where developers are making their commits and merge requests.
As you can see in this animation, I started out by going to the GitLab project and seeing a list of all the recent commits and any pipeline associated with each commit. You can click into a pipeline and see exactly what's happening behind the scenes with the pipeline automation that runs directly from GitLab. And from a merge request, you can see any of the proposed changes that need to be merged into the main or default branch, click on the latest pipeline run from within that merge request, and see what's happening with that pipeline automation as well. So all pipelines are viewable from a project's Pipelines page, and because developers don't have to switch tools to review pipelines, they have a lot more time to focus on their projects and coding.
Another key feature that you can enable when running pipelines directly within GitLab is environments in your projects. Environments allow for monitoring successful releases from your projects and tracking the latest commit, or code, that was associated with the release to that environment.
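As a sketch, a deploy job is attached to an environment with the environment keyword (the job name, URL, and deploy script here are hypothetical placeholders):

```yaml
deploy-staging:
  stage: deploy
  script:
    - ./deploy.sh staging        # hypothetical deploy script
  environment:
    name: staging
    url: https://staging.example.com
```

Once a job like this runs, the project's Environments page tracks each deployment along with the commit and pipeline that produced it.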
Another platform advantage that I wanted to share with you is unique to our Ultimate tier of GitLab. Just as a note for today's hands-on session, you'll get an Ultimate license applied to your sandbox group.
So when you create projects in there, including the project that you'll be working with today, you'll have all of the Ultimate-level functionality to play around with. This platform advantage within GitLab, when you move pipelines over from Jenkins, is governance. With this governance, you'll be able to more effectively manage and enforce policy across all of your organization's projects.
With an Ultimate tier subscription on GitLab, pipelines can trigger additional merge request approvals: essentially, dynamically created merge request approvals based on the findings of security test analyzers.
For example, if you have secret detection set up and you want to automatically block the merge and force somebody on the security team to review a potentially leaked secret, you can do that with a feature called a scan result policy. That's the ability to create governance because you've moved your pipelines over to GitLab. Another benefit is the creation and enforcement of compliance frameworks and compliance pipelines.
Compliance pipelines offer you the ability to enforce key pipeline activities, such as the order of a project's stages. You can't move the order of, say, the build, test, and deploy stages, or any other stages that you have in between those; the pipeline has to adhere to that specific order. You can also make sure that specific jobs or workloads run in every single pipeline.
You can define those upstream in a separate project, in what we call a compliance pipeline, and then assign them to specific projects using a compliance framework. This is really beneficial for organizations that want to more or less centralize pipeline configuration and create a separation of duties between project developers and those who manage compliance. You can basically lock down that separate upstream project where the pipelines originate and where you're enforcing a lot of those things.
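A minimal sketch of what such an upstream compliance pipeline file could look like, assuming GitLab's documented pattern of a required job plus an include of the target project's own configuration (the file name and job contents are hypothetical):

```yaml
# .compliance-gitlab-ci.yml, kept in a locked-down upstream project
audit-log:                      # a job every governed pipeline must run
  stage: .pre
  script:
    - echo "recording compliance audit entry"

include:
  # pull in the regular pipeline of whichever project the framework is applied to
  - project: '$CI_PROJECT_PATH'
    file: '.gitlab-ci.yml'
    ref: '$CI_COMMIT_SHA'
```

Projects assigned this compliance framework then always run the enforced jobs alongside their own.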
So if you want to ensure that your developers aren't skipping a step, such as removing a critical testing job or stage that is normally long-running just to push out code more quickly, then GitLab Ultimate provides you the governance capability to enforce those specific jobs on every single pipeline.
If you're interested in learning more about these features and getting familiar with them, you do have an Ultimate license applied to your sandbox group as you get your lab environment set up, but we also periodically offer a free hands-on security and compliance workshop (we call it the DevSecOps workshop) that shows you how to enable and use these in another hands-on session.
So we have a lot of ways for you to get familiar with this. It's going to be out of scope for today, but I just wanted to give you a heads-up that you can take advantage of those other hands-on sessions. All right.
Another platform advantage that I wanted to share with you is the built-in pipeline triggers that you have within GitLab. Compared to Jenkins triggers, GitLab offers a number of built-in triggers for kicking off a pipeline, essentially starting your pipeline automation via GitLab's built-in CI/CD functionality, without relying on and maintaining a plugin architecture to support the different types of pipeline triggers that you might have been used to maintaining in a Jenkins instance. These are just some examples of GitLab pipeline triggers.
Essentially, these determine what will cause a pipeline to get kicked off. If you want it to happen frequently, you can have it happen on any push to any branch. If you want to be a little more judicious, say only for changes that are being proposed for merge into a default branch, you can trigger on merge requests.
That is, running a pipeline only for merge requests rather than for every commit. You can also trigger based on external repository activity: if you're not already on GitLab for source control management, there is a way for you to pull in GitLab CI/CD and execute the pipeline automation based on activity from an external source control management solution.
That's not commonly a use case that I run into speaking with my customers, but it may be something you're interested in, so I wanted to mention it as well. We also have scheduled pipelines, which is essentially the ability to run GitLab CI/CD pipelines at regular intervals.
This is similar to setting up a cron trigger in Jenkins, for example, and it's useful for, say, folks who want to run a nightly build, or maybe our Ultimate-tier users who want to run a DAST scan to analyze a non-production environment on a regular cadence, maybe on a weekly basis.
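A sketch of restricting a job to scheduled runs, using the built-in CI_PIPELINE_SOURCE variable (the job name and script are placeholders; the schedule itself is created under the project's CI/CD schedules settings):

```yaml
nightly-scan:
  stage: test
  script:
    - echo "run the scan against the non-production environment"  # placeholder
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'   # only run for scheduled pipelines
```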
We also have manual triggering from the UI.
That's essentially clicking a button to run a pipeline from the Pipelines page within your project, passing along specific arguments so the pipeline can be customized to your needs for that manual run.
We also have the API, which is a way to trigger a pipeline for a specific branch or tag, maybe based on other activity outside of that project. And we also have webhooks for triggering a pipeline, say, from another project or even another source outside of GitLab.
You also have ChatOps, which is a way for you to run a CI/CD job from a chat client like Slack. With ChatOps, you'll be able to run a pipeline for a specific project, and even a specific job as well.
You can also trigger downstream pipelines: a child pipeline in the same project, or a pipeline triggered from an external project's pipeline. And then finally, you can utilize rules with regular expressions against the different CI/CD variables that are built into GitLab, such as the CI_COMMIT_MESSAGE variable, which gives you the ability to look at the commit message and either trigger or not trigger a pipeline based on its contents.
So in the commit message you might end with "-draft" and set up a rule in your pipeline configuration so that any commit message ending with "-draft" will not run that pipeline, or that job within the pipeline. You can definitely utilize that to add more flexibility and control over triggering (or not triggering) your GitLab pipelines.
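That "-draft" convention can be sketched with a rules clause on the built-in CI_COMMIT_MESSAGE variable (the job name and script are placeholders):

```yaml
build-job:
  script:
    - echo "building"
  rules:
    - if: '$CI_COMMIT_MESSAGE =~ /-draft$/'
      when: never            # skip this job when the commit message ends in -draft
    - when: on_success       # otherwise run as normal
```

Rules are evaluated top to bottom, so the "never" case has to come before the catch-all.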
Another platform advantage I wanted to share is pipeline template repositories. Organizations often have projects that utilize a common language, a common framework, a common shared build process, or even a common set of test requirements. With GitLab, we provide capabilities to create pipeline template repositories: you can include pipeline functionality from an upstream project within your GitLab instance and essentially start inheriting some of that common automation framework for your build process, testing, and so forth.
On the first point here: using GitLab CI/CD, DevOps teams can maintain common job files, or entire pipelines, in a centralized project for use in other projects. By leveraging a template repository, a developer can now focus on the code that's specific to their project and leave pipeline development to the team's pipeline experts.
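A minimal sketch of pulling shared jobs from a centralized template project via the include keyword (the group/project path and file name here are hypothetical):

```yaml
include:
  - project: 'my-group/pipeline-templates'   # hypothetical central template project
    ref: main
    file: '/templates/build-and-test.yml'

# Jobs defined in build-and-test.yml now behave as if they were written in this file,
# and can be extended or overridden locally.
```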
You can create internal or private-only templates, and we also have vendor templates that are included through the GitLab product itself. We have things like Auto DevOps templates that help you automate, say, your build or test process; those ship directly within GitLab, and you can include them to scaffold out your pipelines. It can work the same way for your own build and test automation as well.
For projects that don't want to maintain pipelines at all, you can actually set a project setting to allow the pipeline to be defined completely externally, outside of the project; you don't need to define the pipeline automation in that project if you've set it up that way. So with GitLab you have a lot of flexibility over defining where pipelines can come from, as well as how to include or inherit from other projects' pipelines.
It's also important to note that included pipelines will utilize the downstream project's variable context. Being able to use the context of the local project while leveraging the functions and scripting from an upstream project is very useful.
For example, if you have a project that has specific credentials to use, you can easily override some of the default values for variables within a job in the pipeline. We'll get into that in our hands-on session. This is exactly how our GitLab security scan analyzers are configured and modified based on the unique needs of each project.
You may not want to use an analyzer exactly how we've defined it in our GitLab templates for the GitLab security analyzers, such as the SAST scanner or secret detection, and you can override that directly within your GitLab pipeline file.
Another platform advantage that I want to share is that jobs run in an isolated environment. Pipeline jobs run independently of each other and get a fresh environment in each job.
Passing artifacts between jobs is controlled using the artifacts keyword in jobs that need to leave artifacts, and either the dependencies or needs keyword in jobs that rely on those artifacts. When you need to pass variables between jobs, GitLab uses dotenv (.env) report files to accommodate that scenario. GitLab runs a cleanup after each job to ensure a clean working environment for the next job, so you don't actually have to add cleanup functionality to your jobs yourself.
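A sketch of both mechanisms together: a build job hands a file and a dotenv variable to a later job (job names, file names, and the version value are all made up for illustration):

```yaml
build:
  stage: build
  script:
    - echo "built" > output.txt
    - echo "BUILD_VERSION=1.2.3" > build.env
  artifacts:
    paths:
      - output.txt           # regular artifact passed downstream
    reports:
      dotenv: build.env      # variables loaded into dependent jobs

test:
  stage: test
  needs: [build]             # fetches build's artifacts and dotenv variables
  script:
    - cat output.txt
    - echo "testing version $BUILD_VERSION"
```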
GitLab's pipeline configuration always begins with jobs, and jobs are actually the most fundamental element of a .gitlab-ci.yml file. We'll be going through that in our hands-on session, showing you the syntax and how it's all structured. Jobs are picked up by runners and executed in the environment of the runner itself, and what's important is that each job runs independently from the others.
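As a sketch, the smallest useful .gitlab-ci.yml is just one or two jobs, each with a script (names and commands here are placeholders):

```yaml
say-hello:
  script:
    - echo "Hello from GitLab CI"

run-tests:
  script:
    - echo "pretend to run the test suite"
```

Each job is picked up by a runner and executed in its own fresh environment, as described above.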
The next GitLab platform advantage to share is the ability to configure manual jobs with optional approvals. Manual jobs can be set up using the "when: manual" keyword in the job configuration, or using job rules that have a "when: manual" clause. What this means is that when you've set up a job this way, you essentially have a job that needs to be manually triggered, or enabled, through the GitLab UI: it'll have a play button on it within the GitLab pipeline interface.
By default, any developer can run these manual jobs, but that may not be desirable in some cases. So in pipelines for protected branches, only users who are allowed to push or merge on the protected branch can run the manual job. And if the job is targeting a protected environment for, say, a deployment, you can also add deployment approvals, where you create approval rules and select the specific users that are allowed to run that manual job for the deployment to a protected environment.
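Both flavors of a manual job can be sketched like this (job names and the deploy script are hypothetical):

```yaml
deploy-production:
  stage: deploy
  script:
    - ./deploy.sh production      # hypothetical deploy script
  when: manual                    # shows a play button in the pipeline UI

deploy-on-main:
  stage: deploy
  script:
    - ./deploy.sh production
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual                # manual, but only offered on the main branch
```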
For example, you might have a deployment approval configuration that says your production environment requires five total approvals and only allows deployments from a specific group that you've created in your GitLab instance, plus maybe one user, say the administrator. For the next three slides, I'm going to compare, very briefly, the GitLab CI/CD functionality that will allow you to convert a declarative Jenkinsfile to GitLab CI.
Before the first concept, a reminder: we're going to be sharing a copy of today's recording, as well as the specific deck that I'm using, with all the links to our resources and recommendations. So don't worry about trying to grab all the links; if we're sharing links here in the chat, or if these appear to be clickable, you'll be able to click through them in the slide deck once we deliver that to you tomorrow.
The first thing to highlight is that in Jenkins, the agent section is used to define how a pipeline executes; in GitLab, we utilize runners to provide this capability. You can configure your own runners on any host, in any location that you define: it could be on a Kubernetes cluster, on a single server instance, or even on your local workstation. You can also take advantage of our shared runner fleet, our SaaS runners, which are only available to our gitlab.com users.
So if you don't want to set up your own self-managed runner, as a SaaS user you do have that capability. We provide different types of runners on different operating systems, primarily Linux, but you also have Windows and macOS-based runners as well.
I believe those are in beta, but they're still available for you to use in the SaaS environment. On a self-managed environment, you can certainly install the runner on lots of different types of operating systems, like I just mentioned, and we also support using tags on the runners themselves when you set up a runner in a self-managed fashion.
Coming from a Jenkins standpoint, with the runners it depends on how you're executing the workloads: most of our customers utilize the Docker-based executor for running those workloads on the GitLab Runner within a Docker container. You can define which Docker images should be utilized for execution within your GitLab pipeline definition; that utilizes the image keyword, which we'll be talking about a little bit later.
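A sketch of the image keyword, set as a default and overridden per job (the image tags are examples, not recommendations):

```yaml
default:
  image: alpine:3.19        # used by jobs that don't specify their own image

build-node-app:
  image: node:20            # this job runs in a Node.js container instead
  script:
    - node --version
```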
For today's workshop, we'll be utilizing runners that have been preconfigured for you all, so you don't have to worry about setting up a runner today. They've been preconfigured for our sandbox group, and they'll be utilizing the Docker executor, which means that all of our jobs will be running inside of a Docker container.
The next term to highlight is post. In Jenkins, the post section defines the actions that should be performed at the end of the pipeline, and GitLab supports this through the use of stages. Stages in GitLab define the grouping of jobs within a pipeline; by default that's build, test, and deploy, in that order, and any jobs assigned to the special before (.pre) or after (.post) pipeline stages run as expected around everything else. I'll be sharing how this works in our hands-on workshop today.
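A sketch of the stage ordering, with a .post job standing in for a Jenkins post section (the stages list shown is also the default; job names and commands are placeholders):

```yaml
stages:
  - build
  - test
  - deploy

notify:
  stage: .post              # built-in stage that always runs last
  script:
    - echo "pipeline finished, send a notification here"
  when: always              # run even if earlier jobs failed, like Jenkins post
```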
The next term you might be familiar with from Jenkins is steps, and the steps section is equivalent to the script section of an individual job in GitLab. The script section is essentially a YAML array, with each line representing an individual command to run within the executor's environment. In your case, that will be the Docker container's environment.
Another term to highlight from Jenkins is environment. In GitLab, we utilize the variables keyword to define the different variables that will be utilized within your pipelines at runtime. These can also be set up through the GitLab UI, directly in the CI/CD settings of your project, if you don't want to hard-code them within your .gitlab-ci.yml file in the pipeline definition.
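A sketch of the variables keyword at both the global and job level (names and values are made up):

```yaml
variables:
  APP_NAME: "tanuki-racing"   # available to every job in the pipeline

unit-tests:
  variables:
    TEST_SUITE: "unit"        # only visible inside this job
  script:
    - echo "running $TEST_SUITE tests for $APP_NAME"
```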
Last on this table is the options term from Jenkins.
In GitLab, we utilize job keywords to configure the behavior of jobs, as I mentioned earlier: to run or set up the automation workloads that would be performed within a job based on different criteria or rules, for example. Our keyword reference, which we can link in the chat here, is essentially a long list of items that you can reference and look up.
On the next slide, I'm going to be talking about a couple of terms that you might be familiar with from Jenkins. First, parameters: in Jenkins, you might have a pipeline that's run manually with a list of options to choose from, called parameters.
In GitLab, you can configure a list of value options and set a default value for a specific CI/CD variable when running a pipeline manually.
That's all configured in the .gitlab-ci.yml file, and it might be useful if you want to run a pipeline manually via the UI, maybe to run a deployment, and have it change behavior based on the value you select for, say, a deploy-environment variable. You can set a default of the staging environment, but also have someone select from dev, staging, or production in that dropdown, all configured through the GitLab CI YAML file.
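That dropdown can be sketched like this, assuming a reasonably recent GitLab version that supports variable options (the variable name and values mirror the deploy-environment example just described):

```yaml
variables:
  DEPLOY_ENVIRONMENT:
    value: "staging"                     # preselected default
    options:
      - "dev"
      - "staging"
      - "production"
    description: "Target environment for a manual deployment."
```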
The options are then presented in the manual pipeline run interface. Next, triggers and cron from Jenkins: as I mentioned, GitLab supports scheduling pipelines within each project.
This helps support use cases such as a nightly build or DAST scanning. And the last item in this walkthrough of Jenkins terminology is tools.
GitLab does not support a separate tools directive. Our best-practice recommendation is actually to utilize pre-built container images. These images can be cached, and can be built to already contain the tools you need for your pipelines, so that you don't need to do a lot of setup within the GitLab pipelines themselves.
You can get all the dependencies and everything configured in a pre-built container image, and pipelines can be set up to automatically build those images as needed and deploy them to a container registry, which can also be hosted within GitLab. That applies even if you don't use container images with Docker or Kubernetes, but use, say, a shell executor on your own system.
In Jenkins, you might have used input to control the behavior of a pipeline based on the parameters presented to, and selected by, the person running the pipeline. In GitLab, it's not needed, because a manual job allows any user to enter CI/CD variables manually to control the pipeline's behavior, and, as I mentioned on the previous slide, you can also include a default list of options.
That's similar to the parameters feature from Jenkins. And GitLab does support a when keyword, which is utilized to indicate when a job should be run in case of, or despite, the failure of a previous job. But we actually have an additional rules system that's much more powerful, and we'll be providing some examples as we work through the hands-on workshop today.
So now that we've gone through some of the advantages of the GitLab platform approach to CI/CD and provided a comparison of terminology and the approaches to take within GitLab CI, I want to work with you all to build out a progressively more complex GitLab pipeline directly in our lab environment, with our hands-on CI workshop.
Here's our workshop agenda for today. We're going to be working on getting our lab environment set up; I know my colleagues here have shared the lab setup instructions, and hopefully you've been able to work through that asynchronously as I've been talking through some of the platform advantages.
But if not, don't worry, I'm going to walk you through it pretty quickly. Once we've got the lab setup going, we're going to set up a simple pipeline within the sandbox environment, define our execution order, set up DAGs, rules, and failures, set up SAST scanning and build artifacts, and then talk about how you can transfer the project to your own workspace (your personal namespace or your organization's group on gitlab.com). Then finally, a conclusion to wrap everything up, and some next steps for you all.
That's if you want to continue engaging with us. So first, I want to present today's workshop scenario: we're going to pretend that we're part of a brand-new startup that's creating a public leaderboard for the hit new racing game, Tanuki Racing. Your company has recently swapped over to using GitLab for CI/CD and has tasked you with learning about the different pipeline capabilities. Let's jump into our lab setup. Just a reminder that in today's provisioned lab environment, we also have full access to all of our new AI capabilities.
That's called GitLab Duo. This is a suite of AI-assisted functionality within the DevOps platform that's aimed at helping you increase velocity and solve key pain points across the entire software development life cycle, so make sure you take advantage of this time as well if you're interested in taking a look at that. All right, so to get started with provisioning your sandbox group on the gitlab.com SaaS platform, you want to first go to gitlabdemo.com and then click the Redeem Invitation Code button.
We've got the invitation code directly there (thanks, Rasheed, for putting that in the chat again for our late joiners), but I've also got the lab setup instructions in the upper right-hand corner, if you want to take a screenshot and then copy that link, or scan that QR code, to get the lab setup document.
I'm walking you through it live here as well: go ahead and go to gitlabdemo.com, click Redeem Invitation Code, then grab the invitation code, paste it in, click Provision Training Environment, and we're going to grab my gitlab.com username.
So log into gitlab.com; if you haven't signed up for a gitlab.com account yet, please do that. This is different from a self-managed or on-prem instance of GitLab that you might be using for your organization.
Everything's going to be happening here on gitlab.com. Once you're logged in, you can grab your username here; mine is cguarte, everything after the at sign. The invitation code has been dropped in the chat, and one of my colleagues can put that in there again.
Let me go ahead and just... oh, that might be the incorrect one there, Rasheed; there's a new one here for the Jenkins workshop.
Sorry about that. There you go, that's the new one, and for some reason my chat wasn't working, but now it's working again, so apologies for that. That's the invitation code that you'll be dropping in; it starts with aa827 5F7. So you'll grab your invitation code, paste it in there, put in your gitlab.com username, click Provision Training Environment, and when you see this page, that means your sandbox group has been created.
You can always go through that same process (redeem the invitation code and put in your username) to get back to your group if you've lost your place. Go ahead and click that blue My Group button, and it takes you to this page on gitlab.com, which is essentially your sandbox group, where you can create new projects for our hands-on session today.
That was just a brief overview, live on the call, to make sure that you have the ability to create that. And let me just make sure that I've got the lab setup instructions again in the chat.
All right, we just walked you through that: drop in your invitation code, grab your username from gitlab.com, put that in there, click Provision Training Environment, and you should get to a page like this. Click the blue My Group button and you'll be able to see your subgroup (well, your sandbox environment) for creating a new project, getting hands-on with GitLab CI, and getting your own pipeline set up.
You'll have a different group ID, a unique identifier for your test group, so don't worry if it doesn't match mine; everybody's going to have their own separate group to work in. If you get a 404, please let one of my colleagues know; Steve can help assist there.
Usually that means you've mistyped your gitlab.com username. You want to make sure that it's not including the at symbol, so just cguarte for mine; for yours, it'll be different.
You can grab that when you log into gitlab.com on our SaaS platform and click on your avatar at the top of the main navigation.
I talked about getting back to your group: you can always go through the redeem-an-invitation-code process again — click Provision Training Environment and then type in your gitlab.com username — to get back to the same group. It's not going to provision separate groups for you every time you click Redeem Invitation Code for your username.
A
You always go back to the same group that you've been assigned for today's hands-on workshop. And if you're getting a 404, that might be a mistyped username. You also want to make sure this is a username on our gitlab.com SaaS platform — so if you're not on gitlab.com in your web browser, you want to make sure that you've signed up for an account based on the lab setup instructions that I've shared.
A
All right, let's go ahead and begin — hopefully everybody's set up with the lab environment; I know some folks are still getting that set up. I just want to do a quick check on the pace: if everybody's hanging in there and staying along with me, you can give an okay or a thumbs-up in the chat to let me know everything's all good.
A
So for this first section, before fully pushing out our application: in our fictional scenario, our team wants to test a few different types of pipelines to see what fits their needs best. The first task our project manager wants to give us is to create a simple pipeline that builds and tests this racing application.
A
So you want to first go ahead and navigate to this URL — I've got it in the lab setup instructions, but I'll also put it here in the chat — and this is essentially our source project, which is going to give us all of the instructions that we're working through in our hands-on workshop. You want to open up this project in a new window or tab, and ideally keep it up side by side with your sandbox group that was provisioned through gitlabdemo.com.
A
So if you put that up in a separate tab or window, you'll be at the repository screen here, and then if you go into the Plan section and Issues from the main navigation of the CI/CD Adoption Workshop, you can see all of the instructions as issues for our hands-on workshop today.
A
So, as mentioned, it's going to be best to open the project in side-by-side tabs, because when you fork this source project into your sandbox group, those issues won't be copied over. So you'll want a place to reference that source project — either a separate tab, a separate window, or side by side on the same screen.
A
It's going to be a drop-down menu to select your specific namespace. You don't want to select your personal namespace, because you may not have a runner configured there if you've just set up your gitlab.com account — we've provisioned runners specifically for this hands-on workshop, for your sandbox group. So you can just copy the unique identifier from the sandbox group that was provisioned, paste it in there, and that'll automatically identify the subgroup you've been assigned. Then you want to rename the project to make things less confusing: if you keep it at 'CI/CD Adoption Workshop', referring between the source project and your sandbox project can get confusing, so I like to just call it 'Workshop Project'. You can leave the project slug at the default recommendation here.
A
A
So it'll take just a minute there to create the fork, and on screen it'll transfer you to your forked project. If I refresh here on the right-hand side — this is my sandbox group that was provisioned in the lab setup instructions — you can see the Workshop Project now listed there.
A
So that's the forked project. Another important thing: here on the left-hand side, I'm going to go back to that source project, because, as I mentioned, the issues aren't copied over in the forking of the workshop project.
A
So you want to go back to the CI/CD Adoption Workshop — again, I recommend a separate tab or window for that, side by side with the sandbox group and project that you forked — and then just bring up the main navigation and go to Plan and Issues, so you have these readily available to walk through.
A
Yeah — so the namespace that you would use to fork: essentially, you click the Fork button and it's going to be unique to you. When you redeem your test group through the lab setup at gitlabdemo.com, you'll get your own provisioned subgroup, and you'll have a unique identifier here.
A
You can just copy that unique identifier and, when you're getting the fork set up, paste it in there to find your namespace — so hopefully that helps you get that forked over. Excellent. As I mentioned, we want to keep this in that separate tab or window: that's our reference here, the source project, for all of our workshop instructions. So even though we forked our project, we have another step to complete, and that's to remove the fork relationship.
A
That way, all of the activity that we create in this workshop project is self-contained.
A
So in the workshop project that you forked — in your sandbox namespace — you want to go to Settings &gt; General, scroll all the way down to Advanced, expand that section, and then find Remove fork relationship. Click that red Remove fork relationship button, then type in 'Workshop Project' to proceed and confirm removing the fork relationship. If that's successful, it'll say the fork relationship has been removed, and you're all set up for the lab exercises that we'll be getting hands-on with. I'm going to switch back to my slides here — feel free to send any questions along the way. The forking and removing-the-fork-relationship processes are all in the lab setup doc that was shared with you.
It looks a little bit like this, so you can work on that as I go through some of the slides here to give you an introduction before we get hands-on. Just skipping through these quickly — I walked you through this live: the forking process, removing the fork, and getting to the workshop steps. All right — well, let's get started getting into the weeds here with the GitLab pipelines workshop doc.
A
Here in the chat — yeah, it has a link there to the workshop instructions. So, for the GitLab pipeline anatomy — essentially, for getting your GitLab pipeline set up — as I mentioned, jobs run independently, and sometimes on different runners. As you look at the fundamental parts of the GitLab pipeline, everything starts with the job: that's where all the workloads are defined, and jobs can be separated out into different stages.
A
In this example here, you've got the build stage, the test stage, and the deploy stage, with different jobs contained within those stages. In the default configuration, all of the jobs in one stage must complete successfully before proceeding to the next stage — so before the test stage can get kicked off and those jobs can get started, the build stage actually has to complete first. And this is the default configuration.
We'll show you how to modify this later, in the actual hands-on section, but in a default configuration the jobs from a previous stage need to complete before the next stage can get started. So when thinking about your migration into GitLab —
B
A
You might think about either current manual processes or processes that are already automated within Jenkins. You want to think about how that's separated and split up, and how you can utilize the fundamental concepts of GitLab jobs and stages to get at that structure. So: pipelines are defined per project, in the .gitlab-ci.yml file.
A
So we're working within a single project today and a single .gitlab-ci.yml file, and that file is always stored in the project's root directory. As I mentioned, you've got a pipeline that's defined by stages, and each stage can have one or more jobs — and then the default configuration is also worth highlighting here. You can see that the test stage has two jobs here.
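To make that anatomy concrete, here's a minimal `.gitlab-ci.yml` along the lines being described — the job names and echo commands are illustrative placeholders, not the workshop project's exact file:

```yaml
# .gitlab-ci.yml lives in the project's root directory
stages:            # stages run in this order, one after the other
  - build
  - test

build_app:
  stage: build
  script:
    - echo "building the app"

unit_test:         # both test-stage jobs wait for the build stage to finish,
  stage: test      # then run (in parallel with each other, runners permitting)
  script:
    - echo "running unit tests"

lint:
  stage: test
  script:
    - echo "running lint checks"
```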
A
A
You can also do it that way as well. Every job is going to be executed by its own runner, meaning that you can execute as many jobs at one time as you have runners — and if you need to scale runners up and down to meet different workloads, we actually have options for that as well.
A
We'll make sure to share those resources with you if you need to get set up with your own GitLab runners at scale, but we're not going to be setting up runners for today's workshop. We also have different statements within the job itself — this is looking at the anatomy and definition of a job, which defines the pipeline automation within your projects. You've got the script section, the before_script section, and the after_script section. before_script is exactly what it sounds like:
A
it's commands that execute before the actual script statement; they're concatenated with script and run in the same shell. You might utilize this to run any steps that are necessary for the script statement to execute properly.
A
For example, installing the right CLI tool to be utilized within the environment, if that's not already preconfigured or provided within the Docker image — if you're using the Docker executor, for example. And then the script keyword is the actual shell commands that run for that job, the statements that will be executed by the runner — these are essentially just Bash scripts. So, in the example I just gave:
A
if you have a before_script installing the AWS CLI, you could have the script keyword actually executing an AWS CLI command, since it's already set up in that environment. And like before_script, you also have after_script, which is optional — it's not required — and it's good to note that before_script is optional as well.
A
The only thing that's required in the job definition is the script statement. The after_script, though, runs in a separate shell — separate from the before_script and script statements — and it runs after the script keyword, as you can tell from the name. A good example of an after_script use could be cleanup.
A
I know I mentioned that cleanup is taken care of, but you could also utilize it for some form of cleanup if that's necessary. You can also evaluate the exit code of the script in the after_script section and have additional job behaviors depending on the results — if you want to send a notification to your chat client, or send an email out based on the result of what happened in the script section, you can do that in the after_script.
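Putting those three sections together, a job might look like this sketch. The AWS CLI install mirrors the spoken example (the bucket name is a made-up placeholder), and checking GitLab's predefined `CI_JOB_STATUS` variable is one way to react to the script's result from `after_script`:

```yaml
deploy_check:
  before_script:                     # runs in the same shell as script
    - pip install awscli             # set up a CLI tool the script needs
  script:                            # the job's pass/fail status comes from here
    - aws s3 ls "s3://example-bucket"
  after_script:                      # runs in a separate shell, even on failure
    - |
      if [ "$CI_JOB_STATUS" != "success" ]; then
        echo "script failed - send a chat or email notification here"
      fi
```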
A
A
That job only fails if the exit code from the actual script section itself is nonzero.
A
GitLab runners are out of scope for today's workshop — we're not going to be setting up a GitLab runner today; it's already been preconfigured for you — but it's worth highlighting what they are and how they're being utilized. GitLab runners are an application that you install on another instance, or a piece of infrastructure that you can manage — and again, if you're a SaaS user, you actually have the availability of SaaS runners
A
that GitLab manages for you. These runners are an application that works with the GitLab CI/CD process to run the jobs from your pipeline. So no matter if you're a self-managed user of GitLab or a SaaS user, you can bring your own runner infrastructure: you can utilize self-managed runners, basically installing the GitLab runner on infrastructure that you own and manage.
A
A
Due to time constraints, for today's purposes my colleague Steve and our DevOps team have already provisioned and set up Linux-based GitLab runners for our sandbox projects to use. But there are potentially many considerations and use cases for different runner configurations — I'm going to share a few links here in the chat for more details on the runner setup. All right.
A
So let's just jump into the issue, and I'll walk you through creating a simple pipeline on GitLab and sending those workloads to the runners that we've already provisioned for you. Feel free to follow along in your provisioned sandbox environment, or just look over my shoulder here.
A
You can do these tasks at your own pace later on. So I'm going to go ahead and switch over to my secondary screen and go to the source project — that CI/CD Adoption Workshop project that I shared with you — and make sure that we go to the first issue here, Simple Pipeline. Then, on the right-hand screen, I'm going to go to my workshop project that was forked over from that source project.
B
A
Getting back to the screen of the workshop project: the first step here would have us import the CI/CD pipeline, and we've actually taken that step already by walking you through it. So we can skip step one — that's just the forking of the source project and removing the fork relationship — and we're going to be creating a basic pipeline now. What that has done
A
is given us a starting point, with a .gitlab-ci.yml file — and this is the definition of the simple pipeline that we'll be editing and modifying for today's workshop. So from your workshop project — if you aren't there already, you can click the Workshop Project link in the breadcrumb to get to the project overview — we can go ahead and click into the .gitlab-ci.yml file from the repository view and take a look at it.
A
We've got our two stages already defined here. It's not necessarily a requirement to define the stages, but it is helpful to do that if you want to override the default behavior — the default is to have build, test, and deploy stages, for example — but if you are defining your own stages, those are the only stages that will be utilized for your pipeline. So here we've only defined build and test, as well as specifying the image to utilize — the Docker image to utilize
A
from the Docker Hub registry: we're utilizing a Node Docker image and specifying a specific version number. We've also set up specific caching behavior, to cache modules in between jobs. And then, what I want to take a look at more closely here is that we have a build app job.
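The top of the file, as described, follows this general shape — the exact Node image tag and cache paths shown here are placeholders, not necessarily the workshop project's real values:

```yaml
stages:            # only the stages listed here exist in this pipeline
  - build
  - test

image: node:18     # Docker Hub image; placeholder version tag

cache:             # share installed modules between jobs
  paths:
    - node_modules/
```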
A
A
So in the before_script we're essentially setting up the environment for the Node application that needs to be built, and then the script statements actually execute the build process within the job. I'm not going to get too into the weeds here — just highlighting what it's doing within a job that's been defined. We also have the unit test job, which has been defined to run in the test stage, and we've also got the before_script defined
A
here, setting up the container environment, as well as the script statements to run the testing that's required for the unit test job in the test stage. So that's just a walkthrough of that simple pipeline.
A
But what we want to do here in step three is add the after_script keyword, to echo out that the build is completed. Something very simple, but it shows you an example of taking an existing pipeline and adding the after_script. So we're going to open this up a little bit more — sorry, instead of the repository view, you want to go to Build on the main navigation and go to Pipeline editor.
A
In the editor you can actually edit the file inline, so I'll go to the bottom of the file, to the unit test section, and add in the after_script keyword and the script to run in the after_script section of the unit test job — which is essentially just echoing out that the job has run — since we're adding that to the unit test job itself.
All right — if you've made any syntax errors, for example if you pasted the after_script in at the wrong column, we've got built-in linting here to make sure that everything is set up correctly.
A
I'm just showing you what happens if you were to have an issue in your syntax: you can hover over the squiggly line underneath the specific section, see how that works, and see what the recommendation is. The indentation is very important here when adding that after_script to the unit test job. All right — once we've added the after_script, we can go ahead and click Commit changes.
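The edit being described amounts to appending an `after_script` block to the existing job — sketched here with placeholder `before_script`/`script` commands, since the workshop file's exact contents may differ:

```yaml
unit_test:
  stage: test
  before_script:
    - npm install          # placeholder environment setup
  script:
    - npm test             # placeholder test command
  after_script:            # must align with before_script/script above
    - echo "unit test job has run"
```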
A
That's where you can update the commit message, which might be helpful when you go to the Pipelines view: if you leave it at the default, you'll see a list of pipelines as we go on through the workshop where it's not very easy to see which change each pipeline relates to. So we can update the message to say 'Add after_script to unit test'. We make sure that commit is pushed to the main branch, which will trigger the pipeline to build.
A
If there are no issues detected here, hit Commit changes, and this should refresh — you can also refresh the page to see the pipeline that's been kicked off. We can also go into the main navigation of our workshop project, go to Build and then Pipelines, and see that we've got this latest pipeline kicked off based on the push to the main branch — adding the after_script to the unit test job. You can click the specific identifier, the hash — 101 here.
A
It's going to be unique to your own project. We'll click that, and we can actually see the pipeline running: we've got the build app job running first and, as expected, in the test stage the unit test job is waiting until the build app job is completed — once the build app job completes, the unit test job will go ahead and kick off as well.
A
So that's a simple pipeline. We've gone through and looked at what it's doing right now, and made the modification to the unit test job — just to echo out that the unit test job has run; we made a modification there to the echo text. But if you click into one of the jobs, you can see that it hasn't been triggered yet — it's still waiting on the build app job to complete.
A
If you want to click into that, you can see some of the log output and get more information — this is helpful for troubleshooting any issues that you're running into when setting up your pipeline automation within GitLab. We'll let that run; we can revisit it to see how things ran and make sure that the after_script was executed. All right — so we'll go ahead and go back into the source project,
A
to the issues list, and get ready to move on to issue two. But I'll go back to the slides here — I've got a couple of things I want to talk to you about before we go into that second issue of our hands-on workshop. The second section is going to talk about job execution order and DAGs — DAG being short for directed acyclic graph.
A
So, after showing off your simple pipeline to the team, they loved it — but they're wondering if you could speed up the process a little bit.
A
In the test stage of our workshop project, the unit test had to wait for the build to complete — you could see that it only started running after the build app job had completed — and the team actually wants to run the unit tests a bit more quickly. So we want to speed up the process a little bit, and we've decided to show off some of the skills we'll be learning here: how we can create a pipeline with different execution orders,
A
and a large DAG — directed acyclic graph — to show what's really possible with GitLab pipelines. So the first concept to cover here is adjusting the execution order for pipeline efficiency. The pipeline graph that I was just showing you, with the build app and unit test jobs, shows us the different stages and jobs for the pipeline we've created in the workshop.
A
So this is another example, with the build app job in the build stage and then two different jobs in the test stage: the code quality job and the unit test job. In the current state, if we leave it at its default configuration, the jobs in the test stage will execute
A
after all jobs in the build stage are completed — and you can see in the pipeline graph, while the pipeline is being executed, that the gray dot means a job is waiting for dependent jobs to complete. But for the next state that we want to get to — the desired state —
A
we actually want to make sure that certain jobs can run right away. The code quality job does not actually need any of the results of the build job itself, and it can execute in parallel to the build app job — but from a visualization standpoint, the pipeline-graph standpoint, we want to keep the code quality job in the test stage; we don't want to move it into the build stage.
A
So the QA team has added this code quality job in the test stage, and that really makes sense — but with the default configuration it'll have to wait, and what we want to do is speed things up. We want both jobs to run in parallel — code quality and the build app job running in parallel — and that will speed up our pipeline execution.
A
To do that, we add this empty needs keyword. The empty needs keyword specifies that the job can start as soon as the pipeline starts, regardless of what stage it's in. So even though the code quality job is in the test stage — and you've got the build app job happening before that, as defined in your list of stages — with that needs keyword and the empty array there, the code quality job can run
A
B
A
right away. If a team has a code quality tool that they want to invoke from the command line, they can do that directly there in the script commands and use it in the job — but GitLab also offers a code quality template that can be utilized as well, and we'll go over the use of templates in an upcoming section. This is just showing the example of the needs keyword without any dependencies on previous jobs — that empty needs keyword allows us to execute these jobs out of order.
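The change in question is just the empty `needs` array on the test-stage job — sketched here with an illustrative echo in place of a real scanner, since the actual job could invoke a CLI tool or GitLab's code quality template:

```yaml
code_quality:
  stage: test      # stays in the test stage for the pipeline graph
  needs: []        # no dependencies: start as soon as the pipeline starts
  script:
    - echo "scanning the code base"   # placeholder for a real code quality tool
```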
A
A
So we'll add that, and make sure that the build and code quality jobs execute in parallel. And since a code quality job can take a bit of time scanning the entire code base, we've actually decreased the time to pipeline completion, because we're not waiting for the build app job to complete before running the code quality job. By making this change, our developers can get feedback about their code changes a lot more quickly, making the pipelines a little bit more efficient. All right.
A
We do have another aspect to cover: a more advanced use of the needs keyword, which is the directed acyclic graph, or DAG. You can utilize needs in any pipeline — for example, all the jobs here are defined in a simple .gitlab-ci.yml file, and we want to run some of these jobs simultaneously, in the same pipeline, where they relate to a specific aspect of the project.
A
B
A
We don't need the iOS testing job to wait for the Android build to complete, for example — it can just wait for the iOS build to complete. What this does is create a connection between specific jobs within your pipeline, so that they can execute outside the normal stage-by-stage progression order.
A
So only the Android jobs need to execute in that specific order of, say, build, test, and deploy; the iOS jobs can run at the same time, in parallel, depending only on the order of the iOS build and iOS test jobs before the iOS deploy job can complete. And with any pipeline that has three or more needs dependencies declared, GitLab will generate that directed acyclic graph view, giving you a clear illustration of the job dependencies and job order defined in your pipeline.
A
So that's an example, and we'll get more hands-on with it in our workshop. Another option that I want to present going into the hands-on portion is GitLab pipeline workflows that utilize a stageless pipeline for more efficiency.
A
A
Essentially, it can allow you to explicitly configure the execution order and make things faster, rather than relying on the default configuration — the order of stages defining the order in which jobs execute within those individual stages. All right — so let's get hands-on again.
A
Let's switch back to the source project and your sandbox project: go to the issue tracker on the CI/CD Adoption Workshop project and go to issue number two, Execution Order and DAGs. This challenge builds off of the simple pipeline we created in the first track, and shows you how you can modify the execution order and create a DAG, or directed acyclic graph. For the first step here, we're going to modify the execution order, just like I walked you through in the slides.
A
A
In Pipelines, as I highlighted earlier, you can see that these just ran sequentially — but we want to run them in parallel now. We'll do that with the needs keyword, so let's edit the .gitlab-ci.yml file.
A
Let's go back to the main navigation of the workshop project and go to the pipeline editor. Right now we only have the unit test job, so let's add in that code quality job that our QA team wants us to add for our project: copy that snippet and paste it in at the end of our .gitlab-ci.yml file in the pipeline editor. Then we'll make sure that both of these jobs — not just the code quality job, but the unit test and code quality jobs — run at the same time, as soon as the pipeline kicks off.
A
Code quality already has that needs keyword with the empty array — no dependencies there — and we're going to add that directly to the unit test job as well. Depending on where you pasted it, a red squiggly line means you need to correct something in your syntax — make sure it's at the right column for the unit test job. That's going to make sure that the unit test job can run as soon as the pipeline kicks off.
A
It should look something like this, and we're going to go ahead and commit those changes — we'll say 'Add code quality and run test jobs right away', hit Commit changes, and push that to the main branch. Then we'll be able to see the pipeline status directly from the pipeline editor, or we can go to the main navigation and click Build &gt; Pipelines. Since we've named that commit specifically — and all of our pipelines are getting kicked off based on commits to our project —
A
we see the 'Add code quality and run test jobs right away' commit. We'll click the pipeline ID, actually, and we'll see the pipeline graph: the build app job is running — it hasn't finished yet — but code quality and unit test are already running right away. So it looks like our needs keyword is working
B
A
as we expected, and we're creating more efficiency now within our project pipeline, just by adding that needs keyword without any dependencies. All right — the next step here is to set up the directed acyclic graph. What if we had a lot of stages, and relationships between jobs, that we wanted to run as soon as possible? We can do that with the DAG, as I explained earlier in the slides.
B
A
We're not going to be doing anything super complicated with the DAG — it's just going to echo out a simple statement to the job log within the runner executor's environment — but we're going to create relationships between all of the different jobs that we'll be adding. So let's go ahead and go back to the pipeline editor
A
and make sure that we add a new stage here. We've got build and test already defined, and adding new stages to your GitLab pipeline is as easy as adding another line — with the right syntax, naming the stage appropriately. This list is exactly the order of the stages, and of how the jobs within those stages will be executed.
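Adding the stage is literally one more line in the `stages` list — order in the list is execution order:

```yaml
stages:
  - build    # runs first
  - test     # runs after the build stage completes
  - deploy   # the newly added stage, runs last
```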
A
At first it was just build and test, and then we added the deploy stage: build comes first, test comes right after that, and then deploy comes after that — just based on the order of the lines they're on within the pipeline definition. And then below, at the very bottom of the pipeline
A
editor, we want to add all of the jobs, underneath the code quality job. We're just going to copy them using the snippet here — we're not going to type that in manually, to save some time — and paste that in. You can see it's got the needs keyword defined for a lot of these different jobs, to make sure that we create that directed acyclic graph and more efficiency within our pipeline.
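The pasted snippet follows this general pattern — the real workshop snippet will differ in its details, but each lettered chain depends only on its own predecessor:

```yaml
build_a:
  stage: build
  needs: []
  script:
    - echo "build a"

build_b:
  stage: build
  needs: []
  script:
    - echo "build b"

test_a:
  stage: test
  needs: [build_a]     # starts as soon as build a finishes
  script:
    - echo "test a"

test_b:
  stage: test
  needs: [build_b]     # independent of build a entirely
  script:
    - echo "test b"

deploy_a:
  stage: deploy
  needs: [test_a]
  script:
    - echo "deploy a"

deploy_b:
  stage: deploy
  needs: [test_b]
  script:
    - echo "deploy b"
```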
A
Before committing those changes, we can actually utilize the Visualize tab to see the relationships we've created between the different jobs just by pasting in that snippet: Build A needs to happen first, before the Test A job can run — if you hover over it, you can see that relationship, and the same for Test B, Test C, and so on. But Test B doesn't rely on Build A to complete —
A
it'll just rely on Build B completing before Test B can actually run. And as you go through to the last stage that we've just defined, the deploy stage, if you hover over a job in the deploy stage you can see how that connects there as
A
well. All right — so I'm going to go back to the Edit tab and commit these changes, mentioning that we're adding the DAG, and push those changes to the main
A
branch. Then we can go back to the main navigation, to Build and then Pipelines, and we'll see our latest commit for adding the DAG. We'll click the pipeline ID for the directed acyclic graph and see all the different jobs that have been kicked off: the build jobs started off first — Build A, as well as the build app job — and as Build A, Build B, and Build C completed, the Test A, Test B, Test C, and so on jobs kicked off.
B
A
And it won't kick off the deploy jobs until their tests complete — Test C completed first, so Deploy C already kicked off. So you got to see that in action: some of these jobs can execute out of order, based on the needs relationships that we've defined. All right — that wraps up section two. We're going to move on to section three, Rules and Failures, but I want to take a quick break first — I know we've gone a little bit over an hour now, with a lot of content.
A
So let's take a seven-minute break, as we're a little bit over halfway through — grab some more coffee or something to drink, take a bio break, and we'll be back to tackle even more CI/CD functionality after we return. Just let me know how the pace is going — are you keeping up okay with everything? Give me a thumbs-up in the chat before you take your break, and we'll see you back here in about seven minutes.
A
All right everybody, welcome back from the break. Hopefully everybody had a chance to stretch, have something to drink and take that quick break. All right, let me go ahead and proceed to this next section here, for rules and failures. So let's set up the scenario for this section: as you come back to the team and show them your new pipeline, you notice that one of your test jobs is failing. After taking a look into the job,
A
it's actually determined that we don't need to enforce it as passing, but we still want to see the results. So this next section will show you how to utilize rules and failure clauses in your GitLab pipelines to allow that kind of behavior. First I want to talk about allowing job failure. As I mentioned before in the script section, any activity that happens within the script section that generates an exit code other than zero within the environment will fail the job.
A
So an exit code of one, for example, will result in a job failure. But perhaps we want to find a way to configure the pipeline so that, even when say the unit test job fails, all of the jobs in our pipeline can still execute; we're not failing the entire pipeline, so the pipeline will still proceed and continue on.
A
We can do that by simply adding the allow_failure keyword to the job configuration and setting it to true. And so, as you can see there in that pipeline graph, with allow_failure set to true you'll get an exclamation point instead, saying that the job in that pipeline has failed, but it doesn't actually prevent any subsequent jobs from executing. All right.
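A minimal sketch of the allow_failure keyword just described (the job name and script are illustrative):

```yaml
unit-test-job:
  stage: test
  script:
    - npm test          # a non-zero exit code here would normally stop the pipeline
  allow_failure: true   # the job can fail without blocking subsequent jobs
```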
What about the rules keyword? So I mentioned rules and controlling the behavior of when jobs execute, or when a job is created in a pipeline. Rules determine when to include or exclude jobs in pipelines. The defaults for jobs are when: on_success, so when the previous job or previous stages have completed successfully, it's going to go ahead and kick off, and allow_failure is set to false.
A
So if that job fails, then the pipeline will not be able to continue. Those are the defaults, and that's why the behavior works the way you'd expect.
A
But you can always modify that as needed. A job is included in a pipeline if a rule evaluates to true and has a clause of when: on_success, when: delayed or when: always, so you've got different criteria there for when a job is included. It's also included in a pipeline if no rules are defined and no when clause is specified. And so you've got an example here for a job that only runs on web.
A
And so what does the rules keyword do here? The rules block is actually evaluated before the script executes within the runner environment, so you're not actually doing the workload yet until the rules have determined that the job can actually execute. So here we've got an if statement looking at one of the included CI/CD variables within GitLab, called CI_PIPELINE_SOURCE, and checking to see if it equals web. What that means is:
A
if the pipeline is run from the web interface of GitLab, then this job will actually execute. If the pipeline source is not being triggered from the web interface (say you're just committing a push to the main branch, for example), it's not going to execute this job. So that's exactly how that works.
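The web-only rule just described might look like this sketch (the job name is illustrative):

```yaml
job-only-on-web:
  script:
    - echo "Triggered from the GitLab web UI"
  rules:
    # evaluated before the script ever runs on the runner
    - if: '$CI_PIPELINE_SOURCE == "web"'
```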
A
There's a lot of different syntax to cover here for the rules keyword. First you've got your clauses, which determine when to add the job to the pipeline. We've got operators that help you evaluate and compare the values of variables, for example; job attributes, which are conditions for the job itself; and when options.
A
These are quite extensive; you can use any number of them in combination with each other to add more complexity to your pipelines. I'll drop some links here in the chat, and we'll go through it in the hands-on exercise, adding that complexity to your sandbox project, but keep this as a reference.
A
These are all the different capabilities that you can set up in your GitLab CI file, and the jobs within it, to control their execution, and I'll touch on one example that I've seen in my experience. We've got when, for example, to control when the job will be created, and we also have, for example, jobs set up with on_success, so I only want to
A
run that job when the previous job is successful. In another example, you want to create a job based on the failure of a previous job, and maybe trigger opening a ticket in a different system, or notify specific individuals on a team, sending an email out to those individuals or communicating via chat. So you could set when: on_failure based on the previous job.
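As a sketch of that notify-on-failure pattern (the job name and notification step are hypothetical; in practice you'd call your ticketing or chat system's API here):

```yaml
notify-team:
  stage: .post               # built-in stage that runs at the end of the pipeline
  script:
    - echo "A previous job failed; opening a ticket / notifying the team"
  when: on_failure           # only runs when an earlier job has failed
```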
A
If that job has failed, then you can notify other members of the team. So, rules for when a job is not created in a pipeline: a job is not included in a pipeline
A
if none of the rules defined for the job evaluate to true. This is an example with different rules defined: if a rule evaluates to true but has a clause of when: never, that will make sure that the job is not included in the pipeline, and the same holds if there are no rules defined but a when: never clause is specified.
A
So this is an example job here that's just going to echo out to the job log within the runner environment, but it would only run if the pipeline source is not a merge request event. If the CI_PIPELINE_SOURCE is a merge request event, when: never means that it won't run this job.
A
If we've got a merge request created, the pipeline is not going to trigger that job. And if you've got the pipeline source as schedule (so if you've scheduled a pipeline through the project configuration), this job is naturally not going to run either, so that's a way to make sure that you can add that complexity there. Otherwise, we've got when: on_success, meaning that if the previous job completed successfully, then this job will actually run.
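Putting those clauses together, the echo job described above could look something like this sketch:

```yaml
example-job:
  script:
    - echo "Hello from the runner"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: never            # skip for merge request pipelines
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: never            # skip for scheduled pipelines
    - when: on_success       # otherwise run if earlier jobs succeeded
```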
A
This is just showing you how to control the pipeline triggers for a job based on different rules and if statements. So next we've got a case where we want to configure a job for manual execution, and this is commonly for, say, a deploy job. We've got a deploy job defined here, and it's set up to run in the deploy stage. We've got a script command to run the deploy activity for this deployment, but it only happens manually, as I mentioned earlier in the platform advantages.
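A minimal manual deploy job along those lines (the job name and script are illustrative):

```yaml
deploy-job:
  stage: deploy
  script:
    - ./deploy.sh            # your actual deploy activity
  when: manual               # shows a play button in the pipeline view
```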
A
This is what it actually looks like in the interface, with that play button, and depending on how you've configured your project with protected branches and protected environments, you can actually control who has access to that button and whether any approvals are required before that manual activity can take place. That's going to be out of scope for this workshop today, but I just wanted to highlight that that's the capability this enables. We also have another rules
A
example here, with multiple rules. So for the job that's defined here, you've got if the pipeline source equals merge_request_event, and if it's schedule; then the job will execute.
A
So you can combine different criteria, essentially; or rather, not combine, because it'll execute if either of these criteria meets a rule. It's not requiring that both of these match: if it's a merge request event or if it's a scheduled pipeline it'll run, and the rules evaluate in sequential order. All right.
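That either/or behavior comes from simply listing multiple rules; they're evaluated top to bottom and the first match wins (sketch, with an illustrative job name):

```yaml
either-or-job:
  script:
    - echo "merge request or scheduled pipeline"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
    # the job runs if either rule matches; otherwise it's excluded
```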
And then let's take a look at a different example of using the when keyword. Based on this job definition for the docker-build job: when is this job created, and when will it execute? It's utilizing the rules when keyword to actually delay the start of this job, and it's going to begin in three hours. You might be asking: three hours from what? Based on our documentation,
A
the timer starts immediately after the previous stage has completed. So if this is a job that's been defined in, say, the build stage, but there was another stage before the build stage, the timer won't actually kick off until that previous stage has completed. And of course you've seen allow_failure by now; this allows the job to fail and still let the rest of the pipeline kick off.
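The delayed docker-build job described above might be sketched like this (the image name and condition are illustrative):

```yaml
docker-build:
  stage: build
  script:
    - docker build -t my-image .   # illustrative build command
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: delayed
      start_in: 3 hours            # timer starts once the previous stage completes
      allow_failure: true          # a failure here won't block the rest of the pipeline
```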
A
As I mentioned, it's that sequential order of evaluating each rule; it moves on to the next one until a rule evaluates to true. So if it does evaluate the merge request event rule, that is, if a merge request event happens, then essentially the job won't be kicked off here, and it won't need to proceed to the next rule, because the job is not going to be kicked off.
All right, and a final example here is the use of rules:changes and the if statement. So you can check to see if a CI/CD variable is set and matches a specific string value.
A
You could utilize this with, say, the manual pipelines interface, for passing along CI/CD variables when kicking off a pipeline manually; if that value is passed in manually, then that will evaluate as true and the job can potentially get kicked off. And then there's changes: changes will look at specific files or directories within your project.
A
So, for example, this docker-build job will only run if there are changes to the Dockerfile or any file changes within the docker-scripts directory, and then the job will run manually. It's enabled to run manually only if the rules here evaluate as true.
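That changes-plus-manual combination could be sketched as (paths and job name are illustrative):

```yaml
docker-build:
  stage: build
  script:
    - docker build -t my-image .
  rules:
    - changes:
        - Dockerfile
        - docker-scripts/**/*    # any file under this directory
      when: manual               # the job appears, but waits for a manual run
```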
This slide here quickly goes over the variables processing order. The order of precedence for variables runs from highest to lowest, starting with the CI/CD pipeline variables that are defined there,
A
going on down to the project-level variables, group-level variables and so on. You'll get a copy of these slides after today's workshop (you'll get that tomorrow), but I wanted to highlight that there is a specific processing order for variables, depending on where they're defined. All right.
A
Let's jump back into the hands-on steps here for rules and failures and put that into practice. So let's go back into the issue tracker for the source project, the CI/CD Adoption Workshop, go to Rules and Failures, and then in the sandbox project, let's just go back to the main workshop project page there.
A
So let's go back into Build in the main navigation of our workshop project, go to Pipeline editor, and then remove all that code at the very end for the DAG, right after code quality. Starting with build a, highlight all that and delete it out. You should now have a much shorter .gitlab-ci.yml file: just the code quality job, unit test job and build app job. And then what we also want to do is remove the deploy stage.
A
So we're just back at the build and test stages within our pipeline; it should look a little bit like this. Now we're going to add some rules to the existing jobs that are remaining here. Let's start with a basic one on our unit test job. Let's just say we only care about the unit test running if we have changes being pushed into the main branch. So let's go to the unit test job at the very end.
A
This matches basically any changes that are pushed or committed to the main branch; if that evaluates as true, then the unit test job will be allowed to execute. Before committing this, we're going to add the allow_failure keyword in the next step, so that we can still allow the pipeline to run even though a failure has happened on one of our jobs. So what if our code quality job has been failing?
A
Let's simulate the failure within the code quality job. We're going to add in an additional line here and copy the snippet for the exit code. So we're going to simulate the failure of code quality by sending an exit code of one within this code quality job, and then we want to allow failure on the rule that we just set. So let's test this out on the code quality job: add the rules to the code here below, copying the rule snippet
A
for main, as well as allow_failure: true. This should look like this here: we're allowing this job to run if the commit branch equals main.
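After both hands-on steps, the code quality job might look roughly like this (a sketch following the exercise's pattern; the job name and echo line are illustrative, not the workshop's exact snippet):

```yaml
code-quality-job:
  stage: test
  script:
    - echo "Running code quality checks"
    - exit 1                 # simulate a code quality failure
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
  allow_failure: true        # the failure is reported but doesn't stop the pipeline
```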
A
So because we pushed to the main branch, we should be able to see both the code quality job run as well as the unit test job run, because that if statement evaluated to true, looking at the commit branch as main. Let's go to the Build and Pipelines menu item here in our main navigation, go back here and click the pipeline ID, and we see both the code quality job and the unit test
A
job have run, with an exclamation point noting that the code quality job did indeed fail, but it's allowed to fail. That tooltip there shows it; if you click into the job, you can see what happened, and we've got the job failed with exit code one, because we simulated that there. But because of the configuration of the job, we're allowing it to fail, and still the unit test job and the build app job continue to run. So that's showing how rules and failures work with this hands-on exercise.
All right, so now that we've applied rules and the allow_failure keyword, let's look at our last section here, focusing on enabling SAST, a static analysis security tool for analyzing our source code for vulnerabilities, and on how artifacts are managed in our pipeline.
A
So after we've fixed up the pipeline and it runs smoothly, one of the executives on our team stops by to check in on the progress. They want to make sure that we as an organization are taking full advantage of all the features of GitLab, like security scanning and the artifacts functionality, and they ask if we can demo this in a pipeline during the next stand-up. So: how to get SAST from GitLab.
A
This is pretty easy because, with GitLab, we actually provide the SAST analyzer (the tool for scanning your source code for vulnerabilities with the static application security testing job) directly within GitLab itself. This is available for both Premium and Ultimate customers, but you get more advanced vulnerability management and reporting capabilities in Ultimate. And it's pretty easy to do:
A
you just add in the vendor template that GitLab provides. So rather than going to an external tool, you can leverage the built-in SAST analyzer in your GitLab pipeline.
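Enabling SAST is a one-line include of the vendor template; the template path below is the commonly documented one, but check the docs for your GitLab version:

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml
```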
A
So that's how it works, and I want to cover what a template is. Adding a SAST scan job to your project is as easy as a template, but what is a template in GitLab? It's a way to share CI/CD capabilities with other teams in your organization, and it's also a way to consume those CI/CD capabilities from other teams; and in the situation of our security tools,
A
it's also the way that our GitLab engineering team provides those capabilities, via templates as well. There's nothing magical about a GitLab CI/CD template: it's really just another GitLab CI YAML file that you're including into your project's GitLab CI configuration. It's always included into a pipeline through that include statement in the project's .gitlab-ci.yml file, and the template
A
jobs are created in the pipeline based on the stage they're defined in within that template file, along with any applicable rules defined there as well. So that's essentially how our engineering team is able to share the SAST analyzer jobs with you, and it's also how you can share CI/CD best practices with other teams throughout your organization. There are four types of includes that you can leverage. We've got include:template, which is all of the content
A
that's provided by the GitLab product itself and the engineering team: the SAST analyzer, the code quality scanning, our Auto DevOps functionality for automating your build process, your testing process and so on. You've got the include:file keyword, which is a way to reference a YAML file located in a project in the same group hierarchy as your current project. This is commonly how a lot of customers that I've worked with set up their template CI YAML files.
A
They've got a single project that they create all these template files in, and then you can include those files downstream in a separate project within the same hierarchy. You also have include:local. So if you're doing a decomposition of your GitLab pipeline within the same project, you can do that locally within the same project and make it a little bit easier to read.
A
If you've got a lot of automation happening within your GitLab pipeline, you can separate that out into separate files within the same project. And then finally you have include:remote: if you have template files that are included from a publicly available project repository, you can do that as well.
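The four include types side by side (the paths, project name and URL here are illustrative):

```yaml
include:
  # 1. vendor template shipped with GitLab
  - template: Security/SAST.gitlab-ci.yml
  # 2. a YAML file from another project in your group hierarchy
  - project: my-group/ci-templates
    file: /templates/deploy.yml
  # 3. another file in this same project
  - local: /ci/build.yml
  # 4. a publicly reachable URL
  - remote: https://example.com/ci/lint.yml
```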
So there are a number of ways that we can customize the behavior of jobs in our pipeline. We can override the default behavior of a template job by specifying keywords in a local job.
A
This will keep most of the job the same, but use any values set specifically in the job in the local CI file. We can also utilize environment variables to control certain behaviors, with if conditions or by values set in the job
A
itself. So remember the predefined variables that we discussed earlier: this is exactly how our SAST analyzers can be built upon or customized based on different project needs.
A
We've got different variables here for the SAST analyzer, for configuring different items within it that are specific to your project as well. If you define those within the project-specific .gitlab-ci.yml file, you're going to be overriding the defaults from that inherited template file. So, configuring the SAST language scanner for Node.js, you've got a couple of different options here; you can exclude different paths, and there are lots of other options to configure this.
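Overriding the SAST defaults is done with variables in your own .gitlab-ci.yml; SAST_EXCLUDED_PATHS and SAST_EXCLUDED_ANALYZERS are documented examples (the values below are illustrative):

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml

variables:
  SAST_EXCLUDED_PATHS: "spec, test, tests, tmp"   # skip scanning these paths
  SAST_EXCLUDED_ANALYZERS: "brakeman"             # e.g. keep only the analyzers you need
```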
A
It's going to be a little bit out of scope for today's workshop, but I wanted to share that this is essentially how you can exclude specific analyzers from detecting certain code that might be part of the project, if you only want to utilize the Node.js-based scanner, for example. And then for artifacts: you've got a really great way to both browse and download artifacts that are generated from your pipelines, from different places
A
in the UI. You can do that from the pipelines page, the individual jobs or even the jobs listing, and then you can also browse a directory of the artifact that was generated directly from the UI, without even downloading it. So, for build artifacts:
A
in our simple pipeline for build and unit test, we've already utilized the artifacts keyword to store the dist directory that contains the results of the build. We've set the artifact to expire in an hour, but even after an artifact expires, it won't be deleted until a newer artifact becomes available; so it'll be deleted after the next build runs, an hour after that artifact was created.
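The build job's artifacts configuration described above, as a sketch (the job name and build command are illustrative):

```yaml
build-app:
  stage: build
  script:
    - npm run build          # writes the build results into dist/
  artifacts:
    paths:
      - dist/
    expire_in: 1 hour        # the newest artifact is kept until a newer one exists
```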
A
If we don't set an expiration time in our job, the GitLab instance-level default value is used instead, and if you're a SaaS customer, the default expiration for artifacts is set to 30 days. There's also one other thing to be aware of: you must have the appropriate permissions to view and download artifacts. For public projects, any user with Guest permissions or greater can view and download them, and for private projects you need Reporter permissions or greater. All right.
A
Let's go back to the issue tracker in our source project, go to SAST and Artifacts, and we're going to go ahead and add the static application security testing tool to our existing pipeline. So in our workshop project, let's go to the pipeline editor and then copy the snippet here to include the template file from the GitLab project. You can do that after the image declaration there. And then once you add in that include statement, you can see here (if you haven't expanded this already) there's a tree button here.
A
If you click that open, you can see a reference to all the template jobs that have been included, and if you click on one, you'll see the exact source code for the template file.
A
I won't go through it all today, but that's just showing you how that works. And then, if you also want to see how that is merged with your existing jobs, you can click Full configuration, and you can see how that looks there, with all of the contents of the SAST template file along with the unit test
A
and code quality jobs that we've created as well. All right, let's see here; I think that's it, so we'll go ahead and commit those changes for adding the SAST scanning, and we'll see that running; submit that to the main branch. If we go back to the main navigation, Build and Pipelines, we'll see our add SAST scanning commit; click into the pipeline ID for that, and there you go: we just added the template file for the SAST jobs, and based on the default stage ordering
A
here, it does wait for the build job to complete before it can run. But if you wanted to override that behavior, you can do that with the inheritance model, and we're going to go through that here in step two. Let's go back here to the main navigation, go to Pipeline editor, and let's just say we want the SAST scanning to happen in a security stage instead of the test stage.
A
We'll use that needs keyword with an empty array to make sure that the SAST scan can run as soon as possible, so our pipeline runs efficiently. The last thing: we want to make sure that we are storing the build artifact within our pipeline. We want to store the results of the build app job as an artifact, so we're going to go ahead and copy this entire build app job definition and then replace the existing one.
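The stage and needs overrides from this step could be sketched like so (this assumes a security stage was added, and that the template's base job is named sast; an override with the same job name merges with the template's definition):

```yaml
stages:
  - build
  - test
  - security

sast:
  stage: security            # move the analyzers out of the default test stage
  needs: []                  # start immediately, without waiting for build
```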
A
Commit the SAST overrides to the main branch and commit those changes. Let's go back to Build and Pipelines and we can see that pipeline running now; click into the pipeline ID, and we see that the security stage has been created and all of the SAST jobs are running as soon as the pipeline kicks off. So it's running a lot more efficiently now, awesome. And then, once the build app job completes, we should be able to see the artifacts being populated here.
A
We'll let that run; I'm going to wrap us up with our slides here. We've got about six minutes left for today's workshop, so I'm going to go back here. This is just a reminder: if you're just getting started with the workshop, in the sandbox environment you have full access to the sandbox group, and the Ultimate subscription on the sandbox group, for four days. So we don't recommend you transfer this project from the sandbox environment until you've completed all the exercises and you feel
A
comfortable. The transfer instructions, if you wish to transfer this and save your work, are in the source project, the CI/CD Adoption Workshop; we've got the transfer project instructions in that issue. But again, don't do that now; you don't need to do that, and you don't need to be in any rush.
A
We actually have a few other optional issues here, walking you through security and compliance and more complex workflows, that my colleague Steve here has created as a convenience to you, for more exploration in setting up your GitLab pipelines within the hands-on sandbox.
A
Next, we reviewed the rules keyword and how to better control when jobs execute, and how failures can be allowed or not depending on the configuration of those jobs. And then finally, we enabled SAST scanning just now and took a look at how artifacts are generated and downloaded. So there was a lot to cover; we spent almost two hours getting everything introduced to you and getting hands-on with our workshop, so thanks again for sticking around. I also want to make sure that we can help you be more successful even after today's workshop.
A
So GitLab offers many different ways to maximize the value of your investment with GitLab. This is a really big change, going from Jenkins to GitLab, and so you're just getting started by learning about all the different capabilities that GitLab CI/CD offers. To maximize the investment through training, access to subject matter experts and best practices, we have these four different resources that I've listed. We've got a self-paced learning management system and a paid certification process,
A
if that's of interest. So Level Up is our learning management system; we've got courses that are a little bit more hands-on and longer-form than what we've done today in our workshop. We also offer private instructor-led training, with live access to a Professional Services team member to provide personalized training over multiple days for your organization. And similar to today's adoption workshop, we do offer monthly webinars and workshops.
A
These are introductory presentations and hands-on workshops across multiple use cases. And as I mentioned, there's a lot of security and compliance functionality that we offer within GitLab. So if you are moving your pipelines from Jenkins to GitLab, and you are interested in some of that functionality, improving your security posture and creating governance and compliance within your projects and your organization on GitLab, we do have that security and compliance workshop as well, so I encourage you to sign up for that.
A
And finally, you can engage with somebody like me and Steve and other members of the CS organization. So definitely look to schedule time with a member of the customer success team, and make sure that you reach out to your sales representative
A
if you know them. If not, no worries:
if you're a paid user of GitLab, you can always file a support ticket and see if your account qualifies for customer success engagements. And feel free to send us a chat message with your company email address, and we can reach out to you after today's call to see if you're eligible and set up a call with you as well. Another tool that I didn't mention here, but did allude to, is part of your support
A
if you're a GitLab Premium or Ultimate customer.
So our support engineers can help you navigate our documentation for CI/CD and our vendor templates; we do have a scope of support and how to get help. And as I mentioned, you can inquire with the support team to see if they can connect you with your
A
team, and they can schedule time for a customer success engagement if needed.
Well, thank you very much. We've got just a couple of minutes left here, but I appreciate you all hanging out with us and going through the hands-on exercises, and I look forward to connecting with some of you, if you're interested in some of those one-on-one conversations that we offer our customers as well. Thanks again, and have a great rest of your day.