Description
Watch the playback of this hands-on GitLab CI workshop to learn how it can fit into your organization!
We kick things off by going over the differences between CI/CD in Jenkins and GitLab, syntax requirements, the advantages of using GitLab, and how you can achieve the same outcomes in GitLab. Getting started with CI/CD in GitLab takes far less time than Jenkins tends to require, and your users can stay in a single platform.
We will then dive into how to build simple GitLab pipelines and work up to more advanced pipeline structures and workflows, including security scanning and compliance enforcement.
All right, we've got a pretty good crew in here already, so let's go ahead and start diving in. Today we're going to be covering some differences between Jenkins and GitLab.
That way you can get a sense of how to start translating your pipelines over to GitLab, and we also want to give you a chance to start playing with GitLab for yourself, and playing with the pipelines. During the course of today's workshop you're going to get provisioned an Ultimate subgroup under the GitLab Learn Labs namespace at gitlab.com, and that's going to give you access to all the features GitLab has available. So you can start to play around, and we would encourage you to do that.
We're going to have some fairly specific things we're going to do during the course of the workshop today. There are some optional things that you can pursue, and we'll be talking about those as we go along, but they're up to you. If you want to do them, you can, and if you don't, that's okay too. But we want to encourage you, once you get this subgroup provisioned, to go ahead and play around, do what you want to do, and experiment.
Maybe even do some research in our docs and see if you can try some things of your own. So let's go ahead and get started.
So today we're going to be talking about, again, the differences between Jenkins and GitLab with respect to how you configure pipelines. And by the way, for whatever it's worth, GitLab is super easy to get started with for pipelines. For one, you're not in a programming language like you are in Jenkins; we use declarative YAML, so you're essentially setting up a configuration file for your pipelines and putting commands in there that represent the steps you're going to want to do, for what we call jobs, but what traditionally in Jenkins are called stages.
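As a rough sketch of what that declarative configuration looks like (the job names here are made up for illustration), a .gitlab-ci.yml file might contain:

```yaml
# .gitlab-ci.yml: a minimal pipeline with two stages and two jobs
stages:
  - build
  - test

build_app:          # a job (roughly what Jenkins would call a stage)
  stage: build
  script:           # the commands to run, like Jenkins steps
    - echo "Compiling the application..."

unit_test:
  stage: test
  script:
    - echo "Running unit tests..."
```

Committing a file like this to the repository is all it takes for GitLab to start running the pipeline.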
So let's go ahead and just get going. My name is Steve Graham, I'm a customer success engineer, and by the way, I've got one of my peers on here, Chris Guytart. Chris is one of our senior customer success managers; in fact, he's the only one. Chris is a rock star, and he's here to help out and give us a sense of what makes sense as we keep going here.
By the way, I put my LinkedIn profile on this slide, and you're going to get a copy of this slide deck in an email tomorrow. You'll also get a link to a recording of today's session, so that you can review it or share it as you see fit.
So let's go ahead and start diving in. Again, tomorrow you're going to be getting an email with a link to the slide presentation used today and a link to the recording we're making, which gives you the opportunity to review it or share it with your peers. Now, if your account qualifies for a customer success engineer (these are the smaller accounts at GitLab), you should expect one of our customer success engineers to contact you next week to ask about any questions you might have, potential enablement sessions for your team, and to assist with any clarifications that may help you in your conversion to GitLab CI/CD. However, if you have an assigned customer success manager, you're not going to be hearing from a customer success engineer. So if you've got a regular assigned customer success manager that you have the opportunity to meet with, please reach out to them for the same thing.
We're going to start by talking about some of the GitLab platform advantages. One of the primary ones is reduced tool switching. Instead of people having to make their commits in GitLab and then go over to Jenkins and either manually run a pipeline, or rely on a trigger set up there that fires a pipeline for them, in GitLab pipelines all run in the same project where developers make their commits and merge requests, and all pipelines are reviewable from a project's pipelines page.
A
You
might
have
you
know:
production,
staging
and
Dev
in
you
know,
in
that
kind
of
a
circumstance
you
can
look
and
see
what
the
latest
latest
commit
was
that
was
pushed
out
out
there
or
you
know,
maybe
you
use
tags
in
production
and
then
you'd
be
able
to
see
that
specific
tag
attached
that
that
environment,
if
you've
got
an
ultimate
subscription
in
which,
by
the
way
is
our
top
tier
subscription
pipelines,
can
trigger
additional
merge,
request
approvals
based
on
security
test
outcomes,
and
this
is
really
neat.
It's going to go through setting up these policies, and it also gets to the next bullet point, which is that your sec teams can create compliance framework pipelines that they can enforce on projects if they want to. So you can actually run pipelines that do testing in these downstream projects if you want to. Now, this is the next thing: you don't have to build triggers into your pipeline.
A
Forget
lab
gitlab's,
just
got
them
built
in
and
you
regulate
these
triggers
with
rules
in
your
either
in
your
jobs
or
in
your
Global.
Workflow
rules
and
rules
are
an
evaluation.
You
know
they're
evaluating,
maybe
the
branch
that
you're
making
the
commit
on
or
if
it's
a
merge
request.
There's
several
other
things.
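For example (job and branch conditions here are illustrative), a rule evaluating the pipeline source or the branch might look like:

```yaml
deploy:
  stage: deploy
  script:
    - echo "Deploying..."
  rules:
    # run automatically for merge request pipelines
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    # also run for commits to the default branch
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
```

If none of the rules match, the job simply doesn't get added to the pipeline.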
A
So
any
push
on
any
branch
will
trigger
a
pipeline.
A
merge
requests
will
trigger
a
pipeline,
pull
requests
trigger
pipelines.
You
can
manually
trigger
pipelines
from
the
UI
on
the
pipelines
page
there's
a
run
pipeline
button
that
you
can
use
to
just
manually
trigger.
If
you
want
to
take
that
route,
we
also
have
the
ability
for
you
to
schedule
pipelines
it's
it's
so
differently
than
it
is
in
Jenkins,
but
he's
just
very
Crown
like
syntax,
and
you
can
set
up
pipelines
to
run
and
you
can
also
oh
Chris.
Are
you
seeing
my
screen.
We also have the ability to trigger pipelines with webhooks and with ChatOps, if you want to take that route. Now, triggers can also come from external sources, so you can trigger an independent pipeline in a project from a parent pipeline.
Now, it's also possible, using rules with regular expressions against the CI_COMMIT_MESSAGE variable, to inspect the commit message. Jocelyn, yes, this meeting is being recorded. It's actually possible to inspect the commit message, look for keywords, and trigger a pipeline from them.
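A sketch of that keyword-matching rule, with a made-up job name and keyword:

```yaml
docs_build:
  stage: build
  script:
    - echo "Building docs..."
  rules:
    # run this job only when the commit message contains "[docs]"
    - if: '$CI_COMMIT_MESSAGE =~ /\[docs\]/'
```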
The users who are triggering the pipelines have the ability to pull these pipeline files into their project. There are a couple of different ways that can happen, but all they have to have is read access to the upstream repository that has the pipeline definitions in it. So your DevOps teams can maintain these projects that serve as template repositories, and they don't have to worry about everybody being able to commit to them.
Some things to note about these pipeline template repositories: when you pull these files in and run them as pipelines in a downstream project, they're going to use that downstream project's variable context. You have the ability, and we'll be talking about this more as we go through the workshop today, to set variables in a whole host of different ways, and the variable context they're going to run in is the downstream project's, not the upstream's.
Again, in order for someone to be able to run that pipeline, they've got to have read access to that centralized repository, but that's all they have to have. And this centralized repository can hold a ton of different pipelines if it needs to: it can contain multiple pipeline definitions for different conditions that exist in the downstream pipelines, or for different types of projects. Maybe you've got Python projects and Node.js projects, and you create independent pipelines for each.
And you can use as many files as you need in that upstream project, because the users consuming them downstream can delineate the specific files that they need to pull in.
So projects can include pipeline files from external repositories in their own pipeline files: they can create their own .gitlab-ci.yml file and then use an include statement to pull in the upstream project's pipeline files if they want to. But for projects that don't want to maintain one, and we all know these exist, there are some developers who just don't want to engage in pipeline development at all, it's actually possible for you to go into project settings and delineate that the pipeline file for your project exists in that centralized repository, so that the developers in that project don't even have to create a .gitlab-ci.yml file at all.
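A sketch of the include approach (the project path and file names here are placeholders): the downstream project's own .gitlab-ci.yml pulls named files from a central templates project.

```yaml
# .gitlab-ci.yml in the downstream project
include:
  - project: 'devops/pipeline-templates'   # placeholder path to the central repo
    ref: main
    file:
      - 'python-pipeline.yml'
      - 'security-jobs.yml'
```

The settings-based alternative described above needs no file at all: an owner points the project's CI/CD configuration path at a file in the other repository.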
Anytime you bring a file into your project with include, using that syntax in the .gitlab-ci.yml file, it's going to run in the context of your project itself, so you don't even have to pass your variables to the included file; it's already running in that context. Pipeline jobs run independently of each other, and they get a fresh environment for every single one. This is unlike a Jenkins agent, where the stages run sequentially.
When you need to pass variables between jobs, say your job produces a variable and you need to evaluate that variable in the context of a downstream job, you can use dotenv files. These are an artifact that the upstream job leaves behind; the downstream job then requires it using the needs or dependencies keyword, and it can evaluate the variable in the context of its own runtime. And by the way, GitLab runs a cleanup after every single job to ensure a clean working environment for the next one. The idea is that the runners are highly disposable and highly portable, and runners can run for any project.
The use case that's most commonly seen here is: maybe you've got a deploy job to production, and maybe this runs in the context of a merge request, or perhaps a merge commit going into your default branch. You want to make sure that all the tests look good from your perspective, and maybe you even want to add some approvals onto that, so that somebody can approve it being released to production. In that case you configure a manual job in the context of a commit.
A
If,
if
you
will
they're
going
to
have
a
play
button
on
them
all
manual,
jobs
are
going
to
have
a
play
button
on
them
by
the
way,
but
any
developer
can
run
these
manual
jobs.
So
the
way
that
you
get
around,
that
is,
you
use
things
like
protected
branches
in
pipelines,
protected
branches.
Only
users
who
are
allowed
to
push
or
merge
to
that
protected
Branch
can
run.
A
Those
can
run
can
run
the
manual
jobs
if
the
job
is
run
in
a
protected
environment
which,
by
the
way,
is
a
setting
that
is
in
your
projects.
You
can
actually
create,
like
you
want
to
make
production
a
protected
environment.
You can also add deployment approvals. Now, these are independent of who can click on the run button. You can add two or three approvals if you think that's appropriate, or maybe just one, and you can delineate who those users are that have to approve it. But you can also pick the people who are allowed to run the jobs in protected environments, too.
Now let's talk about some of the differences in syntax here, just for a few minutes. Jenkins agents are what we call runners in GitLab, but they're a very different kind of concept. Agents tend to be highly customized for projects, which is not unusual at all; runners tend to be highly disposable, so they can be reused over and over again. You have the ability in Jenkins to create a post set of steps, or a post job, if you need to; you can support that with additional stages in GitLab.
But remember, the GitLab runners run cleanup jobs already, so if you're using post steps just to do some cleanup, there's not much left to clean up. And then Jenkins stages are something I tend to map to what we would call jobs in GitLab, because they tend to have steps enumerated underneath them.
We actually have a keyword called stages too, but a stage to us is a container for jobs. So when you see a GitLab pipeline, you'll see vertical columns, and jobs will be listed in each one of the columns; those columns are stages to us. So just be aware of that difference. And then the steps you delineate in a Jenkins stage would become a script in a job in GitLab's pipelines.
So, script is an array in YAML, and you can put individual commands in there so that they all get executed. And then environment, to us, is just variables. Variables can be declared in a job, if you want them specific to a job, or they can be declared globally for an entire pipeline if you want to take that route; either one is sufficient to get the job done. And then the options that you have in Jenkins we equate to job keywords: there are keywords that set the properties for a job, and you can use those to do the same kinds of things you could do with options in Jenkins.
It's also possible for you to create a list of options so that people get a drop-down list of things they can select from; that's supported in much the same way it is with Jenkins parameters.
Now, with respect to triggers and cron: GitLab is tightly integrated with Git SCM, so polling options for triggers are not needed, and we support a cron syntax for scheduling pipelines. We're not going to go through that today, but by the way, when you get a copy of this particular deck that I'm working off of today, you'll see there's just a ton of links in it that you can follow and look at.
Those cover the various things and how they work in GitLab. For awareness, if you're looking at the slide that I'm on right now, the links underneath the Jenkins column go to the GitLab documents that describe how to migrate from Jenkins to GitLab, and those tend to explain the differences that we're talking about here. The ones on the left go to our feature pages, so that you can look at those directly and understand how to work with that specific feature.
And this is the last of these. With respect to tools: I've looked at the tools in Jenkins, and there are only a few of them right now, primarily supporting Java, but we don't have any kind of tools directive in GitLab. Best practice in GitLab is to create containers of your own that already have these libraries pre-loaded in them; then you can store those in GitLab and consume them in your pipelines if you want to. It makes a very easy and convenient way for you to work.
You can stub out these containers, create your own Dockerfiles, whatever you need to do, and store them in GitLab for use during the run of the pipeline. With respect to input, which is similar to the parameters keyword, it's not needed, because a manual job can always be provided runtime variable entry. Now, GitLab does support a when keyword.
It's used to indicate when a job should run. This is specific to jobs in this case: when a job should run, or whether it should run in case of failure, which is a special use case that we'll talk about more as we get going. Most of the logic for controlling pipelines can be found in our very powerful rules system, which is linked here in this document.
Once that subgroup has been provisioned under our GitLab Learn Labs namespace, again, it's an Ultimate subgroup with all of our features available to it, but you're going to be the owner of it, so you'll have complete access to do everything you need to do in there. You're going to have access to this workshop environment, the subgroup you're creating today, for four days; we routinely set these up so that they're available for two days for most of our workshops.
This is our agenda for today. We're going to go through lab setup, then setting up a simple pipeline. Next we're going to move on to execution order and directed acyclic graphs, which is a unique feature in GitLab that's pretty exciting. Then we're going to talk about rules and how to deal with failures, and we're going to deal with instantiating GitLab SAST jobs, which are available regardless of subscription.
So that's available all the way down to our free tier, and if you want to set up SAST, which is just static application security testing, you can. We're also going to talk about artifacts, how to delineate them and then how to require them. And at the very end, we're going to talk about how to transfer the project. We won't touch on that for very long, but it's available as a step in our process today.
So let's go ahead and get started. Today you're officially part of a brand new startup that is creating a public leaderboard for the hit new racing game, Tanuki Racing; we always like to take a chance to put our logo out there. Your company has recently switched over to using GitLab for CI/CD and has tasked you with learning about different pipeline capabilities.
So please don't miss out on the next step, since the group you request will have full access to all of our new AI features for four days. Just for awareness: if you have a namespace with a subscription on it at gitlab.com, you can request that AI be enabled in that namespace, and it is enabled in this particular namespace.
To register for this workshop and provision it, again, you'll need to have a gitlab.com account, and once you get there you're going to get a page that looks like what we're seeing on the right side of my screen. You're going to need to click on the redeem invitation code button, and then you're going to use the registration code displayed on the screen, which Chris will be pasting in in just a minute.
Now, once you click on that redeem button you're going to get this page, and this is where you paste in your invitation code; then you're going to click on redeem and create account. And by the way, for the very next step you're going to need your username at gitlab.com. Really important here: we don't want to include the @ symbol. If you go to gitlab.com while you're logged in and click on your avatar in the header, you'll be able to see your username displayed down below it.
The GitLab URL that you see displayed down here: really, either bookmark that or copy it out to a document you may be keeping notes in. But if you forget it, you can come back and go through this process again, and you will get provisioned exactly the same one. It's not going to create a new subgroup for you, because the subgroup that it creates is a hash value.
A
That's
a
combination
of
this
registration
code
in
your
username
at
git
lab,
but
when
you're
ready,
you
can
just
click
on
my
group
and
it'll.
Take
you
to
your
group
and
that's
going
to
look
a
lot
like
this.
Now
you
can
see
my
test
group
Dash
and
then
a
weird
string.
It's
just
that's
just
a
hash
that
follows
the
the
dash
and
it's
Unique
to
you.
So it's going to be different than what you're seeing on the screen right now. But if by chance you end up with a 404, something went wrong, and you'll need to go back and start over again. Most commonly what happens is that people put in the @ symbol when they're entering their username in the appropriate field.
So just go back, do it again, and you should end up in the right spot. So again: go back to gitlabdemo.com, put in the invitation code that is assigned to our workshop today, put in your GitLab username, and provision the training environment, and hopefully you'll be back at your provisioned test group.
Let's take a quick minute and go through what we just talked about. I'm going to be using a split screen today, just so that I can show you multiple things at once, but you can see I've gone to gitlabdemo.com. I'm not logged in, by the way, I've just hit the URL. I'm going to redeem the invitation code now; let me capture this real quickly.
And you're going to get this page. Again, seriously, bookmark this URL when you follow it, or copy it and put it into a document that you may be keeping notes in. But when you're ready, just click on My Group, and it's going to send you to your group that's already been provisioned for you. And again, you're the owner of this group, so you can do anything you want to with it.
A
Let's
go
back
to
our
agenda
real
quickly.
Next,
we're
going
to
talk
about
setting
up
a
simple
pipeline,
we're
at
this
point
and
we're
gonna
probably
end
up
just
a
little
bit
shorter
than
two
hours.
We've
got
provision
today,
so
we'll
just
keep
going
here
before
fully
pushing
out
the
application.
Your
team
wants
to
test
a
few
different
types
of
pipelines
to
see
what
fits
your
needs
best.
First
task,
your
product
manager
gives,
you
is
create
a
simple
pipeline
that
builds
and
tests
the
bracing
application.
Chris will paste this into our chat, but we're going to need to navigate to this project, and then you're going to need to navigate to the issues in that project. And again, this should be in a separate window.
Hopefully you've got a large screen and can work with two regular-size windows; if you don't, and you need to use the split-screen approach, just do whatever your operating system requires to do that. Again, we're going to be working in split screens today, so we can work with the source project we're going to be working from. It's got the instructions in it. We're going to fork it, and when we fork it, it's going to copy the code, but it's not going to copy the instructions that are in the issues, so just be aware. We're going to start by going to this source project and clicking on the fork button in the upper right; then you're going to have to select your provisioned Ultimate group under GitLab Learn Labs.
The primary reason for that is we really want you to stay in a space where we can potentially help out if we need to. So pick your session there, and then click on fork project down at the bottom.
And then, as soon as your project gets forked, you're going to need to remove the fork relationship. This will keep it from potentially creating merge requests going into the original project, which you're not going to have access to. So we're going to go to Settings, then General, scroll down to the Advanced section, expand that, then scroll down to remove fork relationship and click on that button.
You know what, I'm going to go ahead and do this here in the right-hand browser, just a quick second.
Now again, we're going to fork this source project, and then we're going to need to put it into our group. By the way, I have access to so many groups in GitLab, it's just ridiculous.
We've got test A and test B in the test stage, and then we've got a deploy job in the deploy stage. Jobs run independently, sometimes on different runners. It's possible to subvert this: if you want to use tagged runners and you only have one runner, that's possible to do if you really want to, but disposability of jobs is super important in GitLab if you want to follow best practices. So jobs run independently, sometimes on different runners.
All jobs in a stage have to complete successfully before proceeding to the next stage, so the build job has to be completely done before test A and test B will start to run (there are ways to subvert that, which we're going to talk about), and then the deploy job can't run until test A and test B are done.
Now, what we've got here in the code block is a job that we're calling production, and that's the way it's going to show up inside of our stage. You can see the stage: deploy declaration there, but it's up to you to name your stages anything you want. This particular job has a before script.
Before scripts are most commonly used to pull in libraries and similar things that might be needed. In that kind of scenario, your best bet is really to start creating your own Docker images: create a Dockerfile, build it, store it in GitLab, and then pull it down to run your pipelines in. But you do have this before-script capability.
The important thing to realize about the before script is that it runs in the same shell as the main script, and the script is what you think of as steps in Jenkins, so it's going to do the same thing. It's a YAML array, which you can tell by looking at the dash symbols that follow the declaration of the YAML key.
So the before script runs in the same shell that runs your script commands, which means that if the before script has a failure, that's going to fail your job. We're going to talk about how to deal with failed jobs as we go through this workshop today, too. But you also have the ability to add an after script.
This would typically be something that does some cleanup, whatever you need to do after running your steps, and it runs in a separate shell. So if you have a command that fails in that particular section, it's not going to fail your job in the pipeline.
Now, GitLab runners run all the jobs you define in a pipeline. Again, they can be tagged so that specific jobs will be run on certain runners, and you might have a real use case for that: your team might be developing firmware, and you might want to set up a shell-based runner that has all the libraries loaded to be able to load firmware onto an attached device hanging off a USB port.
That operation takes what it takes, right? But jobs are typically picked up within about five seconds, sometimes longer depending on how busy the runner fleet is for you. If you're self-managed, you're going to be managing your own runners; if you're not self-managed and you're on gitlab.com using the shared runners, it's going to be subject to the availability of the runners, but we've got pretty good availability in our runner fleet there.
The default name for these pipeline files is .gitlab-ci.yml. You can change that if you want to in your project settings; I recommend that you don't, just because this particular name is very well understood and known in GitLab. But you can see that we've got two stages here: a build and a test stage.
We've got the rest of this, but these are all global directives here. image just sets a default image for our jobs to run in, and that's a Docker image; then we're delineating a cache key to be used on our runners. And then we've got a build app job defined and a unit test job defined, and that's all we've got in our pipeline right now.
So in the unit test job, we want to use the after_script keyword to echo out that the build has completed. This is just to play around with the after script, so you get a chance to see that it executes commands as well. To edit the pipeline, we need to click edit in the pipeline editor, and you can see that we're actually using the pipeline editor in this case. We do have a Web IDE if you want to use it, but the pipeline editor has some real advantages.
So now it's giving us an image of what the unit test should look like, and that is in fact what mine does look like. Once you've added the code, you can click commit changes; let's go ahead and do that. That's going to immediately trigger our pipeline, and you can see "checking pipeline status" up at the top of the screen.
A
It's
going
to
give
me
a
link
to
the
pipeline,
so
I
can
go,
take
a
look
at
it
if
I
want
to
so,
let's
go
ahead
and
do
that
real
quickly,
and
what
you
can
see
is
that
the
build
app
is
running
right
now.
The
unit
test
is
this:
you
know
grayed
out
circle
with
a
dot
in
the
middle,
which
just
means
it's
not
eligible
to
run.
Yet
it's
not
going
to
be
eligible
to
run
until
the
build
app
is
done.
We're not going to wait for this pipeline to finish; we can go back and check it later if we want to. But the goal, per the instructions, if you want to do this for yourself, is that if you were to click on any of these jobs, you'd be able to see the job log.
So let's talk about some concepts real quickly. We can adjust the execution order for pipeline efficiency if we want to. In this pipeline that you're seeing down here, you can see that these two jobs have the grayed-out circles with the dots in the middle, which just means they're not available: jobs in the test stage execute after all jobs in the build stage are completed. But our desired state is different.
We want to add a code quality job, and it does not need the results of a build, so it can execute in parallel; the code quality job, specifically the one that we ship with GitLab, has the ability to execute directly on the code. It doesn't require the build at all, and we want to keep the code quality job in the test stage. So we just want to subvert the processing order here.
So we use needs with an empty array: we declare needs, which ordinarily is an array that we would put job names into, and we just make it empty. That means this job is eligible to run as soon as the pipeline starts, so at the beginning of the pipeline execution, both jobs are now in the running state.
Now, we have the ability to do advanced needs, and this gets into directed acyclic graphs: literally, you'll see lines going through a pipeline showing dependencies between jobs. We use the needs directive in this case to allow jobs to run just as soon as the jobs they depend on have completed.
So in this particular case we're doing multiple builds: we've got build A and build B, which might be iOS and Android. Test A is eligible to run as soon as build A is done; it doesn't have to wait for build B. And deploy A is eligible to run just as soon as test A is completed. You can see the same thing in the build B sequence of jobs, too. It's actually possible for you to build stageless pipelines now.
A
It
allows
the
Deeds
keyword
to
be
used
in
the
same
stage
and
by
the
way
that
works
without
a
stageless
pipeline,
but
previously
this
could
only
be
used
between
jobs
and
different
stages.
We've
recently
made
changes
to
that
in
gitlab
in
our
15.
extreme,
and
so
now
jobs
could
be
dependent
on
jobs
that
are
in
the
same
stage.
A
So why is this useful? Stageless pipelines make your pipeline more efficient, because needs implicitly configures the execution order.
It's
faster
to
write,
more
efficient
pipelines
with
less
cycle
time
now
to
be
very,
very
Frank,
even
if
you
put
all
your
jobs
into
stages,
but
then
you
take
the
time
to
create
these
needs
dependencies
all
the
way
through
for
every
single
job,
you're
going
to
get
the
same
effect.
A
So,
first,
let's
talk
about
execution
order.
If you're coming right from the last step, you should still be on the pipeline page, but if you navigated away, you can just go back to your project, use the left-hand navigation menu to go to Build > Pipelines, and click the hyperlink for the most recent pipeline.
A
And
by
the
way,
you're
seeing
a
very
odd
layout
here,
that's
because
my
screen
is
so
narrow,
but
you
can
see
that
our
pipeline,
which
is
the
only
one
that
I've
run
in
this
project,
has
passed
and
if
we
click
on
this
pass,
which,
by
the
way
it
could
be,
this
could
be
failed.
It
could
be
a
warning
if
we've
got
jobs
in
there.
A
They
ran
sequentially,
and
then
the
thing
we
were
supposed
to
do
from
the
last
step
was
to
look
and
find
this
Echo
that
we
put
into
the
after
script.
So
we
added
the
after
script
capability.
We
added
it
and
you
can
see
the
command
listed
right
here.
the echo saying the build A job has run and the build B job has run; it's just been echoed out.
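A small sketch of the after_script we added (the job name and echo text here are illustrative):

```yaml
build-a:
  stage: build
  script:
    - echo "building A"
  after_script:
    - echo "build A job has run"   # appears at the end of the job log
```

after_script commands run after the main script finishes, even when the script fails, which is why the echo shows up at the bottom of the job log.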
A
Now one way is to go through Build > Pipeline editor, but it's also possible to do that from here.
We
can
edit
this
file
in
the
pipeline
editor.
Seeing as this is the pipeline file for this project, it's the only one that's going to show us the option to edit in the pipeline editor,
but
let's
go
ahead
and
follow
the
path.
It's
telling
us
to
go
build
pipeline
editor.
A
And
you
can
see
built,
the
build
app
is
still
running,
but
now
the
code
quality
is
already
done
and
the
unit
test
is
running
right
now.
Now
we
put
in
a
very
simple
code
quality.
We
just
put
a
stub
job
in
there.
It's
not
the
actual
code
quality
test
that
ships
with
gitlab,
but
we
just
put
it
in
there.
So
we
could
illustrate
how
this
works.
A
We're
going
to
create
a
new
deploy
stage
that
we're
going
to
put
a
job
into
and
then we're going to add a directed acyclic graph. We get this pipeline that's fairly extensive,
so
we're
going
to
capture
all
this
code.
That's
in
this
next
section
down
we're
going
to
actually
append
that
onto
our
pipeline.
A
Now
our
job
is
pending,
which
just
means
it's
waiting
for
a
runner
to
pick
it
up,
but you can see that build A and build B are running right now, and if we wait just a minute, you can see several jobs fire off all at once there.
So
the
next
set
of
Runners
checked
in
picked
up
the
jobs
that
were
available
and
started
running
on.
A
If
we
go,
if
we
click
on
needs
up
here
at
the
top,
we
can
actually
see
what is graphically represented as a directed acyclic graph.
Now
it's
not
going
to
include
any
jobs
that
don't
have
needs,
defining
them
in
some
way
and
have
some
dependency
chain
to
show.
But you can see that test A is dependent on build A, and deploy A is dependent on test A and build A.
A
So,
let's
move
on
to
the
next
step,
we're
going
to
talk
about
rules
and
failures.
Rules are just a way to control when jobs run, and by the way, rules have several different applications in GitLab. If you want to put your rules in jobs, you can absolutely do that, and that's the most common way of building pipelines, so that jobs have independence.
You
know
you,
you
might
have
half
a
dozen
rules
in
any
specific
job,
so
they
can
execute
under
certain.
A
You
know
certain
evaluated
circumstances,
and
maybe
they
even
have
the
default
that
just
run
always
under
some
circumstances,
but
you
also
have
the
ability
to
deal
with
failures.
Now
you
might
have
a
test
something
along
those
lines
that
is
stuck
in
your
pipeline.
You
know
it's
some,
some
tests
that
your
team
has
put
together
and
they
just
really
need
to
get
through
it.
A
All
right
so,
as
you
come
back
to
the
team
and
show
them
your
new pipeline, notice that one of your test jobs is failing. Now, the normal default behavior: if a job fails, it's going to stop the pipeline. Any jobs that are eligible to run after that job fails, and that haven't already started running, are not going to be eligible to execute.
A
So after taking a look into the job, you've determined that you don't actually need to enforce it passing, but you still want to be able to see the test results, again by clicking on the job name and going through to the job log. This section will show you how to use rules and failure clauses in your GitLab pipelines, starting with allowing job failure, and what we're seeing on the left here is the configuration.
A
We
can
see
there's
a
unit
test
at
the
top
and
it's
got
allow
failure
colon
true,
so
the
default
for
gitlab
is
allow
failure
false
you
don't
have
to
put
allow
failure
in
there
at
all
if
you
want
your
jobs
to
stop
your
pipelines,
but
if
you
want
for
some
reason
to
tolerate
a
failed
job,
you
need
to
put in allow_failure: true.
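As a minimal sketch (the job name and script are illustrative):

```yaml
unit-test:
  stage: test
  allow_failure: true    # the default is false; true means a failure won't stop the pipeline
  script:
    - ./run_tests.sh
```

With allow_failure set to true, a failing unit-test shows an orange warning instead of a red failure, and later stages still run.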
A
Now,
what
you're
going
to
get
when
you
do
that
is
you're
going
to
get
a
test
that
looks
like
this
now.
A
test
that
fails
normally
is
a
red
circle
with
a
red
X
in
the
middle
of
it
and
again
it
just
stops
the
pipeline,
so
this
deploy
job
will
not
be
eligible
to
run
anymore.
A
So
again,
you
can
actually
have
jobs
that
don't
have
rules
at
all
they're,
going
to
run
for
every
single
pipeline
trigger
unless
you're,
using
workflow
rules
where
you
can
actually
shut
the
pipelines
down.
If
you
want
to,
but
a job can just have when: on_success, when: delayed, or when: always,
and
it'll
be
included
in
every
pipeline,
if
no
rule
is
defined
and
no
when
Clause
is
specified,
remember
that
when
on
success
is
the
default,
so
you
can
just
create
the
job
it's
going
to
run
in
every
pipeline.
A
So
if
we
want
to
talk
about
what
a
rule
looks
like,
it's
certainly
possible
to
do
an
evaluation
in
the
context of a rule. I want you to notice that rules is an array: you've got these dashes, each on a separate line, in this particular case. And by the way, rules always evaluate prior to the script; rules run in GitLab itself,
and
then,
if
the
job
is
eligible
to
to
run
in
that
particular
pipeline,
it
gets
added
to
the
to
the
queue
of
jobs,
that's
eligible
and
ready
for
runners
to
pick
up.
A
So
in
this
particular
case,
we're
looking
at
the
pipeline
Source
variable
and
if
that
equals
web,
this
job
is
going
to
run
now
notice.
It
doesn't
have
any
other
rules.
So
it's
going
to
have
to
match
that
specific
circumstance
for
this
job.
To
run,
but
the
important
thing
to
realize
about
this
is
that,
if
statements
can
reference
variables,
GitLab
has
a
very,
very
long
list
of
predefined
variables
like
look
at
things
like
you
know,
is
this
branch
of
default
branch?
Is
this
a
merge
request?
Is
it
a
pull
request?
A
You
know,
is
this
being
triggered
by
the
API?
Is
it
being
triggered
by
the
manual
pipelines
page,
which
is
what
this
particular
one
is
here?
Web
refers
to
the
manual
pipelines
page,
so
we've
been
going
through
to
build
and
pipelines,
and
looking
at
that
page
and
there's
a
run
pipeline
button
at
the
upper
right
that
if
we
were
to
click
on
and
run
a
pipeline,
this
rule
would
be
true,
so
this
job
will
only
run
when
the
pipeline
is
kicked
off
from
the
web
form.
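A sketch of that rule in a job (the job name is illustrative; CI_PIPELINE_SOURCE is a real predefined variable, and "web" is its value for the Run pipeline form):

```yaml
web-only-job:
  script:
    - echo "only runs when the pipeline is started from the Run pipeline web form"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "web"'
```

Since no other rule matches, this job is simply left out of pipelines triggered any other way (pushes, schedules, the API, and so on).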
A
Maybe
you
only
want
to
run
them
on
your
default
Branch?
Maybe
you
only
want
to
run
them
in
the
merge
request.
This
is
the
way
for
you
to
make
sure
that happens,
so
the
if
Clause
lets
us
evaluate
an
expression.
You
know
this
can
be
if
a
variable
equals
some
value
and
there's
lots
of
options
on
that,
and
then
we
have
the
ability
to
look
for
changes
in
the
code
too.
A
So
if
the
commit,
if
the
commit
has
changes
to
a
specific
file
or
to
the
to
a
set
of
files
in
a
certain
subdirectory,
you
know
that's
a
form
of
a
rule
out
by
itself
and
we
also
have
the
ability
to
look
for
exists.
So maybe we put in an exists keyword: our devops team is managing these pipelines, and they're
A
Looking
for
a
Docker
file,
if
Docker
files
there,
they
want
to
fire
up
a
job
that
builds
the
docker
file,
So the operators are what you think they are if you have any experience with programming: equals equals just means
You
know
these
two
things
match
not
equals
means
they
don't
match,
and
then
you
can
see
the
tilde symbols, equals-tilde (=~) and not-tilde (!~); those
are
for
use
with
regular
expressions
and
they
evaluate
to
be
the
same
thing
as
the
two
above
them,
but
it
just
delineates
it.
A
A
regular
expression
is
going
to
follow
after
during
the
course
of
this
rule
and
by
the
way
you
can
use
regular
Expressions
anywhere
in
your
gitlab
role,
you
can
use
it
against
variables.
You
can
use
it
against
commit
message,
just
whatever
makes
sense
for
you,
then
the
two
at
the
bottom
and
and
the
or
or
symbols
are
what
you
think
they
are.
A
You
know
we
can
evaluate
ones
that
we
can
look
at
one
environment
variable
on
the
left
side
of
that
end
in
and
make
sure
it
meets
something,
and
then
on
the
right
side
of
that
end
in
both
of
those
got
to
be
true.
For
that
rule
to
pass
and
the
or
is
the
inverse
of
that,
we
can
look
at
one.
You
know
one
in
one
variable
make
sure
it
matches
something
or
another
variable
make
sure
it
matches
something
in
either.
One
of
those
are
true,
then
you
know
that
that
rule
succeeds.
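A sketch of those operators combined in rules (variable names other than the predefined ones are illustrative):

```yaml
rules:
  # both sides of && must be true for this rule to match
  - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $DEPLOY_ENV != "none"'
  # with ||, either side matching is enough; =~ applies a regular expression
  - if: '$CI_COMMIT_MESSAGE =~ /hotfix/ || $FORCE_RUN == "true"'
```

GitLab evaluates the rules top to bottom and stops at the first one that matches.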
A
Job attributes are when, of course,
which
has
options
listed
on
the
right
there?
It
just
tells
gitlab
what
circumstances
that
job
needs
to
run
in
and
then
allow
failure
which
again
defaults
to
false,
but
you
can
set
it
to
true
if
you
want
to
make
sure
that
something
doesn't
kill
your
pipeline
and
then
start_in is a special attribute, and it's for the when: delayed option.
So
gitlab
has
the
ability
for
you
to
run
a
delayed
job
if
you
want
to
during
the
course
of
a
pipeline.
A
You
know
who
knows
what
the
use
case
on
this
might
be,
but
maybe
it's
some
kind
of
a
cleanup
job
that
tears
down
an
environment
or
something,
and
if
you're
going
to
use
when: delayed, you also have to use the start_in attribute to tell GitLab: okay, start this in an hour, start it in two hours, or start it that day.
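A sketch of a delayed job like the cleanup example mentioned above (the job name and script are illustrative):

```yaml
teardown-review-env:
  stage: deploy
  script:
    - echo "tearing down the review environment"
  when: delayed
  start_in: 1 hour   # required whenever when: delayed is used
```

The job sits in a scheduled state with a countdown and only becomes eligible to run once the start_in interval has elapsed.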
A
And when is the job not created in a pipeline? The job is not included in a pipeline if none of the rules defined for the job evaluate to true, or if a rule evaluates to true but has a clause of when: never, which, by the way, is a way of creating a negative rule: in this circumstance, never run this job. It's also not included if no rules are defined and a when: never clause is specified. So in this particular job that we're seeing on the right side,
A
If
the
CI
pipeline
Source
equals
merge,
request
again
we're
not
going
to
run
this
job,
if the CI pipeline source is something that's scheduled from the UI, we're not going to run this job. In any other circumstance, this job is going to run. Now,
a
rule
doesn't
have
to
have
an
if
clause
in
it.
It
can
just
have
a
when
Clause
like
you
see
here
at
the
bottom.
A
All
right,
so
this
job
here
on
the
right
has
got
a
play
button
on
it,
that's a job that has a when: manual statement in it. So it's going to wait for someone with the right permissions to click that play button. And then again, this test B job, the orange circle with the exclamation point in it: that job has an allow_failure of true,
so
the
pipeline
proceeds,
even
though
it
failed-
and
you
can
see
the
deploy
job
here
on
the
right.
It's
simply
got
a
when:
manual
as
a
job
property.
A
So
some
more
rules
examples.
You
know
we
could
have
multiple
rules
in
this
particular
case.
If
CI
pipeline
sources
set
the
merge,
request,
event
or
scheduled,
the
job
is
executed,
but
it
again
gitlab
is
going
to
go
through
and
it's
going
to
evaluate
these
rules.
So
if
there's
no
rule
that
just
matches
everything
else,
it's
not
going
to
run.
So
this
job
is
only
going
to
run
for
a
merge request
event
or
if
it's
a
scheduled
pipeline.
A
So
multiple
rules,
and
when
and
the
two
examples
that
you
see
at
the
very
top
here
if
pipeline
Source
equals
merge,
request
event
if
pipeline
Source
equals
schedule,
those
are
examples
of
negative
rules,
so
gitlab
is
going
to
go
through
and
if
it
matches
those
it's
going
to
stop
it's
not
going
to
keep
going
and
looking
to
see
if
it
can
find
a
matching
rule
that
about
that
evaluates
again
after
that,
because
one
of
those
was
true,
so this job will execute in any pipeline where the pipeline source is not set to merge request event or schedule,
and
the
when: on_success
is
just
the
default
rule.
It
doesn't
need
to
have
any
kind
of
evaluation
in
the
job.
It
could
just
be
a
simple
when: on_success
and
then
that,
for
any
other
circumstance
that
a
pipeline
can
be
run
in
for
your
project,
this
job's
going
to
be
included.
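A sketch of that negative-rule pattern (the job name is illustrative):

```yaml
my-job:
  script:
    - echo "runs everywhere except MR and scheduled pipelines"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: never                 # negative rule: match and stop, don't run
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: never
    - when: on_success            # catch-all default for every other pipeline
```

Because evaluation stops at the first matching rule, the two when: never rules short-circuit before the catch-all is ever reached.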
A
All
right,
so
this
is
an
example
of
using
changes
along
with
if.
You
can
see
that
we're
evaluating
a
variable
up
here,
making
sure
it
matches
some
value.
This
could
be
if
CI pipeline source equals merge request
again
and
then
we're
using
the
changes
keyword
here.
So
we're
looking
for
changes
in
the
single
file,
Docker
file
or
changes
in
the
Docker
scripts
subdirectory
any
of
the
files
included
there
and
then
it's
a
manual
job.
A
So we conjoin if, changes, and exists clauses by using them in the same rule;
so
these
are
being
anded
together.
They've
all
got
to
be
true.
A
And
in
this
case
again,
if
the
variable
equals
some
string
value,
the
Docker
file
or
any
file
with
Docker
scripts
directory
has
changed
then
the
job
runs
manually.
Otherwise,
the
job
isn't
included
in
the
pipeline.
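The rule described above can be sketched like this (file paths and the job name are illustrative):

```yaml
docker-build:
  script:
    - echo "building from the Dockerfile"
  rules:
    # all clauses in one rule entry are ANDed together
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      changes:
        - Dockerfile
        - docker-scripts/**/*
      when: manual
```

Only when the pipeline is for a merge request AND the commit touched the Dockerfile or something under docker-scripts/ does the job appear, and even then it waits for a manual click.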
A
Now
there
is
just
a
ton
of
places
that
you
can
set
up
variables
for
gitlab
Pipelines.
They
can
be
predefined
environment
variables,
and
this
is
something
you
set
up
in
your
project
settings.
You
can
create
variables
that
are
for
specific
environments,
and
you
can
also
create deployment variables, and there's a few different ways to get those;
you
can
Define
job
level
variables
in
your
gitlab.ci.yml
file.
You
can
also
define
global
variables
which
again
apply
to
every
single
job.
A
If
you're
self-hosted,
it's
possible
for
your
GitLab
admins
to
create
instance,
level
variables
that
just
apply
everywhere
in
your
entire
instance
of
gitlab,
and
if
your
project
exists
below
a
group,
groups
can
have
variables
defined
as
well
and
they'll
be
inherited
down
through
their
subgroups
in
their
projects,
and
then
the
project
itself,
it's
possible
to
find
variables
Now notice that the instance, the group, and the project have all got protected variables. Protected variables are a form of variable that is going to be masked if it's echoed out in the job log.
A
So it's not going to be shown there. This lets you put in protected variables that aren't going to be exposed in your job log, and you know this is definitely preferred to committing some kind of secret into your repository, right?
We
don't
want
it
to
be
in
the
repository
where
anybody
who
checks
the
repository
out
can
see
that
variable,
so
we
put
them
into
protected
variables
in
the
project
settings
or
the
group
or
the
instance,
and
then
CI
CD
pipeline
trigger
variables
scheduled
pipeline
variables
in
manual
pipeline
run.
A
Variables
have
got
the
highest
priority
and
by
the
way,
the
priority
for
these
goes
from
bottom
to
top.
So
just
know,
and
these
are
delivered
off
to
your
CI
jobs-
they're also used during the evaluation of
rules
in
gitlab
itself.
So
just
be
aware,
so
let's
go
ahead
and
dive
into
that.
A
Go
ahead
and
move
on
from
here
now
we're
at
about
an
hour
and
20
minutes
in
so
let's
go
ahead
and
let's
take
a
quick
break,
we'll
come
back
at
31
minutes
after
the
hour,
but
please
take
a
minute:
go
get
some
coffee!
Take
a
quick
bio
break
whatever
you
need
to
do
and
we'll
be
back
at
31
minutes
after
the
hour.
A
So
next
we're
going
to
talk
about
Security
application,
I'm,
sorry,
static
application,
security
testing
and
how
to
deal
with
artifacts
we're
almost
done
for
the
day
we're
going
to
go
through
this
pretty quickly
about
transferring
the
project
and
then
talk
a
little
bit
about
the
optional
exercises
you
can
go
through
if
you
want
to
so
after
you
fix
your
pipeline
to
run
smoothly
again
an executive
stops
by
to
check
on
the
progress
they
want
to
make
sure
that
they
are
taking
full
advantage
of
all
the
features
GitLab
is
offering
like
security
scanning
plus
artifacts
and
ask
if
you
can
demo
this
in
a
pipeline
during
the
next
stand
up
now,
how
do
you
get
SAST
from
gitlab?
A
The
way
you
do
it
is
by
putting
this
include
statement
out
into
your
Global
namespace
in
your
dot
getlab.ci.yml
file,
and
so
this
is
not
part
of
a
job.
This
is
just
part
of
your
Global
namespace
you're,
just
including
a
templated
job,
and
by
the
way
the keyword template means that it ships with GitLab. So anytime you want to include a job that comes with GitLab, you can do it using
A
This
include
template
and
by
the
way,
regardless
of
subscription,
you
have
available
to
you
right
now:
secret
detection,
container
scanning
and
security
static
applications,
security
testing,
so
those
jobs
are
available,
regardless
of
your
subscription
level.
Of
course,
ultimate
subscriptions
have
a
whole
bunch
more
available
to
them.
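A sketch of those include statements in the global namespace of a .gitlab-ci.yml (these are the documented template paths for the scanners mentioned above):

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml
```

No job definition of your own is needed; the included templates add their scanning jobs to your pipeline automatically.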
A
So
what
is
the
template?
It's
a
way
to
share
CI
and
CD
capabilities
with
other
teams.
in your org, it's a way to consume CI/CD capabilities from other teams in your org, and it's the way that GitLab engineering provides capabilities, via templates.
So
the
important
thing
to
realize
about
templates
is
there's
nothing
magic
happening
in
templates.
A
GitLab
is
open
source
for
our
community
Edition
and
open
core
for
our
Enterprise
Edition.
So
you
go
and
look
at
these
jobs
anytime.
You
want
to
they're
available
in
the
gitlab
main
repository
on
gitlab.com,
so
they're
in
there.
They
you
can
review
them.
You
know
at
will
anytime
you
want
to
you
can
read
through
them.
You
can
actually
copy
them
if
you
want
to
and
just
duplicate
them
in
your
own
projects,
but
the
better
way
to
do
it
is
to
use
that
include
keyword
with
the
template.
A
So
templates
are
always
executed
into
a
CI
CD
pipeline
through
an
include
statement
in
your
project's .gitlab-ci.yml
file
and
template
jobs
are
created
in
your
CI/CD
pipeline,
based
on
their
defined
stage
and
in
the
applicable
rules.
All of our defined template jobs have got some kind of rule attached to them, so it's probably a good idea to go in and take a look at those and make sure you're meeting the requirements for them. So there's four types of include statements.
The
first
one
again
include
template
that
you
can
see
in
the
upper
left
here.
A
Then
you
also
have
the
ability
upper
right
to
include
include
a
file,
that's
in
an
external
project,
and
this
is
how
you
would
create
template
projects
that
your
devops
engineers
would
manage.
Independent
of
the
teams
that
are
managing
the
projects
in
coding
form
and
with
this
particular
syntax,
which
I
think
of
as
include
project
by
the
way
project
is
the
keyword,
but
we
delineated
a
file
there.
A
The project is the path from the root of your GitLab instance (here, gitlab.com): your namespace (root-level group), any subgroups, and then the name of the project that the template file is in,
and
then
the
file
is
just
the
path
from
the
root
of
the
project
and
then
include remote is a special circumstance; realize a couple of things: you're going to give GitLab a URL to follow to read this particular pipeline file.
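The include forms described above can be sketched together (the group, project, and file paths are illustrative; the keywords themselves are real):

```yaml
include:
  # 1. a template that ships with GitLab
  - template: Security/SAST.gitlab-ci.yml
  # 2. a file from another project on the same instance
  - project: my-group/my-subgroup/ci-templates   # path from the instance root
    ref: main
    file: /templates/build.yml                   # path from that project's root
  # 3. a file in this same repository
  - local: /ci/deploy.yml
  # 4. a publicly reachable URL (no way to authenticate)
  - remote: https://example.com/pipeline.yml
```

The project form is the usual way devops teams publish shared pipeline definitions for other teams to consume.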
A
Anonymous,
that's
actually
possible
to
do
it's
kind
of
a
it's
kind
of
an
edge
case.
So
I
can
see.
Chris
is
typing
up
in
an
answer
for
you,
so
we'll
let
Chris
go
ahead
and
answer
that,
but
it
is
possible
to
do
that.
A
It
would
you
know
one
way
to
do.
It
is
by
extending
that
particular
project
that
particular
file
that
particular
job
you
declare
new
new
jobs
and
then
you'd
use
the
extends
keyword
to
extend
the
properties
of
the
the
original
one,
and
then
you
know
use
those
to
create
some
use
that
job
definition
to
create
some
variation
on
it.
A
So
again
include
remote
no
way
to
authenticate
to
it.
It's
got
to
be
a publicly accessible file available at the URL,
so
we've
got
options
for
customizing
these
job
behaviors.
In
these
templated
files,
GitLab's templated files have got
the
ability
to
change
how
those
jobs
run
in
a
couple
of
different
ways.
A
So
this
is
kind
of
interesting,
but
this
is
even
though
a
job
is
defined
in
a
template
or
somewhere
else
in
your
pipelines.
You
can
redeclare
it
again
and
then
you
can
override
the
properties
at
the
job
itself
and,
in
this
particular
case,
we're redeclaring the secrets analyzer
and
we're
overriding
the
property
of
the
image
that
it's
going
to
use.
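A sketch of that override pattern, assuming the secret_detection job name from GitLab's template (the replacement image here is purely hypothetical):

```yaml
include:
  - template: Security/Secret-Detection.gitlab-ci.yml

# redeclare the templated job and override just the property you need
secret_detection:
  image: registry.example.com/my-hardened-analyzer:latest   # hypothetical image
```

Only the redeclared property changes; every other attribute of the job still comes from the template.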
A
So
understanding
the
include
SAST job's default behavior,
you
can
look
at
and
by
the
way,
you're
going
to
be
again
you're
going
to
be
getting
a
copy
of
these
slides
tomorrow
in
an
email
but
you'll
notice
that
these
slides
have
just
got
a
prodigious
number
of
links
in
them,
so
as
you're
following
through
them.
If
something
is
of
interest
to
you,
just
follow
the
links
and
go
ahead
and
take
a
look
in
this
particular
case.
We're
linked
to
the GitLab SAST documentation.
So
you
can
understand
what
variables
are
defined
available
for
you
to
override.
A
You
can
also
look
at
the
SAST
template
itself
and
see
how
the
job
is
defined,
and
we'll
actually
be
doing
an
example
of
that
during
the
course
of
our
work
today,
and
you
can
see
several
things
that
are
available
that
are
defined
variables
in
this
SAST
job
in
the
lower
left
down
there,
you
can
override.
A
Now
one
thing
real
important
to
realize:
gitlab
uses
open
source
security
scanners
and
ones
that
we
made
contributions
back
to
by
the
way
to
keep
them
moving
and
well
maintained.
But
what
that
means
is
that
there's
a
whole
bunch
that
are
wrapped
up
into
every
single
one
of
our
security
scanners,
so
that
it
has
the
ability
to
cover
a
broad
range
of
languages
and
with
static application security testing specifically, since it's looking at the code itself,
there's
a
very
long
list
of
of
languages
that
it
covers.
A
Let's
just
tell
the
job,
it
only
needs
to
use
node.js
by
excluding
all
the
rest,
and
you
can
see
that
delineated
over
here
in
the
lower
right
in
this
section
right
here
we're
just
making
a
list,
that's
comma,
separated
of
all
the
things
that
we
don't
want
to
use
so
that
you
know
that
particular
job
doesn't
waste
any
time,
checking
those
out
and
trying
to
make
sure
that
those
are
the
ones
that
it
needs
to
use
and
in
this
particular
in
this
particular
case,
you
can
see
this
declared
in
our
Global
namespace
again
with
the
variables
using
the SAST_EXCLUDED_ANALYZERS variable.
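A sketch of that global-namespace exclusion, assuming a Node.js project (the exact analyzer list you exclude depends on your project; the names shown are a subset of GitLab's documented analyzers):

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml

variables:
  # comma-separated list of analyzers the SAST job should skip
  SAST_EXCLUDED_ANALYZERS: "bandit,brakeman,flawfinder,gosec,spotbugs"
```

Excluding analyzers for languages you don't use keeps the SAST job from wasting time probing for them.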
A
A
There's
keywords
that
you
can
use.
Artifact
is
the
keyword
as
a
matter
of
fact,
it's
a
job
property
that
you
can
delineate
and
you
just
give
a
path
to
the
artifacts,
whatever
they're
going
to
be
and
get
Lab
at
the
end
of
that
job
is
going
to
upload
those
artifacts
back
to
gitlab.
So
if
those
are
available
for
everybody
else
to
be
able
to
look
at
and
when
you're
ready
to
go
in
and
get
those
you
can
go
to
the
pipelines
page
if
you
want
to
so
the
pipelines.
A
Page
ordinarily
has
a
line
for
every
single
pipeline.
That's
been
run
in
the
project
and
it's
got
a
download
button.
If
you
click
on
the
download
button
there
and
you
download
the
archive
file
there,
that's
going
to
be
an
archive
file,
that's
going
to
have
all
the
artifacts
from
every
single
job
that
left
them
behind
and
by
the
way,
all
of
our
security
testing
apps
jobs
that
you
can
include
in
your
pipelines
are
all
going
to
leave
artifacts.
That
are
a
form
of
report.
A
So
just
be
aware
of
that.
Those
reports
are
there
and
again,
if
you
download
from
the
pipelines
page
you're,
going
to
get
an
archive
file,
it's
going
to
have
every
single
archive
defined
in
the
entire
pipeline
included
in
it.
A
Looking
at
an
individual
pipeline,
you
can
go
to
the
jobs
and
then
each
individual
job
has
got
the
same
download
button
on
the
far
right
side
of
its
line.
And
again
it's
going
to
be
an
archive
file
that
has
everything
that
that
job
generated,
but
if
you
were
to
click
through
to
a
job,
so
you
click
on
a
job.
You
go
to
inspect
its
job
log.
There's
also
going
to
be
the
option
to
download
it
on
the
right
hand,
navigation
area,
so
that
you
can,
you
can
do
it
from
there
as
well.
A
So
maybe
you
inspect
the
job
log
and
you
think.
Okay,
that's
kind
of
interesting
I
want
to
take
a
look
at
the
artifact.
You
can
download
it
from
there,
but
you
also
have
a
browse
button
over
here
on
the
right
and
if
you
click
on
the
browse,
GitLab's going to open up the archive file
and
let
you
look
at
the
contents
of
it.
So
you
can
see
everything
that's
in
there.
A
Artifacts
have
the
ability
to
really
overwhelm
your
disk
space
over
time.
They
just
do
and
deleting
them
is
really
a
pain
in
the
route,
because
you
have
to
follow
through
to
every
single
job
pipeline.
The
latest
artifacts
stuff
like
that
it
just
takes
an
incredibly
long
time
to
do
so.
Best
practices
are
to
use
this
expire_in keyword.
A
That's shown here on the upper left; it tells GitLab how long to keep that artifact file for,
and an hour after it
uploads
that
artifact
gitlab
is
going
to
just
go
ahead
and
delete
it
all
by
itself.
Now,
let's
say
that
you've
run
a
pipeline.
A
You've
got
a
job
that
left
an
artifact
and
you
really
want
to
keep
that
artifact.
So
several
other
people
can
review
it.
If
you
need
to,
you
can
go
to
the
job
page,
so
you
click
on
the
job
you're,
looking
at
the
job,
log
and
you'll
see
over
on
the
right
hand,
side
in
the
navigational
elements
that
we've
got
the
option
to
keep
it
there.
If
you
click
on
that Keep button, it's going to just completely nullify the expire_in option.
A
And
by
the
way,
I
forgot
to
remove
one
of
our
stages
here,
let's
go
ahead
and
remove
that
now
so
under
where
we
Define
the
image
for
our
pipelines
by
the
way.
This
is
just
a
global
keyword
image.
It
just
creates
a
default
image.
The
jobs
can
overwrite
it
if
they
want
to,
but
it's
there.
The
jobs
can
use
it
if
they
want
to
as
well.
A
Now
the
next
thing
it's
doing
is
it's
wanting
you
to
kind
of
look
around
a
little
bit
in
this
pipeline.
Editor
I
want
you
to
notice
it
up
at
the
top
we're
on
the
main
branch.
You
have
the
ability
to
edit
this
on
different
branches.
If
you
want
to
we're
just
sticking
with
main
for
today. You might actually have a use case for that, if you have branches that have got a different pipeline, and
you
can
see.
We've
got
a
tree,
expand
icon
over
here.
A
So
let's
go
ahead
and
expand
that
out,
it's
going
to
show
all
the
jobs
that
are
included
that
we're
playing
with
right
now
and
because
we
added
this
include
template
job.
That's
the
SAST job!
You
can
see
that
we've got it delineated here now.
The
interesting
thing
about
this
is
that
you
can
actually
click
on
this.
If
you
want
to
in
this
case,
take
you
right
to
the
jobs
code
in
gitlab.
A
Now
the
interesting
thing
about
this.
In
this
particular
case,
you
know
I'm
on
gitlab.com.
You
can
see
that
at
the
very
top,
but
this
is
the
actual
SAST
job,
and
something
that's
kind
of
interesting
here
is
that
this
SAST
job
will
never
show
up
in
your
pipelines.
It's
just
a
template
for
other
jobs
to
extend
and
use,
and
you
can
see
that each SAST analyzer extends the SAST job,
so
any
changes
we
make
to
the
SAST
job
are
going
to
be
reflected
in
any
of
these
jobs
that
extend
it.
A
Let's
close
this
back
up.
A
So
if
we
wanted
to
look
at
in
this
case,
we're
not
actually
looking
at
the
jobs
code
itself,
if
we
wanted
to
look
at
the fully merged configuration,
we
can.
We
can
go
to
the
full
configuration
and
it's
going
to
give
us
the
whole
thing
Now notice the template got plugged in at the very top,
but
it's
all
here
you
can
look
at
all
the
code.
If
you
want
to.
A
And
also
notice,
it's
telling
us
our
pipeline
syntax
is
correct
if our pipeline syntax was bad for some reason, it would show us that at the very top so we could go correct it immediately,
and
so
now
we've
got
a
pipeline
running.
Let's
go
ahead
and
go
and
see
the
build
and
pipelines
take
a
look
at
it.
A
We
can
see
the
build
up
and
the
unit
test
is
running
immediately
because
it's
allowed
to
also
code
quality
is
already
failed
on
us.
So
just
remember
that
we
we
allowed
code
quality
to
fail
and
purposely
made
it
fail,
and
we've
got
these two new jobs included here, semgrep-sast and nodejs-scan-sast, because we already put the excluded analyzers in there.
So
this
one's
running
it's
it's
eligible
to
run.
Excuse
me
we're
not
going
to
wait
on
it,
but
we
can
come
back
to
it
anytime.
A
And
notice
that
we're
moving
it
into
the
stage
that
we've
delineated
up
at
the
top
called
security
for
some
reason,
we
just
want
to
keep
it
in
a
separate
stage,
but
we've
also
added needs with an empty array there, so the SAST is eligible to run immediately, just as soon as the pipeline starts running. And again, SAST is not dependent upon any build artifact; it runs directly on the code.
So
this
is
okay
to
do.
A
Now,
let's
play
around
a
little
bit
with
artifacts,
so
let's
say
that
we
we
want
to
store
the
results
of
the
build
up
job
in
an
artifact.
That's
how
to
change
the
job
to
do
that.
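A sketch of storing that build output as an artifact with the one-hour expiry used here (the job name and file are illustrative):

```yaml
build-b:
  stage: build
  script:
    - echo "build B output" > build-output.txt
  artifacts:
    paths:
      - build-output.txt
    expire_in: 1 hour   # GitLab deletes the artifact automatically after this
```

Once the job finishes, the artifact is uploaded back to GitLab, where it can be downloaded, browsed, or kept past its expiry with the Keep button.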
A
And
again,
we've
got
the
expired
in
one
hour
set
there
so we'll go ahead and click on Commit changes and then use the left-hand menu to click through to Build > Pipelines.
So
let's
go
ahead
and
Commit
This
right,
I'm
working
with
a
couple
screens
here.
A
It's
going
to
delineate
all
the
jobs
here
and
notice
that
there's
nothing
available
for
us
here
except
to
cancel
it,
but
at
the
point
there
becomes
a
artifact
available
there,
we'll
get
a
download
icon
there.
A
But
when
you
transfer
the
project,
if
you
do
not
have
an
ultimate
license,
you're
going
to
lose
capabilities
now,
again, this is just a standard disclaimer for all of our workshops. Nothing that we've done today will fail to transfer; it'll transfer out to absolutely anywhere and keep everything intact.
A
Let's
just
take
a
real
quick
look
at
that:
we're
not
going
to
transfer
this
project
today,
oh
by
the
way,
look
at
that
our
job
artifacts
showed
up
so
now
we
can
keep
it
if
we
want
to,
for
some
reason,
there's
something
in
here
that
we
need
to
be
able
to
share
with
others
because
remember
we
had
it
expired
in
one
hour,
but
we
can
also
download
it
and
if
we
want
to,
we
can
browse
it.
Take
a
look
at
the
archive.
A
I don't have an explanation for this, folks,
but
there absolutely are artifacts left behind by the scanners; the artifacts are reports, and
the
reports
are
something
that
triggers
widgets
for
people
who
have
the
ultimate
subscription.
So
for
some
reason
some
reason
it
doesn't
have
our
job
artifacts.
Here,
let's
go
back
and
see
if
I
can
figure
out
a
way
later.
A
All
right
so
again
here's
your
instructions
for
transferring
the
project.
If
you
want
to
do
it,
you
can
follow
through
here.
You
can
move
it
to
your
personal
namespace.
You
can
move
it
to
some
namespace
owned
by
your
organization,
if
you
want
to
just
so,
you
can
review
and
play
with
it
later
if
you
want
to-
or
maybe
just
show
different
aspects
of
this
off
to
people,
so
you
can
also
transfer
your
group
by
the
way.
So
you
have
that
ability
as
well.
You
can
create
a
subgroup
of
some
other namespace.
A
So now let's talk about a couple of other things real quickly; I want to touch on these. Number six is an optional security and compliance exercise that looks a lot like what we went through today, wherein you're going to fork a project, and that fork is going to become your new source project. After you've forked it, you're going to follow the issue list in that specific project, which is delineated for you here. So let's take a real quick look at that.
A
So you have the ability to scan licenses if you're on an Ultimate license, and you can also create a list of denied licenses so that your teams are all aware, and create security alerts on merge requests so that teams know, for example, that this can't go into our default branch. Then you're going to create on-demand scans, audit events, and some extra configuration around this, so I really recommend that you go back and give this a good look.
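As a rough sketch of the license-scanning setup described above (assuming an Ultimate license, and using the License-Scanning CI template that GitLab shipped at the time of this workshop; the denied-licenses policy itself is configured in the project's Security & Compliance UI rather than in YAML):

```yaml
# Enables license scanning on each pipeline (Ultimate feature).
# Denied-license policies are then set in the UI, and violations
# surface as alerts on merge requests.
include:
  - template: Security/License-Scanning.gitlab-ci.yml
```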
A
Now, let's go back. I'm going to wrap up here just as quickly as I can, because we're over our run time by a little bit now. We've also got one other project here that I'd really recommend you take a look at, especially the DevOps teams: you're going to want to be able to create these multiple independent pipelines that might run for any single project, or you're going to create these repositories.
A
Let's take a real quick look at this. This particular project is something that I maintain, because I went through a lot of this with my customers, and just for awareness: it's a template for getting started with creating multiple independent pipelines, but it has absolutely no job logic. It's like the jobs we were playing around with when we were doing the DAG graphs.
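The logic-free DAG jobs mentioned above can be sketched with the `needs` keyword, which is what draws the DAG graph and lets each job start as soon as its dependencies finish rather than waiting for the whole prior stage (job names here are illustrative placeholders):

```yaml
# Two stub jobs with no real logic, wired into a DAG via `needs`.
build-a:
  stage: build
  script:
    - echo "build-a"

test-a:
  stage: test
  needs: ["build-a"]   # starts as soon as build-a finishes
  script:
    - echo "test-a"
```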
A
It also uses a shell-based runner rather than a Docker runner, because shell runners don't have to download a Docker image to fire up. A shell-based runner can run that environment command real quickly and then just flip on to the next job, so it creates a very quick way to execute the pipelines. This particular pipeline, by the way, has a README... oh, I'm in the wrong pipeline here.
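For reference, the shell-versus-Docker choice above is just the executor setting in the runner's `config.toml`. A minimal sketch, with placeholder name, URL, and token:

```toml
# GitLab Runner registered with the shell executor: jobs run
# directly on the host, with no Docker image to pull first.
[[runners]]
  name = "shell-runner-example"   # placeholder
  url = "https://gitlab.com/"
  token = "REDACTED"              # placeholder
  executor = "shell"
```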
A
Okay, I'm going to have to go back and edit that in the workshop, but we do have a project that I'm going to be sending you to that's going to give you the ability to author pipeline workflows. I'm not sure why I'm not getting to it right now, but I probably have the link done wrong in that particular issue.
A
So please check back on this particular option, number seven, and I'll make sure the link is correct as soon as we get out of the workshop. It just gives you a chance to look at ways of playing with rules. It has some fairly complex rules already set up in it right now, and it's a way for you to kind of stub out pipeline workflows independent of job logic. So I'm at the very end for today.
A
All right, for some reason my video is not showing up. I apologize for that; not sure what's going on here... there it is. We appreciate your time today. Are there any final questions before we go?
B
We answered a bunch during the...
A
Well, you sure did, holy cow! I appreciate everybody's feedback and questions. Thanks, everyone. Appreciate it.