From YouTube: Hands-On GitLab CI Workshop - AMER Time Zone
Description
Watch this hands-on GitLab CI workshop and learn how it can fit in your organization.
Learn how to build simple GitLab pipelines and work up to more advanced pipeline structures and workflows, including security scanning and compliance enforcement.
A: Okay, let's go ahead and jump in. We still have some people joining us, but we'll need to get started here. Thank you again for being with us today; we're looking forward to going through the content.
A: This is a hands-on workshop, so we will be getting into GitLab with all of you, and we have instructions on everything that you'll need to do. Before we jump in, though, I wanted to go through a couple of housekeeping items. First off, we are recording this session. We'll send it out in the next day or so, so be on the lookout for that; we'll include the recording and the deck with it.
A: If you have questions that come up throughout the session, we have a Q&A within Zoom. So if you hover over your Zoom app and click on Q&A, you can submit your questions there, and we'll be able to answer those throughout. So please, as you have questions, let us know; we're looking forward to engaging with you. I'm going to pass over to my colleague, Steve Graham. He's one of our customer success engineers at GitLab, and we're looking forward to hearing from him today.
B: Thank you, Taylor. My name is Steve Graham; I'm a customer success engineer on the Scale team, which is Taylor's team, by the way, for whatever that's worth. I want to add a couple of quick footnotes to what Taylor was just talking about.
B: So one of the other things we're going to be doing during the course of the day is sending out invitations to join us in a single-channel guest Slack channel on GitLab's hosted Slack. That's going to give you access to the customer success engineers on my team, people who can help intervene if you're running into any problems while you're working through the various workshops. We've got more than one: we're going to cover one workshop today, but then there's an optional one that you can follow up with.
B
If
you
want
to
pursue
things
like
security
and
compliance
and
get
to
a
good
understanding
in
and
around
those,
so
some
of
you
signed
up
for
a
one-hour
session.
Today
we
ended
up
extending
it
out
to
90
minutes
because
of
the
amount
of
content
that
we've
got
it
covered.
But
if
you've
got
to
drop,
you
know
an
hour
in
just
look
for
the
email.
You
can
resume
our
session
whenever
you
have
time
by
going
to
the
recording
and
just
fast
forwarding
up
to
an
hour.
B
So
as
I
was
saying
we're
going
to
be
including
some
optional
content,
it
will
be
optional,
but
we'd
like
you
to
engage
in
it
and
take
advantage
of
it.
So
you
get
to
an
understanding
of
how
gitlab
addresses
security
and
how
we
address
compliance,
both
of
which
are
very
important
for
all
of
our
customers.
B
And
then
the
other
thing
I
wanted
to
just
stress
was
that
all
of
the
these
two
workshops
that
I've
brought
together
today,
the
one
that
we're
going
to
cover
today
and
the
one
that's
optional,
both
rely
upon
using
the
old
navigation
at
gitlab,
and
we
just
released
the
new
navigation,
which
has
different
menu
options,
arranged
in
a
different
way
with
different
naming
conventions.
B
So
let
me
just
real
quickly,
share
my
screen
and
give
you
a
real
quick
chance
to
see
what
I'm
talking
about
this
is
the
old
navigation
here
and
if
I
want
to
switch
to
the
new,
which
is
what
you're
probably
defaulted
to
right
now.
I
can
just
simply
do
this
and
then
the
navigation,
as
you
can
see,
changes
up
over
here,
but
if
I
want
to
switch
back
to
the
old
one,
which
is
what
you'll
need
as
you're,
going
through
the
instructions
that
we've
prepared
for
you
today.
B
B
B: So, let's start diving into the content we have today. It's really important that you have a gitlab.com account; you're going to need it so you can be provisioned the subgroup that you're going to get. In our GitLab Learn Labs setup, we have a namespace group, a root-level group on gitlab.com, that we host these types of sessions in, and within that namespace area you're going to be allocated an Ultimate subgroup that belongs to you, and you'll be made the owner of it.
B: So you can put projects in there, and you're going to be forking a couple of projects in there if you follow through on all the work that we're hoping you do over the course of the next day or so. Now, just for awareness, these usually live for a day, maybe two tops. We managed to get our team to allow us to have four days for these, so you'll get all the way through Thursday evening, and then these Ultimate groups that you're provisioned will be de-provisioned on June 16th.
B: This is our agenda for the day, and the lab setup is the most important part of what we're going to be doing today, because we want to make sure that you get through it. If you have a problem, please put it into the Q&A area so that my peers can jump in and see if they can help you eliminate the problem. Then we're going to go through some basic pipeline exercises that are meant to be elementary in nature.
B: So let's dive in. The premise is that you're officially part of a brand-new startup called Tanuki Racing.
B: This is just a scenario: your company has recently swapped over to using GitLab for CI/CD and has tasked you with learning about the different pipeline capabilities, and that's exactly what this workshop is intended to help you do today, especially if you can follow up on the optional parts, which there are clear instructions for, as you'll see as we go forward.
B: So let's get into the lab setup. Now, these next steps are really critical, so if you have a problem, please do post in the Q&A. If you're catching this recording after we've distributed it via email and you've gotten your Slack request, please join our Slack channel if you have a problem and make sure to ask us for help there. We will do everything that we can, and again, this is available to you for the next four days.
B
Now
what
you're
going
to
need
to
do
now.
It's
real
important
that
you
have
that
gitlab.com
account.
If
you
don't
have
a
gitlab.com
account,
you'll
need
to
go
to
getlab.com
and
exercise
the
option
to
sign
up
so
and
then
just
take
a
minute
validate
that
account
so
that
we
can
proceed
on
from
here.
B
The
the
the
Crux
of
this
is
you're
going
to
go
to
www.gitlabdemo.com.
B: On this screen, you'll want to put in the invitation code that you see down at the bottom. Following that, you're going to have to know your GitLab username. Now, it's really important that that not include the at symbol; it should just be the alphanumeric characters that follow it. So capture your GitLab name; you'll need it on the next screen that we're going to go to. In any case, the expectation here is that your invitation code is already at the top.
B
This
git
lab
URL
that
it's
giving
you
that
URL
is
going
to
take
you
to
your
provisioned
training
group
and
you'll
end
up
with
something
that
looks
very
much
like
this
now
it'll
have
a
different
string
here.
This
iuc
vibe
the
NTU.
It's
going
to
be
different
for
you,
but
it'll.
Look
essentially
like
this
and
it'll
be
underneath
the
gitlab
learn
bios
subscription.
So
if,
by
chance
you
end
up
here,
then
there
might
have
been
a
misclick
somewhere.
B
If
that's
the
case,
please
go
back
to
www.getlabdemo.com.
B
By
the
way
that
invitation
code
at
the
bottom
is
wrong,
you'll
need
to
go
back
and
get
it
from
one
of
the
previous
screens,
but
put
the
invitation
code
in
again.
Provision
put
your
gitlab
username
in
again
and
then
provision
training,
environment,
and
then
you
should
end
up
back
here
and
again.
If
you
run
into
a
problem,
please
put
it
into
our
q
a
so
my
peers
can
chase
it
down
see
if
they
can
help
you
help.
You
resolve
that.
B
So,
let's
go
on
from
here
we're
going
to
talk
about
setting
up
a
simple
pipeline.
I'm,
sorry,
Dave,
I'll
slow
down
a
little
bit.
We've
got
a
lot
of
content
to
get
through
today,
so
I'm
gonna
have
to
kind
of
jam
a
little
bit
we're
going
to
talk
about
setting
up
this
simple
pipeline
Dave.
If
you
wouldn't
mind,
would
you
put
in
your
gitlab
username
that
you
were
using
that
you
put
in
there
and
let
my
let
my
peers
kind
of
Chase
that
down
a
little
bit.
B
B
B: But if any of you are looking for the invitation code, look in our answered questions and you'll see Brian asking for the invitation code to be added to the chat. If you show the answers on that, you'll see a URL to gitlabdemo.com, another one to use to sign up at gitlab.com if you need to do that, and then the registration code is in there. So please continue on; let's make sure that you're all able to get to your provisioned environments.
B
This
particular
URL.
Let
me
capture
it
real
quickly,.
B
B
B: If you look at Brian... Brian Jackie... dang it, right when I said that. Okay, there we go. If you look at Brian Jackie's question in the answered questions, you'll see that I've put that URL there so that you can access it. You need to access this particular project, which is the one we're going to be forking and bringing back into our provisioned groups, and we'll talk about that more in just a second.
B: Okay, thank you, appreciate that. All right, so what you're seeing is that I'm in the same place in both windows right now. The one on the left we're going to keep there, because we want to be able to rely on the issue list, and by the way, this was the URL that I put under Brian Jackie's question in the answered questions.
B
So
the
issue
list
is
where
we're
going
to
find
our
list
of
instructions,
but
the
first
thing
we
need
to
do
is
we
need
to
land
there,
which
is
why
I've
got
it
on
the
right
one
by
working,
we
want
to
be
able
to
forgiven
now.
The
way
this
is
going
to
work
is
the
first
thing
we
need
to
do
is
Select
our
namespace
and,
if
you'll
bear
with
me
for
a
minute
I'm
going
to
find
the
right
one
here.
B: All right, so you're going to pick the provisioned subgroup to put this into, the one that you were provisioned in this particular process, and once you do that, you're going to fork the project. Now, this is going to take a minute or two; GitLab is going to copy it over there for you. The important thing to realize is that the issues we have in the source project that we forked from are not going to come over. That's why we want to keep this other window open here.
B: And then you should be seeing something that looks like this: you should be seeing "CI/CD Adoption Workshop", and you'll see Logan's name up there; he's one of the creators of this workshop. From that point there are some instructions you'll need to follow, but we'll just go on from there. So let me go back to my main screen.
B: All right, now that we've got that (and we just did this): you're going to select the group that you were provisioned under GitLab Learn Labs, and then once you do that, you're going to fork the project. Now, this next step is important to do: we're going to want to remove that fork relationship so that we can continue on and work in this without causing any disruptions related to it. Let me go back here one more time.
B: So, Haley, let's go back and do that one more time.
B
And
then
scroll
down
to
Advanced
and
then
they
can
scroll
down
to
oh,
it
doesn't
have
it
anymore,
I'm.
Sorry,
one
of
the
options
you'll
see
down
here
is
to
remove
the
fork
relationship
since
I've
already
done
it.
It's
no
longer
here
anymore,
but
let's
back
up
here,
so
that
you
can
kind
of
see
what
that
looks
like.
B
Again,
you're
going
to
go
to
settings
General
scroll
down
to
the
advanced
section.
Removing
the
fork
relationship
makes
it
easier
for
you
just
to
proceed
on
and
do
what
you
want
to
do
at
that
point.
You're
completely
independent
I've
never
tried
to
maintain
a
fork
relationship
on
something
that
I
was
going
to
fork
and
work
on
subsequently,
so
I
don't
know
that
it's
going
to
cause
any
disruptions
for
you,
but
this
is
considered
a
very
important
part
of
our
instructions
by
our
creators
of
this
particular
scenario.
B: So once you go to General, scroll down to Advanced, expand it, and then find the "Remove fork relationship" option. If you click on that, you're going to have to type in, or cut and paste, the name of the project in its URL form, the portion that's up at the very top. Once you do that, the fork relationship is gone and we don't have to worry about it anymore.
B: So again, in the source project (thank you, Haley, appreciate you confirming that), if you go to Issues, you'll find our instructions, and there's quite a list of them. There are actually more than the five you're seeing presented here; there are seven in total, because I've put two optional ones at the end.
B: So let's go ahead and proceed on from here. Let's talk about what a basic pipeline's anatomy consists of. The columns (we covered this in our webinar last week, but just to remind you all) represent stages that you've created in your .gitlab-ci.yml file, and inside of each stage are jobs that are going to run. The jobs run independently, often on different runners; especially if you're using shared runners, the probability of landing on the same runner twice is very, very slim.
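The stage-and-job structure he's describing can be sketched in a minimal .gitlab-ci.yml; the stage and job names here are illustrative, not taken from the workshop project.

```yaml
# Two stages (the columns in the pipeline view), each containing
# jobs that run independently, often on different runners.
stages:
  - build
  - test

build-job:
  stage: build
  script:
    - echo "Compiling..."

unit-test:
  stage: test
  script:
    - echo "Running unit tests..."
```

By default the test stage waits for the whole build stage, which is the normal stage order covered later in the session.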
B: Now, there are the job keywords, which they're calling statements at the top here: script, before_script, and after_script. script is required: if you don't define the script section, which you can see enumerated here, your YAML won't validate, the job will fail, and the pipeline won't even run. When you do define the script section on your jobs, each line in it is essentially a shell command. Now, they may be executing in Docker, which is the case in the test projects we're going to be working with today, but they're executing at a shell inside that Docker container. before_script is something that you can set up to run beforehand. So maybe you need to pull in some libraries; maybe you've got a base stock Docker image and you've got to pull some libraries in to support what you want to get accomplished. Although, if you're having to spend a lot of time running things in a before_script, that's probably a real good opportunity to go back and look at the Dockerfile that you're using, or that you may be creating in your project, and potentially add the libraries there instead of having to install them in the before_script. after_script runs in a separate shell after the before_script and script statements.
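A minimal sketch of those three sections; the image name, package step, and commands are assumptions for illustration, not the workshop's actual job.

```yaml
unit-test:
  stage: test
  image: python:3.11       # the job's shell runs inside this Docker image
  before_script:
    - pip install -r requirements.txt   # setup; if this grows large, bake it into your image instead
  script:                  # required; each line is a shell command
    - pytest
  after_script:            # runs in a separate shell, even if script failed
    - echo "Cleaning up..."
```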
B: So a pipeline is going to run all the jobs you define in that pipeline. There are tags that you can use to run specific jobs only on certain runners, if you need to do that. You may have some special use cases where you have to use a specially tagged runner; a good example of this would be maybe you've got a runner provisioned on macOS.
B: You know, whereas the standard is Linux, for whatever that's worth. Job duration can take any amount of time, but jobs are typically picked up within about five seconds, provided that you've got runner capacity. So if you're using GitLab shared runners, the capacity is just off the charts; but if you're provisioning your own runners, you're going to want to make sure you've got enough capacity to pick up the jobs for all the different projects that might be sharing those runners.
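Pinning a job to a specially tagged runner, like the macOS example, looks like this; the job name and tags are hypothetical.

```yaml
# Only runners registered with ALL of these tags will pick this job up.
ios-build:
  stage: build
  tags:
    - macos
    - xcode
  script:
    - echo "Building on a macOS runner..."
```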
B: All right, so the simple pipeline is the first exercise, and that's what we just got done talking about: basic concepts for simple pipelines. Let's take a real quick minute; I'm going to switch screens again here.
B: Thor, if you'll go back to our answered questions and look at the one that I answered for Brian fairly early on (it's the second or third answered question), you'll see that there's an activation code there.
B: Maybe one of my peers can hop in and help you out on that one. All right, so let's go back to the main project here and open up the simple pipeline instructions.
B
You
know
what's
up
here
at
step:
one
is
just
working
the
application
which
you've
presumably
already
done
now
we're
going
to
create
a
simple
pipeline
and
the
way
we're
going
to
do
this
in
case
you're
not
used
to
doing
this.
Yet
we're
going
to
use
and
remember
I'm
on
the
old
navigation
right
now,
not
the
new
navigation.
B
B
B: Let me expand it up just a little bit.
B: Let's commit these changes to main, which is our default branch here, and then, just for grins, let's take a quick minute: let's go over to Pipelines and see what's running there. What you'll see is that the unit test is waiting for the build job to be done, because, again, this is normal stage order.
B: All right, so we're going to do one thing: we're going to add this code quality job at the very end, and I want you to notice that it's got a needs keyword defined at the bottom. The needs keyword is designed to let us subvert normal stage order in GitLab pipelines. It essentially tells GitLab that as soon as this job is parsed (in this case, needs with an empty array is telling GitLab we have no requirements), this job is qualified to run just as soon as the pipeline fires up.
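That empty-array form looks like this; the job body is an illustrative stand-in for the workshop's code quality job.

```yaml
# needs: [] declares no dependencies, so this job starts as soon as
# the pipeline fires up, even though it sits in the test stage.
code-quality:
  stage: test
  needs: []
  script:
    - echo "Running code quality checks..."
```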
B: So let's go ahead and get that one in there. Now, one thing that's nice about using the GitLab CI/CD editor is that if we make a mistake here, it's going to tell us. We're going to get some errors telling us that this property I just put in isn't valid, and that's what's useful here: we've got a built-in linter, so we have the ability to see what kind of problems we might run into.
B: That's something nice about having this particular sandbox set up: you can play around with it. But it's still giving me a failure; it's reading needs as an independent job, because I didn't have it properly indented.
B: All right, we've got a running pipeline this time. Now, what I want you to notice on this one, different from the last one we looked at, is that the code quality and unit test jobs fired up immediately. Remember that we used the needs keyword and put empty arrays on each one of them, and as a result, they're able to run before the build job completes, which is out of normal stage order. This is an extremely efficient way of going.
B: So you see, we've just defined a whole bunch of jobs here, and needs is being used throughout. Now, if we want to visualize that, we can see it here: we see the jobs in the stages they're defined in, and there are dependency lines drawn between each one of them. So we can see that build_a is required for test_a, which is required for deploy_a; they run in that specific order, but they don't have to wait. test_a, for example, doesn't have to wait for the entire build stage to finish.
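That dependency chain can be sketched like this; the job and stage names follow the on-screen example, and the script bodies are placeholders.

```yaml
stages:
  - build
  - test
  - deploy

build_a:
  stage: build
  script:
    - echo "building a"

test_a:
  stage: test
  needs: ["build_a"]    # starts as soon as build_a finishes,
  script:               # not when the whole build stage does
    - echo "testing a"

deploy_a:
  stage: deploy
  needs: ["test_a"]
  script:
    - echo "deploying a"
```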
B: So, Nisha, in a normal circumstance, yes; but as you can see, we defined build jobs for every single different scenario that was there, so just be aware of that. And let's go ahead and pull up the pipeline and take a look at it.
B
In
this
scenario,
we've
got
different,
build
apps
running,
and
you
know
you
can
see
that
these,
let's
see,
let's
take
a
quick
look.
Those
are
already
ready
to
go.
B
B
B
B
B: So, after showing off your simple pipeline to the team, they really liked it, but they're wondering if you could speed up the process a little bit. You decide you're going to show off your skills and show how you create a pipeline with different execution orders, as well as a large DAG, a directed acyclic graph, to show what is really possible. And by the way, Mike, just for awareness.
B: So we're going to use a directed acyclic graph to show what's really possible. Now, as a reminder, jobs in the next stage will start after all jobs in the previous stage have completed successfully; this is by design. But with a directed acyclic graph, jobs need not wait like that. Normally, jobs in the test stage execute after all jobs in the build stage are completed. The desired state here is that we want code quality to be able to run without build, because it doesn't require build, and we want to keep code quality in the test stage, since it categorically belongs there, but it doesn't need to wait for the build to run.
B: So, in that scenario, the code quality job would get this needs keyword with an empty array. Now, normally this would be a list of jobs that it depends on, in an array that follows below, like this; but in this case, if we just put an empty array there, that tells GitLab that the code quality job is ready to execute as soon as the pipeline fires up. And then, as the pipeline starts, code quality is going to fire up at the same time as build.
B: Now, it's possible with GitLab to actually build stageless pipelines, if you want to do that. I'm going to tell you very frankly, I prefer to keep the stages there. Having a stageless pipeline means that everything is just a long column of stuff that's going to be executing as it becomes eligible to run; but by keeping it in defined stages, tests belong to the test stage, builds belong to build, deploys belong to the deploy stage, and yet they don't have to wait, using needs.
B: We can go ahead and subvert the waiting. A stageless pipeline can make your pipeline more efficient, but I'm going to suggest that as long as we're using needs, even if we're also using stages, it's going to be the same speed, and we can explicitly configure the execution order so that we get faster processing. A more efficient pipeline equals less cycle time. You don't have to go stageless to make this work; you can absolutely still use stages, so you've got a categorization of your jobs that makes sense.
B: As for how to navigate there: remember that we were in the CI/CD editor, where we used the needs keyword, and then we hit Visualize, and it gave us a very clear depiction of it.
B
So
if
you
come
back
to
the
team
and
show
them
your
new
pipeline,
you
notice
that
one
of
your
test
jobs
is
failing,
but
after
taking
a
look
at
the
job,
you
were
able
to
determine
that
you
don't
actually
need
to
enforce
it
passing,
but
still
want
to
see
the
results.
This
texture
section
will
show
you
how
to
use
rules
and
failure.
B
Clauses
in
your
git
lab
Pipelines
B: So be aware that allow_failure: true is what we're seeing here: we're seeing this orange circle with an exclamation point in it, which is a way of telling us that that particular job failed, but it's allowed to fail. So we've explicitly configured allow_failure: true. Now, the default is allow_failure: false, so if we don't put this into our job and the job fails, it's going to stop the pipeline, and deploy will never be able to run, nor will anything that comes after test that's not already running.
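A minimal sketch of that setting; the job name and script path are hypothetical.

```yaml
# The default is allow_failure: false, so this must be set explicitly.
# A failed job then shows the orange exclamation icon and the pipeline
# continues instead of stopping.
flaky-test:
  stage: test
  allow_failure: true
  script:
    - ./run-experimental-tests.sh   # illustrative script path
```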
B: So, the job defaults (and we're going to talk about this more when we get to rules): when dictates when the job is going to start, and when's default is on_success. So if we don't explicitly put a when statement in our rules or in our job, it's going to default to when: on_success and just run pretty much all the time, whenever you do anything to trigger a pipeline. allow_failure works the same way: you have to explicitly enable it by setting allow_failure to true.
B: If you have jobs that you want to allow to fail, but you still want to be able to examine the results on them, the jobs can be included in the pipeline when a rule evaluates to true and has a clause of when: on_success, when: delayed, or when: always (those have different use cases), or if no rule is defined but the job itself has a clause of when: on_success, when: delayed, or when: always; again, these are unconditional.
B: So the rules block always evaluates prior to script: your script isn't going to run until its rules have been evaluated, assuming that you put rules in there. This particular one is only going to run if somebody goes to the Pipelines page, hits Run pipeline, and manually executes it. Then this job is going to be included, but under any other circumstance it's not, because it doesn't have any other rules to match against.
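A sketch of a rule like the one being described; the job name and script are placeholders, and the "web" pipeline source is the one produced by the Run pipeline button.

```yaml
# Included only when someone starts the pipeline from the UI.
# With no other rules to match, every other trigger skips this job.
manual-only-job:
  stage: test
  script:
    - echo "Only runs for pipelines started from the Pipelines page"
  rules:
    - if: $CI_PIPELINE_SOURCE == "web"
```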
B: So, let's take a quick minute and talk about the different operators that rule syntax can use. There's if, and if can also be combined with changes and exists. changes you can think of in terms of pipelines that accompany commits or merge requests, where there are changes in the repository that you can delineate: you can say this file, this directory and its files, anything along those lines that you want to do. Now, exists is different: it's unconditional.
B: If this file exists in the repository at all, whether it's been changed or not, then we need to run this. An example of that might be a Dockerfile: we might want to run a Docker build only if a Dockerfile exists. The operators that we have available to us are what you'd think they are. The double equals is just equality: whatever we're comparing to as a variable has to match whatever we expect that string to be. Not-equals is exactly the opposite of that.
B: Regular expressions are also available in GitLab rules. It requires a little bit of boning up on, but once you get used to it, it's the same regular expression syntax you're used to everywhere else, using the right delimiters. And then the bottom two, the and (&&) and the or (||), are ways for us to chain different tests together in a complex rule.
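Here's a hedged sketch pulling those operators together; the variable names, image name, and paths are illustrative assumptions, not from the workshop project.

```yaml
docker-build:
  stage: build
  script:
    - docker build -t myapp .     # image name is illustrative
  rules:
    # ==, !=, and && chain string tests together
    - if: $CI_COMMIT_BRANCH == "main" && $SKIP_DOCKER != "true"
      changes:
        - Dockerfile              # only when the Dockerfile changed in this commit
      exists:
        - Dockerfile              # and only if one exists in the repo at all
    # =~ matches against a regular expression between / delimiters
    - if: $CI_COMMIT_TAG =~ /^v\d+/
```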
B: Then when, allow_failure, and start_in: again, these are job attributes, but they can also be set from rules. when allows us to delineate the options that are listed on the right there, and allow_failure we've already talked about.
B: So start_in can be 15 minutes, it can be three hours, it can be sometime tomorrow, whatever you need it to be; it just gives us a way to do a delayed job, should we have that kind of need. And then the when options are always, never, on_success, on_failure, manual, and delayed. The implication for never is a negative rule.
B: on_failure is kind of the reverse of that: it's a way for you to make a job that runs in a failed pipeline intentionally, so that you can potentially do some inspection and try to figure out what went wrong. manual is a special use case, and we talked about this when I was pointing out one of the pipelines before: if we use the manual keyword, you're going to get a little video-style play button on your job. Now, ordinarily, anybody who can run pipelines would be eligible to run that manual job.
B: But let's say that it's something like a production deploy. In that kind of circumstance, you can use protected branches, and you can use protected environments, to delineate who is allowed (by individual user, or by role in your project) to click on that deploy job. And then delayed again just uses the start_in functionality, and it's going to start based on some delay.
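Both forms can be sketched like this; the job names, scripts, and delay are hypothetical examples.

```yaml
deploy-production:
  stage: deploy
  when: manual            # shows a play button; pair with protected
  script:                 # branches/environments to restrict who can click it
    - echo "Deploying to production..."

nightly-cleanup:
  stage: deploy
  when: delayed
  start_in: 30 minutes    # any delay you need, e.g. "15 minutes" or "3 hours"
  script:
    - echo "Running delayed cleanup..."
```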
B: So when is the job not created in a pipeline? If none of the rules defined for the job evaluate to true, it's not going to run. So watch out if you've got rules delineated and you don't have a default rule. Now, notice that this one has a default rule that matches if the other ones don't. So if this particular CI pipeline source is a merge request event, we don't want this job to run in that circumstance at all.
B: If the CI pipeline source is a schedule (which you have the ability to do in GitLab: you can schedule pipeline runs at whatever intervals you want), then we don't want this job to run either. But under all other circumstances, given that there's no if here, just a straight-up when: on_success, this job is going to run under every other circumstance.
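A sketch of the rule set being described; the job name and script are placeholders.

```yaml
build-job:
  stage: build
  script:
    - echo "Building..."
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: never           # skip merge request pipelines entirely
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: never           # skip scheduled pipelines too
    - when: on_success      # default rule: run in every other circumstance
```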
B: Now, when to deploy: when it's configured for manual execution, this job has when: manual, so it waits for someone to click the play button that you can see listed here. It's just a property of the job here; it can also be a property of a rule, by the way. With this particular type of thing, if you've configured it to be manual at the job level, you need to also do the same thing at the rules level, because rules can override the job.
B: And if you join our Slack channel, feel free to ask that in there too, and we'll get you some detail on it and point you to some documentation you can read.
B: Daryl, this recording will be made available; it will be sent to you via email, along with some of the slides. So if you need to drop, please go ahead and do that. You probably signed up for an hour, so no big deal: go ahead and drop, then look for the email, and you can resume by just going to the link for the video and scrolling in for an hour.
B: All right, more rules examples. So this is a case of multiple rules: again, if the CI pipeline source equals merge_request_event... now, notice that there's no when clause here, so when: on_success is the default, even though we didn't put it in there. If we had added a line below this top rule and put a when keyword there, we could put something else, say when: on_failure, if we wanted to use it.
B: So if CI_PIPELINE_SOURCE is a merge_request_event or a schedule, the job is executed, and it'll execute on any pipeline where this criteria is met; but for anything else, because there's no matching rule, it's not going to execute.
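A hedged sketch of that or-chained rule; the job name and script are illustrative.

```yaml
# Runs only for merge request or scheduled pipelines. Any other pipeline
# source has no matching rule, so the job is not created at all.
report-job:
  stage: test
  script:
    - echo "Generating report..."
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event" || $CI_PIPELINE_SOURCE == "schedule"
```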
B: So, more rules examples: when for delaying a job run. Now, this is one case where it might make sense: we've got a Docker build, and we've got to refresh it for some reason. This is a good reason to look at rules, and then use changes at the same time, to see if the Dockerfile has been changed; but there's also the prospect that the Docker image needs to have updated libraries.
B: You know, our script is delineated up here, and in our rules, we're looking to see if some variable matches a substring value (it's up to you whatever that might be), but we're also looking for changes. Remember, changes means something new in the commit: something's been changed in the Dockerfile, or something's been changed in the docker/scripts directory.
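A sketch of that combined rule; the variable name, image name, and paths are assumptions for illustration.

```yaml
docker-rebuild:
  stage: build
  script:
    - docker build -t myapp .      # image name is illustrative
  rules:
    - if: $REBUILD_IMAGE =~ /docker/   # some variable matching a substring
      changes:
        - Dockerfile                   # something changed in the Dockerfile...
        - docker/scripts/**/*          # ...or in the docker/scripts directory
```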
B: So let's take a quick minute and talk about the processing order on variables. This is the precedence for variables, going from highest to lowest.
B: And you can see what these are. CI/CD pipeline trigger variables: so, if you're using the API, you're doing a scheduled job, or you're manually running a pipeline, these are variables that are put into the pipeline at that time. Project-level variables come just below that; these are defined in the project.
B
There
might
be
protective
variables
there
group
flow
variables,
so
you
would
hear
it
from
your
group.
Your
project
lives
underneath
the
group
if
variables
are
defined
at
the
group
level,
they're
going
to
be
inherited
by
the
project.
If
you're
self-hosted,
you
can
do
the
same
thing
at
the
instance
level
too,
and
then
there's
inherited
environment
variables
and
then
it
below
that
is
again
well-defined
job
level
variables
in
the
yaml
defined
Global
variables.
B: So this gives you a basic list of what the precedence is for variables as they're being written in and merged together, to be handed off to your runner or to be processed by rules in GitLab. And by the way, rules in GitLab are processed on the GitLab instance itself; they're not processed by runners. Runners don't get allocated to a job until the job is qualified to run, so just be aware that those variables are being evaluated in two different places.
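The YAML end of that precedence chain can be illustrated with a hypothetical variable; any value supplied at a higher level (manual run, project, or group settings) would override both of these.

```yaml
variables:
  WIDGET: "global-default"   # YAML global variable (lowest of these)

show-widget:
  stage: test
  variables:
    WIDGET: "job-level"      # YAML job-level value overrides the global one
  script:
    - echo "WIDGET is $WIDGET"
```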
B: One is on gitlab.com, if you're using rules to decide whether or not the job qualifies; and then the environment variables are handed off to the job and can be relied upon by your script, if you need them. So that was rules and failures, number three. We've been at this for an hour now; ordinarily a break here would be 10 minutes.
B
We've still got a ways to go in this deck, so let's start again at 13 minutes after the hour; let's just take a quick break.
B
All right, let's get back underway here. Section four, which correlates to number two in the source project instructions, is SAST and artifacts. In this section we're going to talk about configuring SAST for execution in your pipelines, what it takes to get it done, and artifacts and how to deal with those.
B
So after you've fixed your pipeline to run smoothly, you've configured all your needs, and you've got your order of execution set up in the most optimal way that you can, you want to take full advantage of all the features GitLab is offering, like security scanning and artifacts, and maybe someone asks if you can demo this in a pipeline during the next stand-up. Configuring SAST is pretty easy: we simply need to add an include line to our .gitlab-ci.yml file.
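That include line looks like this; Security/SAST.gitlab-ci.yml is the standard template path GitLab ships for SAST:

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml
```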
B
This include line is using the template keyword. Now, what's important to understand about this is that "template" can apply to things that you and your team build, but when used as a keyword here for an include file, it specifically means things that are shipped with GitLab.
B
These GitLab templates are a way to share CI/CD capabilities with other teams in your org; you can also build your own, which we'll talk about in just a minute. They're a way to consume CI/CD capabilities from other teams in your org, and the way that GitLab engineering provides capabilities via templates. So this is built already, ready to go.
B
Templates are always pulled into a GitLab CI/CD pipeline through an include statement, so you have to use that include statement with the template keyword to be able to capture these predefined GitLab templates. The template's jobs are then created in your CI/CD pipeline based on their defined stage and any applicable rules that might be attached to them.
B
So, let's talk about the different kinds of includes. Again, include with the keyword template, this one up here at the upper left, is specifically referencing files that are shipped with GitLab. The include to the right, which I think of as "project" because of the way it uses the project keyword, is delineating something that your team might think of as a template.
B
Maybe you create a template repository of CI/CD jobs that you share with the other teams in your organization. In this case the include is delineating the project and the file that it's going to capture, and it's going to use the default branch, main or master, whatever your team calls that in that upstream repository. So this is a way for you to bank and build CI/CD that applies fairly specifically to your organization and be able to spread it around. Then there's include: local.
B
include: local is exactly what you think it means: the file that's being designated, shown by its path here from the root of the repository, is something that you're going to include because it holds additional content that might make it easier for you to maintain and keep track of the different pipeline files in your project. And then include: remote is a special circumstance.
B
The way to think of this is you're going to use the remote keyword and give it a URL. This URL has to be publicly accessible without authentication. A use case for that might be pulling something from GitLab itself, if you want to do that, but it would definitely need to be a trustworthy source; that's the thing I'm getting at there.
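Putting the four include forms side by side; the project path, file names, and URL here are hypothetical placeholders, while the SAST template path is the one GitLab ships:

```yaml
include:
  # Shipped with GitLab itself
  - template: Security/SAST.gitlab-ci.yml
  # From another project on the same GitLab instance (hypothetical path)
  - project: my-group/ci-templates
    ref: main                      # the default branch, whatever your team calls it
    file: /templates/build.yml
  # A file in this same repository, path given from the repo root
  - local: /ci/extra-jobs.yml
  # A publicly accessible URL, fetched without authentication
  - remote: https://example.com/ci/common.yml
```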
B
Now, options for customizing job behaviors: this is for the templated jobs that ship with GitLab. We can use variables to change the behaviors. In this case we're telling it where to find the analyzers, we're telling it the analyzer versions to use, and we're setting the secret detection excluded paths, so places we don't want to check for secret detection, which in this case is nothing. And then you'll see that in the secrets analyzer we've delineated an image as well, so that we're using a specific analyzer version.
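A sketch of that kind of customization; the values shown are illustrative, but SECURE_ANALYZERS_PREFIX and SECRET_DETECTION_EXCLUDED_PATHS are the documented variable names, and secret_detection is the job name defined by the Secret-Detection template:

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml

variables:
  SECURE_ANALYZERS_PREFIX: "registry.gitlab.com/security-products"  # where analyzer images are pulled from
  SECRET_DETECTION_EXCLUDED_PATHS: ""   # paths to skip for secret detection; empty means scan everything

secret_detection:
  image: "$SECURE_ANALYZERS_PREFIX/secrets:4"   # pin a specific analyzer version (illustrative tag)
```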
B
So if you want to understand the default behavior, you can look in the GitLab docs. Now, I want you to notice these are links, so when you download this deck you'll have access to them. You can look at the documentation to understand how we would suggest you use this SAST template if you're going to include it in your pipelines, but you can also look at the SAST template itself. You can actually see the code and inspect the jobs, and you'll see that it really isn't magic.
B
Now, let's talk about artifacts. You might have a Docker build job that has to build a Docker image; maybe you want to keep that in an artifact for some period of time, although ideally you'd be uploading that to the Docker container registry. If we want to download artifacts, we go to the pipelines page. We can see a list of pipelines that have been run, and if we click on this download, what we're going to get is the option to download an archive file.
B
So if you've got multiple jobs, and those jobs are generating multiple files that are being kept as artifacts by GitLab, you're going to get one big archive file here, and you're going to have to unarchive it to then pick out the files you want to get to. But this is the way to get everything, if you want to do that. If you go to the jobs page, you've got something very similar.
B
You can download an archive from that job. Again, it's going to be an archive file, so whether it's one job with one artifact or ten artifacts, they're all going to be inside this one archive file that you download. You can do that from the specific job page if you hit the download button, but you can also do it from the jobs page, where it lists all the jobs, just by using this little button here.
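Outside the UI, the same archive can also be fetched through the jobs API; this is a sketch that assumes a valid personal access token in GITLAB_TOKEN, and PROJECT_ID and JOB_ID are placeholders you'd fill in:

```shell
# Download a job's artifacts archive via the GitLab API.
# The archive arrives as a single zip file, same as the UI download button.
curl --location \
  --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  --output artifacts.zip \
  "https://gitlab.com/api/v4/projects/$PROJECT_ID/jobs/$JOB_ID/artifacts"
```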
B
All right, so when we create artifacts in a job: this build job is using the keyword artifacts, it's going to delineate a path, which is dist, and then it's going to expire this artifact in one hour. Now, you might want to keep that for a day, or you might want to keep it for two days, but there's an implication here, especially in cases where very, very large artifacts are being created. Maybe you're compiling a C application.
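The job described above would look roughly like this; the job name and build command are placeholders, while artifacts, paths, and expire_in are the real keywords:

```yaml
build-job:
  stage: build
  script:
    - npm run build            # placeholder: whatever produces the dist/ directory
  artifacts:
    paths:
      - dist/
    expire_in: 1 hour          # could be "1 day" or "2 days" instead
```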
B
That's, you know, several hundred megabytes, and in that particular case you have the potential to really saturate your storage, whether you're self-hosted or you're on gitlab.com. On gitlab.com you've got a certain amount of allocated storage you can consume based on your subscription type.
B
But if you go beyond that, you've got to pay extra for the storage. If you're self-hosted, you could potentially be saturating the block-level storage that's attached to GitLab. So using this expire_in is a good idea, being able to expire these artifacts in time if you don't need them. Maybe it's a day, maybe it's two days, and then they expire automatically and get cleaned up.
B
So the hands-on steps for this are number four, SAST and artifacts. We don't have time to walk through it in our workshop today, but just be aware of it. The last thing I want to talk to you about, as quickly as I can here, is transferring the project. So if you get through this project, you've worked it, you've done all this incredible work, you've gone through the process, and you've learned how to do the basic manipulation of your .gitlab-ci.yml file.
B
You might want to transfer that to somewhere else so that you can keep it, because remember that the group we've provisioned you in, this Ultimate namespace at GitLab, is going to be deprovisioned on Friday.
B
So in that scenario we might want to transfer it, but only transfer once you're done: we want you to take full advantage of our runners, our runtime, and the space that we've provisioned you, so that you can get a real good hands-on idea of how GitLab CI/CD works. And hopefully you'll look into the optional steps that we've included in the upstream project, which will let you go into security and compliance; that would be really, really good. Now, some things to realize here: this is transfer project, issue number five.
B
The instructions are fairly easy to follow. We'll also be sending out the appropriate slides that would have gone with those steps if we had presented them in the workshop, but from my experience, I was able to go in there, just follow the directions, and do everything I wanted to do. So, things to realize about this, and we're right at the very end now: when you transfer this project, remember that it's in an Ultimate subscription right now.
B
So if you go into Security and Compliance after transferring to a Premium project, you're going to find that some parts of what you worked on are missing. That's to be expected, but at least you got a chance to get in there and get an understanding of what Ultimate makes available for you, and there are certain subparts of Security and Compliance that will still work in Premium. So just be aware of that.
B
So with that in mind, Taylor, I think I'm going to hand this back to you.
A
Thank you, Steve, and thank you, everyone, for joining us. I appreciate you all giving us some feedback; I put up that feedback poll there, so thank you. As we've mentioned, we'll send out the deck and the recording; look for the invite to the Slack channel as well. We'd love to engage with you there and answer any questions you have. And thank you, Steve, for the presentation.