From YouTube: Hands-On GitLab CI Workshop - AMER
Description
Watch the playback for a hands-on CI workshop, in which you will learn how to build simple GitLab pipelines and work up to more advanced pipeline structures and workflows, including security scanning and compliance enforcement.
So welcome, everybody, to our hands-on workshop. Today we're going to be talking about GitLab CI and introducing you to some fairly advanced topics, and in the process you're going to be getting your own Ultimate-licensed group provisioned for you at gitlab.com. If you don't already have an account at gitlab.com, please take a second to go there and sign up; you're going to need it to provision this group.
So with that in mind, let's see, just a second, I'm looking in the chat here. By the way, we have both chat and Q&A. If you have questions today, please put them in the Q&A; my peers who are attending this meeting along with me will do their very best to address those questions, and I may occasionally see them and be able to answer them as well. The chat is more for general commentary.
We're not going to be covering that specific scenario today, but it's absolutely possible to do, and I've done a pretty fair amount of it. So if you reach out to your account executive, they can engage my team and have whoever is responsible for covering your specific account engage with you and your team to start talking about some options. That's just for awareness. All right, let's go ahead and get this process started.
Okay, good, thank you, Nick. All right, perfect. So again, you're going to be getting your own group provisioned at gitlab.com.
There's a process we're going to have to go through to get that done, and we're going to be talking about that today. You're going to be getting an email tomorrow that will have a link to the slides I'm sharing today, along with some supplemental information that I think is very important and that I would advise you to take time to dive into as you're able. We're also including some optional content, so we're going to cover most of the material we want you to see.
One other thing I want to bring to your attention, and let me just share this with you real quickly: we haven't had time to convert this particular set of slides and the workshop instructions that go with them over to the new navigation yet, and if you're on gitlab.com, you're going to be on the new navigation.
My name is Steve Graham. I'm a customer success engineer on the scale team here at GitLab, and there's at least a probability that, if you reach out to your account executive, I might be the engineer that gets to engage with your team. So just be aware.
This is our agenda for the day. We're going to go through lab setup and set up a simple pipeline. We're going to talk about execution order and the directed acyclic graph, then go through rules and failures. Then we're going to cover adding SAST and artifacts to your pipeline, and in the last bit we're going to talk about transferring your project, if you need to do that, which we'll get to in just a minute.
So welcome to the team. Today you're officially part of a brand new startup that is creating a public leaderboard for the hit new racing game, Tanuki Racing. (We get to leverage the GitLab logo over and over again here.) Your company has recently swapped over to using GitLab for CI/CD, and it's tasked you with learning about the different pipeline capabilities.
So be aware that the group is actually going to get deprovisioned on Monday, July 24th, so you're going to have it for a little more than four days, but that'll give you some time to work through the exercises. If you don't get a chance to follow along today in this hands-on workshop, you can do it as you're able to, and you'll also be getting a link to this recording to review.
We're still getting used to this new chat methodology for webinars. It's not something we used to have, but we needed the ability to share things with you occasionally, and that was the only way we were able to do it. So again, capture everything that constitutes your user ID that you would log in with at GitLab, not including the @ symbol.
So if you're not registered at gitlab.com, take a minute to go there and register for an account, and then you should be able to follow the instructions that we're doing here. And if you get a little bit behind, don't worry about that; you will get a link to this recording in an email tomorrow, so you'll be able to follow through it again.
You're going to land on something that looks like this when you navigate to "my group." And by the way, if for some reason you lose track of this group and you have to go through this process again, it's not going to create a second group for you. If you go through this registration process one more time, it's going to use exactly the same group name.
It'll just let you navigate back to it if you need to. So, Dmitri, something has gone wrong; let's kind of go back through that one more time.
Put in your invitation code, put in your GitLab username, and then when you get to the next page, you should have a group that you can navigate to, or you can just go to "my group." So, Victor, try that one more time, and remember that your username at GitLab is everything except for the @ symbol, that is, everything that follows the @ symbol.
So now that you've got your group set up, and hopefully you do; if you don't, follow along anyway, and you will be getting a copy of these slides and this recording if you want to go through it again. Now we're going to start talking about setting up a simple pipeline.
And then I'm going to navigate back to the previous one. So I've got the sample project, CI/CD Adoption Workshop, and then I've got my test group, CI/CD Adoption Workshop. Now, the next thing that we're going to want to do, and forgive me, this is a little awkward moving back and forth like this.
You go to Settings > General for your project, the one that you just forked, scroll down to the Advanced section, then click on Remove fork relationship. Let me just roll this up a little bit.
The workshop steps are in the original project. Unfortunately, when you fork it, it doesn't carry the issues over with it. But if you go to the issues list, you'll see the complete list of steps that we're going to be working through today, and by the way, there's two optional ones at the top. Step two, manual? Which step were you talking about? Ah, removing the fork relationship.
You have access to this tutorial for a few days, and when the group expires, the group that's being provisioned for you today is going to be gone, along with all the projects underneath it. But you'll have until Monday to get this done. So even if you don't finish in the course of the work we're doing today, you'll still have the next few days to try to get it accomplished, and the instructions are moderately easy to follow.
So you'll need to navigate back to that project again and then go to the issues, and you'll be able to see the list of things that we're going to go through next.
So let's talk about GitLab's pipelines for just a few minutes. We need to set the baseline so that you get used to some of the most basic concepts, because we're going to be going through some fairly advanced topics today, but understanding these building blocks, so to speak, will help you get a better understanding of what you're looking at. What we're looking at here is a very basic pipeline. It only has four jobs in it.
Each job runs independently, and sometimes on different runners. So be aware that if you've got a large runner fleet, and for example at gitlab.com we have an extraordinarily large runner fleet, it's just off-the-charts big, each one of these jobs can run independently. Now, the good thing about that is, if you've got jobs in a common stage, let's say the test stage had 10 jobs in it, all 10 jobs would be eligible to run immediately as soon as that stage starts.
They're going to go in order based on availability of runners; GitLab is going to dispatch them out in the order that they're listed in the stage, but they're all eligible to run as soon as a runner can check in and take on their job. And for whatever it's worth, each stage has to complete before the next one can start. So this build stage has to complete before any of the jobs in test can start, and all of test has to complete before deploy can start.
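As a rough sketch, a `.gitlab-ci.yml` for the kind of four-job pipeline described here might look like the following (job names and echo commands are illustrative, not the workshop project's actual file):

```yaml
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - echo "Compiling the application..."

unit-test-job:
  stage: test
  script:
    - echo "Running unit tests..."

lint-test-job:
  stage: test          # eligible to run in parallel with unit-test-job, runner capacity permitting
  script:
    - echo "Linting code..."

deploy-job:
  stage: deploy        # starts only after every job in the test stage has finished
  script:
    - echo "Deploying application..."
```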
Now, there are some different elements here, and by the way, what you're looking at in the code section there is a job. It's not global pipeline directives; it's just a job. In this particular case, that's delineated by not having a keyword that the `.gitlab-ci.yml` parser recognizes as something it parses independently of a job, and by having the colon that follows it. Now, the script section that you see down here...
...that's literally what's going to be executing at the command line. In this particular case, you can see that this particular job has a `before_script`, which just sets some variables and sets a few things up that need to be done to prepare for the script to run. We also have the ability to do an `after_script`, and that runs in a separate shell after the `before_script` and `script` statements, so just be aware of that.
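A minimal sketch of that job shape, with hypothetical variable names:

```yaml
build-job:
  stage: build
  before_script:
    - export BUILD_DATE=$(date +%F)    # prepare variables before the main script
  script:
    - echo "Building on $BUILD_DATE"
  after_script:
    - echo "Cleanup step"              # runs in a separate shell; a failure here does not fail the job
```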
And that bottom note, however, is important. If something fails in your `before_script`, your job is going to fail, and it's going to stop the pipeline, assuming default settings in all the jobs. The same thing is not true of the `after_script`. Before_scripts are commonly used to load libraries or something like that which you might need to pull into a container to execute the things you need to get done, although there's often a better way to do that.
So just be aware that that's kind of the general delineation point between both of those. Now, GitLab runners are going to run all the jobs you define in a pipeline. Jobs can be tagged so that specific jobs only run on certain runners. A real good example of this: I've worked with a lot of groups that do firmware, and in order to attach to a device, they've got to be running on a machine that's actually attached to the device and has access to it. So they'll create tagged runners just for that, and those runners will have all the libraries preloaded on them, like an Android software development kit or an iOS software development kit, and they'll be attached to that device so they can preload the firmware. That's a real good case for using a tagged runner. And a tag, by the way, is a property of a job.
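A hedged sketch of that firmware scenario; the tag name and script are made up for illustration:

```yaml
firmware-flash:
  stage: test
  tags:
    - firmware-bench     # only runners registered with this tag will pick up the job
  script:
    - ./flash_device.sh  # hypothetical script that talks to the attached device
```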
Okay, so an anonymous attendee asks: does the after_script run if the script portion returns a non-zero exit code? I don't think that it does. I've never tested that specific scenario or watched it to see if it would actually happen, so I don't know for sure. My assumption is that if the script portion returns a non-zero exit code, the job stops; that's my impression right now. Such operations can take any amount of time, but the jobs are typically picked up within about five seconds.
That's assuming there's runner availability. If you've got a runner that's got a concurrency of four and you've got 10 jobs in a stage, it's going to run the top four, and then, as the runners become available again, it's going to pick up the next jobs in the stage. But if you've got a runner fleet, that's not going to be an issue for you as a general goal.
So where do I see these GitLab runners? If you own a project and you go to Settings > CI/CD > Runners, you'll be able to see the runners there, and that's actually where you enable shared runners if you need to do that as well. Sometimes you'll have projects that use specific runners that are only registered to run jobs for your project, and other times...
So let's take a quick look at that real quick.
All right, so the next thing we're going to talk about is job execution order and the directed acyclic graph, and you may occasionally hear me call this "directed acrylic graph." I have no idea why I do that; it just comes out of my mouth once in a while. But it's a directed acyclic graph, which is a dependency graph that can display the jobs for you.
You decide to show off your skills, and you create a pipeline with different execution orders, as well as a large directed acyclic graph, to show what's really possible. Jobs in the next stage, in this case the unit test, will start after all jobs in the previous stage have completed successfully; we spoke about that previously.
Jobs in the test stage execute after all jobs in the build stage have completed, but code quality doesn't need to rely on the build. It's looking at the code directly; the code doesn't have to be in any kind of built form for it to do its job. So code quality can run out of normal stage order if we wanted it to, but we want to keep it in the test stage because categorically it belongs in our list of tests.
Now, the way we do this, and you can see the code quality job listed in the code section there, is with the `needs` directive. This is a property of a job, and what you can see following the `needs:` is an empty array; that's an empty YAML array.
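So the code quality job might look something like this sketch (the script body is illustrative):

```yaml
code_quality:
  stage: test
  needs: []            # empty array: no dependencies, so it can start immediately,
                       # out of normal stage order
  script:
    - echo "Scanning source directly; no build output required"
```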
GitLab's got some nice ways of displaying this. You can see it down here: if we're editing the `.gitlab-ci.yml` and we click over here on this Visualize tab, we can actually see the dependencies between the jobs, and here you can see deploy-a is dependent upon test-a, which is dependent upon build-a. A real good example of this might be an iOS and an Android app that live together in the same repository.
There is a retry option that you can use, although subsequently you'll probably want to commit code to fix whatever the problem is; but there is a way to make a job not fail and stop the pipeline, and we'll be covering that in just a few minutes. And then: is it possible to have a cycle, where if one job fails, I run another job to fix it and go back to the job that initially failed, or is it strictly...? I'm not completely understanding that one, so: pipelines are going to run for every single commit.
Let's assume that you're setting your pipelines to run on merge requests. Everything's ready in a feature branch that you're working on, and you submit a merge request to go back to main or master. Every single time that you put a new commit in that feature branch, a new pipeline is going to run in that merge request, and that's an opportunity for you to test out something new, try to get a fix in, things along those lines.
Now, it's possible to build stageless pipelines. I'm going to be very frank with you and tell you that I don't prefer to do this, but if you wanted to, you actually could. Remember, the vertical columns that we had listed in our pipeline view are the stages. If you created every single job such that it has `needs` declared, you could actually run stageless if you wanted to.
The `needs` keyword can now refer to a job in the same stage, so if you've got two jobs in the same stage and one has to run before the other, you can actually do that; they used to have to be in different stages. So why is this useful? Again, I prefer not to do this, but stageless pipelines make your pipeline more efficient and implicitly configure the execution order.
The idea is that it's faster to write and it's a more efficient pipeline with less cycle time. Now, I'll be very frank with you: that's not been my experience with it, but these points are not something that I've explicitly tested either. And then how to navigate: you go to the CI/CD editor and you put in the `needs` keyword, and this is available in all tiers after 14.2.
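A small sketch of a same-stage `needs` relationship, with hypothetical job names:

```yaml
test-a:
  stage: test
  script:
    - echo "runs first"

test-b:
  stage: test
  needs: ["test-a"]    # same-stage dependency (GitLab 14.2+): test-b waits for test-a
  script:
    - echo "runs second"
```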
Jess, yes, you'll be getting the slides, and you'll also be getting the recording in an email tomorrow.
By the way, notice that we've got `stages` and then there's a dash below it; that is an array of stages in YAML, so every single dash is a new array element. Now, the next thing we're going to do is grab a copy of a massive number of jobs and put them in there. They all have their relationships predefined already.
Anonymous, that's up to you. You might want to build independent pipeline workflows: you might have a workflow for an MR, another for a direct commit to master or main, and you might have one that looks for something else.
As you come back to the team and show them your new pipeline, you notice that one of your test jobs is failing. Now, remember that if a job fails, the default in GitLab for a job property called `allow_failure` is false; jobs are not allowed to fail. If that default is left in place and you don't override it, a failure is going to stop the pipeline. The pipeline stops executing as soon as it hits a failed job.
So, Dmitri, it's not in your... let me capture that one more time. Dmitri, I'm going to put this in the chat again: it's not in your fork project. The fork project doesn't have the issues in it that contain the instructions. You'll have to get your instructions from the link that I just put into our chat. If you go to that project and then look at the issues, you'll see our instructions there. So notice that this particular job is allowed to fail.
If test-b had not been allowed to fail and it failed, we would not be able to deploy. And it might be that you look in there, and whatever that test is executing on, you're not worried about it; it's not going to affect your deployment, so you want to be able to move on. This is a common way to do tests, by the way: tests are often set up with allow_failure.
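A sketch of how test-b might be set up (the `exit 1` just forces a failure for illustration):

```yaml
test-b:
  stage: test
  allow_failure: true  # default is false; true lets the pipeline continue past a failure
  script:
    - exit 1           # job shows as failed-but-allowed; the deploy stage still runs
```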
So when is a job created in a pipeline? A job is included in a pipeline if it has a rule that evaluates to true with a `when` clause of on_success, delayed, or always, and we'll be covering some of these operators as we move through here. It's also included if no rules are defined but the job has a `when` property of on_success, delayed, or always; or if no rules are defined and no `when` clause is specified, because `when: on_success` is the default for GitLab.
So let's talk about rules a little bit. Rules can compare against predefined environment variables, and we've got a very long list of those, because it's just a monster. In this particular case, it's looking to see if the pipeline source was "web." Web means that somebody went to the pipelines page in their project, clicked the Run pipeline button on the upper right, and ran a pipeline from there. So that's telling you that the source is not from a commit.
`changes` means that you're going to delineate a directory, a file, or a recursive directory, and you're going to be looking for changes coming into the commit that are in that specific section. If those changes exist, then that's essentially a match, and that makes the rule positive. The same thing is true of `exists`: you can have very generic pipeline code that just looks for a Dockerfile, and if a Dockerfile is...
...there, then it's going to build a container. And then the operators that you can use in a rule, if you're using that `if` clause right there on the left: equals-equals is what you think it is, not-equals is what you think it is, and then notice that there's an equals-tilde and a not-tilde; those are regular expressions.
So maybe you want to test whether the pipeline source is web, and you also want to test that you're running in the main or master branch. You can do that and combine those two tests with the && (and) operator. You can also do something similar with the or operator, the two pipes.
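To sketch those operators, something like the following (branch names and conditions are illustrative):

```yaml
build-container:
  script:
    - echo "building"
  rules:
    # && combines conditions: pipeline started from the web UI AND on main
    - if: '$CI_PIPELINE_SOURCE == "web" && $CI_COMMIT_BRANCH == "main"'
    # =~ / !~ match against a regular expression
    - if: '$CI_COMMIT_BRANCH =~ /^release\//'
```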
`when`, of course, was one that we talked about; it could be on_success, and it's got several other operators that we can talk about here on the right. Another job attribute is `allow_failure`. It defaults to false; allow_failure is false. You can make it true if you want to let a job fail purposely, so that it doesn't stop your pipelines. But you also have the ability to use a `start_in` attribute; start_in is particularly used for delayed jobs.
So if you don't have a rule and you just have `start_in`, and it says three hours, then as soon as that job becomes eligible to run, it's going to count down three hours and then start executing. That can be minutes, it can be days, whatever it needs to be. And then some of the options for `when`: there's always, so if this rule matches, when is always; and by the way, the default is on_success.
On_success is the default for any `when` clause, and if you don't put a `when` clause into your rule, and you don't put it into the job itself as an independent property or into its rules, then the default is going to be on_success, which just means all the previous jobs executed successfully or were allowed to fail, and so now this job is eligible to run. On_failure is a special circumstance: if a previous job failed, you might have a job...
...that you want to run to just do some checks and tests, to make sure that you didn't have any other problems in the build or in the code. And then manual is a case where you can actually set a job to require a manual step; deploy jobs are commonly, very frequently, manual, so that somebody has to actually click on that job to run it. And by the way, if you're using protected branches, or protected environments in the case of deploy jobs, you can actually delineate who's allowed to run that manual job...
...in a protected branch or a protected environment. And delayed is used with this `start_in` attribute here: if a job is `when: delayed`, you need to use `start_in` to tell it how long to wait, minutes, hours, whatever that's got to be. And then the one I didn't talk about was never. Never is for the case when you create a rule that matches a condition where you never want to run a pipeline, or you never want this job to run; that's an example of a negative rule.
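A sketch of a delayed job; the three-hour value mirrors the earlier example, and the job name is made up:

```yaml
rollout-job:
  stage: deploy
  script:
    - echo "Delayed rollout"
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: delayed
      start_in: 3 hours   # required whenever when: delayed is used
```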
So it's going to match, and when you've got multiple rules in a job, GitLab is going to go through them sequentially from top to bottom. First match wins, so that negated rule would need to be the very top one, or one of the top ones; that way, if that particular rule hits, the job would not run.
So this `when` clause right here could be put at the same indentation level as `rules` itself, and in that case it's just a default position for that job; any rules that have a `when` are going to override it. But you can see here we've got two rules: if it's a merge request, and if it's a scheduled job. And by the way, you can do scheduled pipelines in your projects; maintainers and owners can go in and set up scheduled pipelines that run every 24 hours.
If you need that: in this particular job, it says never run for either one of those, merge requests or scheduled. And then it's got a default standalone rule that's just a bare `when` clause, which is kind of interesting; there's no rule associated with it, there's no `if`, it's just a standalone `when`. It's saying on_success, which just means that if everything else in the pipeline was successful, and nothing failed, or failed but was allowed to fail, then it's going to go ahead and run.
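That job's rules might be sketched like this (the job name is made up):

```yaml
housekeeping-job:
  script:
    - echo "Runs for everything except MRs and schedules"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: never        # first match wins: skip on merge requests
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: never        # ...and on scheduled pipelines
    - when: on_success   # standalone fallback for every other case
```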
All right, so we get multiple rules: CI_PIPELINE_SOURCE is merge_request_event, CI_PIPELINE_SOURCE is schedule. By the way, CI_PIPELINE_SOURCE is a very popular variable to compare against, so that you understand: is this a commit? Is this a merge request event? Is it a scheduled job? Is it coming in via the API, or is it being executed from the web form? And there are quite a few other things defined there as well.
So as long as it's one of those two things, this job is going to run, but it's not going to run for any other conditions.
So we've kind of put a few things together there. If this rule evaluates to true and `when` has any value except for never, the job is included in the pipeline. If used with `when: delayed`, `start_in` is also required; so if you use this clause here, then you've got to have this clause declared as well.
So, multiple rules. I want you to notice that these two, again, are negated rules. If it's a merge request event, or if it's a schedule, this job should not run in that circumstance at all, and the default for any other circumstance is just `when: on_success`; it doesn't even have an `if` clause, it's just unconditional. So this job will execute in any pipeline where CI_PIPELINE_SOURCE is not set to merge_request_event or schedule, and then the `when: on_success` tells the job to execute assuming the previous jobs succeeded.
And this one is looking for changes in a specific file called Dockerfile; it's looking for changes in a directory, all the files inside a directory under docker and scripts; and then it's a manual job.
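A sketch of that job; the directory path is illustrative, since the exact path isn't shown here:

```yaml
docker-build:
  script:
    - docker build -t my-image .
  rules:
    - changes:
        - Dockerfile            # match if this file changed in the commit
        - docker/scripts/**/*   # or any file under this directory
      when: manual              # even on a match, someone has to click to run it
```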
Now let's talk real quickly about variables and processing order, and we're going to have to go through it really quickly here; we're getting a little bit behind. There are several different places where you can define variables. GitLab predefines environment variables; it's an extraordinarily long list of them, and you can search for "gitlab predefined variables" in Google and you'll find the page that I'm talking about. It has the standard ones.
So that's the predefined environment variables down here at the bottom, and then there are deployment variables; a variable defined in a deployment job itself would be a real good example. Then we have global variables and job-level variables. You can define variables in your `.gitlab-ci.yml` in the global section so that they apply to all jobs, or you can define them in a specific job.
You'd do that if only one job really needs to know about that variable. If you're self-hosted, inherited environment variables would come in from the environment itself, and you can define variables in your project settings that have environment-based scope if you want to do that. There are also instance-level variables: if you're self-hosted and one of your admins wants to define variables at the instance level, they can do it, and those are inherited by everything, all groups and all projects. You can also define variables at the group level.
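A sketch of global versus job-level variables, with made-up names:

```yaml
variables:
  DEPLOY_REGION: "us-east-1"   # global: visible to every job in the pipeline

unit-test:
  variables:
    DEPLOY_REGION: "local"     # job-level: overrides the global value for this job only
  script:
    - echo "Testing against $DEPLOY_REGION"
```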
You've got to be an owner or an admin, but you could do that. And there are CI/CD pipeline trigger variables, scheduled pipeline variables, and manual pipeline run variables. If you go to the Run pipeline page, or maybe a scheduled pipeline page, you're going to get the opportunity to put in variables that you want to define and give them a value. It's also possible to pre-populate what's shown there, so that you've got variables established with default values that can be overridden if somebody needs to do that.
So just know, as we go through this list from nine up to one: one wins, and nine is the lowest priority, if we get conflicting variables defined.
Ricky, that's not included in this project beforehand, but if you follow the instructions you'll get there, and you'll have the exact recommended YAML job pipeline. So notice that code quality already failed. Remember that we put in an `exit 1`, and that job, in this case, has been allowed to fail, because we also set it to be allowed to fail through `allow_failure: true`.
All right, let's take a break for a few minutes. I'll tell you what, let's see, it's just about eight minutes after the hour, and we're going to end up running a little bit long today, which I apologize for, but I'll make it as quick and concise as I can. Let's take a break; we'll come back at 18 after the hour. I'll be right back, and we'll see you in just a few minutes.
Okay, let's get back underway. And again, I apologize; we're going to run just a few minutes over today, but I'm going to go as quick as I can for what's left. Ricky Bryant: yes, it is possible to do that. I don't have any examples of it in this workshop. What Ricky's asking about is: is it possible to define a global YAML file that all project YAML files inherit from?
The answer is yes, and you can do it two ways. One is by inclusions, where each project creates its own `.gitlab-ci.yml` file and defines include files that live in a separate project; there are examples of that in our documentation. But it's also possible for projects not to provide a `.gitlab-ci.yml` file at all. This is in the project settings for CI/CD: you can actually delineate a project and a file, using special syntax in the default CI/CD configuration file name field; there's a place where you can delineate the file name.
A
You
don't
have
to
use
Dot,
gitlab,
dashca.ymail
and
using
that
same
field,
it's
possible
to
Define.
If
you
go
to
that,
if
you're
a
maintainer
or
an
owner
of
a
project,
you
can
go
in
and
look
at
that
under
your
settings,
CI
and
CD,
and
you
can
actually
see
that
there's
a
link
there.
That
will
take
you
to
our
documentation
and
show
you
how
to
use
alternative
project
syntax.
So you can actually have template projects with CI/CD files in them, and it's possible to use those as default CI/CD files for other projects. The only place I would recommend caution on that is if you've got Ultimate and you want to get into compliance frameworks; that complicates the use of compliance frameworks pretty dramatically.
So after you've fixed your pipeline and it's running smoothly again, your exec stops by, checking on the progress. They want to make sure you're taking full advantage of all the features GitLab is offering, like security scanning and artifacts, and they ask if you can add SAST to the pipeline by the next standup. It's possible, and there's an example of including the template file in the slide that we're looking at right now.
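An include section along those lines might be sketched like this; the first entry uses a template shipped with GitLab, and the second points at a hypothetical separate project:

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml   # GitLab-shipped template
  - project: my-group/ci-templates          # hypothetical shared template project
    file: /templates/deploy.yml
```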
So again, what is this template? And to your point, Ricky, because I think it's relevant here: a lot of teams like to build their own template repositories, but in this particular case the keyword `template` means it's shipped with GitLab. It is possible, though, to use project and file delineations in an include to point to a different project.
You just have to be careful about whoever's running the pipeline. Remember, pipelines run with the mask of the user who triggers the pipeline. So maybe they do a commit, maybe they submit a merge request, maybe they go to the pipelines page and run the pipeline manually; they're always going to execute with the permissions of the person who runs the pipeline. Whoever triggers that pipeline, the pipeline runs with their rights mask. That means all the developers, and all the people who have the ability to trigger a pipeline in your project, have got to have read access to the repository that's got these independent files in it. That can be a public project.
A
You know, if you're self-hosted, it could be an internal project, if those are allowed, because internal projects are only visible to people who are logged into your self-hosted GitLab. We don't allow that on gitlab.com.
A
On gitlab.com they have to be either public or private, but internal is an option with self-hosted. That would make the files essentially public as long as people are logged in, and then you don't have to worry about who has access to them. The other option would be to invite anybody who might have to execute those pipeline files and read them; invite them to the template projects that you're building.
A
So what is a template? It's a way to share CI/CD capabilities with other teams in your org. It's a way to consume CI/CD capabilities from other teams in your org, and it's the way that GitLab engineering provides capability via templates. You know, templates are zero percent magical, to the point
A
that's written here on this slide: they're simply GitLab CI/CD YAML files. And for those of you who may not be aware, GitLab is open core. GitLab has a Community Edition that's open source, and then we have our Premium and Ultimate versions, which are open core. Open core just means that you can go read the code anytime you want to.
A
Templates are always pulled into a CI/CD pipeline through an include statement in the project's .gitlab-ci.yml file, and template jobs are created in your CI/CD pipeline based on their defined stage and any applicable rules that might be in the template.
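As a minimal sketch of that mechanism: including the GitLab-shipped Secret Detection template creates its job in the test stage by default, alongside jobs you define yourself (the build-app job and its script here are placeholders):

```yaml
stages:
  - build
  - test

include:
  # Shipped with GitLab; its job lands in the test stage by default.
  - template: Security/Secret-Detection.gitlab-ci.yml

build-app:
  stage: build
  script:
    - echo "build here"
```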
A
So let me look at your questions real quick. "How do you use the GitLab environment variables section for different environments?" You create variables in the project settings itself.
A
So when you go into project settings and create variables there, you have the ability to limit their scope to specific environments, if you want to do that, so that those variables are only available there. An example of this might be a protected variable that has credentials in it that allow a job to access some external service.
A
But, you know, those might only be available in protected environments.
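A minimal sketch of that pattern, assuming a variable named DEPLOY_TOKEN has been created in Settings > CI/CD > Variables and scoped to the production environment; the job name and deploy script are hypothetical:

```yaml
deploy-production:
  stage: deploy
  environment: production   # only variables scoped to "production" are injected here
  script:
    - ./deploy.sh "$DEPLOY_TOKEN"   # credential comes from settings, never from the YAML
```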
A
And yes, we do have... I'm sorry, I don't have the links right here in front of me and I'm not able to go chase them down at this minute, but we do have examples and tutorials on how to deploy Docker images from GitLab to AWS EC2 or Fargate, and both of those are out there, so just be aware that those are there. And then Christian says, about SAST pipelines, that he's noticed that different scanners run as different users in their own machines.
A
Sometimes it's root, sometimes it's node. Is there a way to specify the username for these tests, i.e., always be root? The answer is: it depends on the runner executor. Docker executors run as whatever user the image defines, but if you're using a shell-based runner, jobs are going to run as the user called gitlab-runner, and, you know, switching those is going to be dependent upon how you've got things set up in either the Docker container or the runner configuration.
A
Now, let's talk about the different kinds of includes. This example here, include: template. Again, that means it's shipped with GitLab. So this specific code quality test that you can include here is one of the ones that ships with GitLab, and that's not going to be something that you create in a project that you host.
A
You know, template files that you host are going to be more like this include, which I would call include: project. As you can see, this include statement delineates a project as a path from the root of your instance of GitLab.
A
In this case, you know, my-group/my-project, and then a specific file that we want to be able to run there. And you can have multiple include files there, so it's not like you have to have just one. You can have multiple, and they can be included individually as needed. And then there's include: local. A real good use case for this: I frequently write pipeline files with independent workflows, and I'll create an independent pipeline file for every single workflow
A
I want to do, and I put these in a pipeline-stuff directory that's right under the root of my project. So this is the process I would use here, this include: local, to be able to pull in those files as needed. And also be aware that all of these include statements have the ability to use rules too, so you can actually include files conditionally, but we're not going to be covering that today. Just know that if you want to search our documentation for that, it's possible to do.
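Putting the three flavors just described side by side, as a sketch; the project path and the local file names are placeholders:

```yaml
include:
  # Shipped with GitLab:
  - template: Jobs/Code-Quality.gitlab-ci.yml
  # From another project on the same instance
  # (readable to whoever triggers the pipeline):
  - project: my-group/my-project
    file:
      - /templates/build.yml
      - /templates/deploy.yml
  # From this repository, e.g. one file per workflow:
  - local: /pipeline-stuff/release-workflow.yml
```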
A
B
And I...
A
I think so. We've got some options for customizing job behaviors. Template jobs can be extended, and technically this is done with key/value pairs, via variables, in GitLab. You can see that in this Secret Detection template, a template for secret detection that is available at all subscription levels of GitLab.
A
You can see that it declares variables right at the very beginning, so it's going to set up the behavior of this particular analysis using those variables. So environment variables can be used to change these behaviors, and, you know, you can see that we're including a specific image in this one.
A
If you want to understand what variables are available to override the behavior of the SAST testing itself, you can also look at the SAST template itself and see how the job is defined. There's a link for that as well, and it goes to the open core repository where this is defined, so you can see the Docker-image-related CI/CD variables that allow you to set things up for that, and you can see it established here as well.
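A sketch of overriding a shipped template's variables from your own .gitlab-ci.yml; the variable names below are real SAST settings, but treat the values as placeholders for your own project:

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml

variables:
  SEARCH_MAX_DEPTH: "8"                      # how deep analyzers look for source files
  SAST_EXCLUDED_PATHS: "spec, test, tests"   # paths the scanners should skip
```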
A
So we're going to talk for a couple of minutes about this. By default, SAST will use pattern matching, so just be aware of that: when you include SAST, the SAST job is going to scan for, and use pattern matching to try to identify, what language you're using. We use a whole host of open source scanners in this SAST category, and we'll pull in the ones that match your particular programming language,
A
if they're available, to come in and run against your code base. You know, it's possible to avoid having to do a broad scan by using SAST_EXCLUDED_ANALYZERS, where you can actually list the ones you don't want to use. You can set the exact scanners to exclude by defining this variable in .gitlab-ci.yml, and you can do it in the SAST job itself.
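For example, a sketch of excluding analyzers; SAST_EXCLUDED_ANALYZERS is a real variable, but the analyzer names here are just illustrative:

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml

sast:
  variables:
    # Could also be set as a top-level (global) variable instead.
    SAST_EXCLUDED_ANALYZERS: "eslint, brakeman"
```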
A
So just be aware that, once that job has been defined, if you write that job name out again, like you're going to declare it as a new job, it's going to assume all the properties that were defined when the included file came in, and then you'll have the opportunity to override them if you want to do that.
A
Let's talk for a couple of minutes about artifacts. So it's not uncommon for a job that you run, maybe a build job, to define artifacts, and for awareness, all the security scanners that we use define artifacts: the reports that those security scanners keep. And with that in mind, you know, it's important to know how to get to these. So if you're on the pipelines page, you can click on this link down here that looks like a download link, and that will download all the jobs' artifacts, all archived up into one file.
A
If you go to the jobs page, you can download any individual job's files, and again, there could be multiple: there might be many, many artifacts defined by a single job, and they're going to be archived into a single file. And if you go to a job-specific page, so you click on one of those jobs...
A
This is one other important concept, especially if you're building very, very large artifacts, like build artifacts. You know, whether you're self-hosted or on gitlab.com, storage comes at a cost, and these things accrue over time, and going back and deleting them manually is really a pain to do. So a real good best practice is to set this expire_in. You can see the artifacts keyword here being used in this build-app job.
A
So it's not only naming a path, it's also nominating, again, expire_in, and, you know, there are several different values that you could put in here if you want to: it could be days, it could be hours, it could be minutes if appropriate. And then GitLab will take care of deleting those artifacts automatically when the time comes, and that way you don't have to go after them and start manually deleting them like crazy.
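A sketch of the build-app pattern being described; the script, artifact path, and duration are placeholders to adapt:

```yaml
build-app:
  stage: build
  script:
    - make build
  artifacts:
    paths:
      - dist/
    expire_in: 3 days   # also accepts values like "2 hours" or "30 minutes"
```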
A
Now, if we click on full configuration, we're going to see everything that's being included in that SAST job. You can see the SAST job itself, because this view is going to take all the include files and merge them into a unified YAML object, even though that's not what's in your .gitlab-ci.yml file. So you can at least get a look at it here and get a sense of understanding how it works.
A
And you can see the semgrep SAST came in with the SAST scanner, and so did the...
B
A
So we're including a SAST job. It's coming in even though you don't see it in the stages in the pipeline, because the SAST job just ends up being extended by other jobs that are in the SAST scanner. And because it is being used to extend and build other jobs, what we put into the SAST job will then apply to those jobs that get pulled in, which, remember, were the nodejs-scan and semgrep testing jobs.
A
So notice that build-app is running right now; we already did code quality and unit tests, and we won't wait for it to run. But notice that these two SAST scanners are now in the security column here, or stage, and that they didn't have to wait for the build-app to complete either, because we put that needs into the SAST job that these extend themselves from.
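The trick just described can be sketched like this: giving the template's base sast job an empty needs list means the analyzer jobs that extend it don't wait on earlier stages:

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml

sast:
  needs: []   # analyzer jobs extending `sast` inherit this and start immediately
```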
A
Now, if you want to transfer this project out, you can. You can transfer it to your personal namespace, or you can transfer it to another, you know, group-licensed namespace at GitLab, either of those, and the instructions are in here. Just be aware that if you're not running Ultimate, you could potentially lose some features. However, nothing that we've touched upon today is an Ultimate feature. Everything here is included.
A
Everything that we've covered today is included in Premium, and it even reaches down into our free tier. Some things I want to bring to your attention real quickly: you know, you're going to have this provisioned group until Monday. We typically provision these groups that you get to own for just a couple of days; we're giving the extra time so you can go through this optional security and compliance exercise. If you want to go through that, that'll give you a good sense of how our security scanners work.
A
Sorry, the project that you're going to fork, just like we did here: that project has instructions in it as well, so it's just the same. And then the last thing I want to tell you about: if you are, or somebody on your team is, someone who has to support other projects or team members with GitLab CI/CD, and you've got to build these multiple independent workflows, or maybe you're populating a template repository that other teams can rely on and lean into, this optional step number
A
seven here is an example project you can use, just to look at the way it's structured. It's not set up to be an assignment like these other two are, but this is a project you can go to and just review, to kind of get a sense of how the rules are working, how the rules on include files are working, and, you know, get a real good illustration of multiple independent workflows.
A
This is an example... I'm sorry, this is the project for optional step number seven, and, as you can see, there are multiple independent workflows for different conditions here: merge requests going into main, and merge commits going into main, so the merge request has been approved and now it's coming into main.
A
These green squares mean that these workflows have the potential to go to production, and they have production and release jobs in them; there are lots of quick code scan tests. So this is a rule that you can look into if you want to be able to find a way to scan commit messages, look for something specific there, and potentially trigger a job based on that, plus workflows for API and web triggers and releases to main.
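A sketch of the "multiple independent workflows" idea, driven by rules on include statements; the file names and the commit-message token are made up:

```yaml
include:
  # Merge request workflow:
  - local: /pipeline-stuff/mr-workflow.yml
    rules:
      - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  # Merge commits landing on main:
  - local: /pipeline-stuff/main-workflow.yml
    rules:
      - if: $CI_COMMIT_BRANCH == "main" && $CI_PIPELINE_SOURCE == "push"
  # Commit-message trigger, as mentioned above:
  - local: /pipeline-stuff/full-scan.yml
    rules:
      - if: $CI_COMMIT_MESSAGE =~ /\[full-scan\]/
```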
A
Okay, folks, with my humblest apologies for running us over today, we're, you know, 15 minutes over now. I want to thank everybody for taking the time to join us today. It looks like we were able to answer most of the questions, and so again, if you have any more questions on this, feel free to reach out to your account executive; they can request an engagement to engage a member of my team to come in and consult directly with your team.
A
Anonymous, that question is going to be outside the scope of our discussions today; I'd have to do a little research to be able to answer that one. So, unfortunately, please do request an engagement with a customer success engineer through your account executive.
A
All right, everybody, I appreciate your time today, and I really do apologize for running us over, but I hope that you'll be able to take advantage of the time you've got until Monday to dive into this assignment that we did today, and then the optional steps, if you have time to do those as well. So with that, I wish you all a good day, and we hope to see you soon.