Description
Overview of GitLab CI architecture and key concepts, presented by the Customer Success team for new GitLab customers.
* GitLab CI/CD Overview
* GitLab Runners & Executor Types
* Anatomy of a CI Job
* CI YAML Overview & Capabilities
Patrick: All right, welcome everybody. Thank you for joining our GitLab CI new customer orientation webinar. My name is Patrick Carlin, and I haven't had a haircut since February. I'm a manager of Technical Account Managers located out of Portland, Oregon, and I've been with GitLab a little over two years. Our webinar today is intended to be an overview and an orientation of GitLab CI. I will be talking about the architecture, how GitLab CI works, and some of the key concepts that you'll need to know to get started with GitLab CI.
This is really intended for an audience of customers: existing GitLab customers who are new to GitLab CI, or anybody who just wants a refresher. Maybe they are using GitLab CI today, but they want to go back over the fundamentals and make sure they understand some of the different options you have for getting started.
Our webinar today is being recorded; we'll share the recording and the slide deck after the event. Thanks, everybody, for taking time out of your schedule today. We understand everybody's schedule has been a little weird and disrupted in many different ways, so thank you for taking the time to join us. We hope this helps everyone get up to speed quicker with GitLab CI and be more productive in your day-to-day work as a result.
We'll look at what's going on under the hood when your job is executing, so you know how we process your instructions, and then we'll go through an overview of the GitLab CI YAML configuration language, giving you an overview of the capabilities: what your different options are for controlling the flow within your jobs, and essentially doing everything from build to test to deployment all in one pipeline.
Now for introductions to the team. We've got Martin Bromer, a Technical Account Manager located in Leipzig, Germany (I hope I pronounced that correctly); Stefan, a Technical Account Manager from just outside San Antonio, Texas; Brett, over here on the west coast out of Los Angeles; and Chester, out of New Jersey. They'll each be presenting a different area of this webinar.
Martin: Hi, I'm Martin, a Technical Account Manager at GitLab, and I'm going to give you a high-level overview of GitLab CI/CD. So, to start: what is CI/CD? What does it mean? First there is continuous integration, or CI. That is a development practice of continuously integrating code into a repository, where each commit is verified by automated building and testing.
That allows us to detect errors very quickly and locate them easily and early in the development process. Then there is CD, which can mean continuous delivery or continuous deployment. Continuous delivery stands for making sure your code base is always production ready, but the deployment itself usually remains a manual step, or a business decision if you will. That can mean that deployment to test and staging environments is automated, but the deployment to production is still a manual step.
The benefits of using GitLab CI: first, builds and tests are version controlled. Everything is controlled by a .gitlab-ci.yml file that contains your test and build scripts, and that ensures that every branch gets the builds and tests it needs. Build artifacts and test results, whether the binaries you build, other build artifacts, or the test results themselves, can all be stored and explored in GitLab directly. You have Docker support, so we support custom Docker images, and services can be spun up as part of testing, like databases.
B
Furthermore,
there
are
lot
multi-language,
so
the
build
scripts
are
command
line
driven,
and
that
means
you
can
work
with
any
programming
language
that
you
like
that
you're
using
there
is
real-time
logging.
So
in
the
merge
requests
the
link-
and
you
always
take
you
directly
to
the
current
log
of
the
builds
and
everything
is
in
one
application.
So
there
is
no
integrations
to
maintain.
There are no extra license costs, and there's no switching back and forth between applications for the developers, keeping everything nice and efficient. The components of a GitLab CI/CD pipeline are, first, jobs: these are basically the scripts that perform the tasks, for example npm test or npm install. Then there are stages: stages are collections of jobs, and the jobs inside a stage run in parallel. For example, there could be a build stage, a test stage, or a deploy stage. Then there's the pipeline, which is the set of jobs, optionally organized into stages. There are also environments, which describe where to deploy: that can be a test environment, a review environment, or even a production environment.
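As a rough sketch of how those pieces fit together, a minimal .gitlab-ci.yml along these lines defines three stages, one job per stage, and an environment on the deploy job (the npm commands and the deploy.sh script are placeholders, not part of the talk):

```yaml
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - npm install        # a job: a script that performs a task

test-job:
  stage: test
  script:
    - npm test           # jobs in the same stage run in parallel

deploy-job:
  stage: deploy
  script:
    - ./deploy.sh        # placeholder deploy script
  environment:
    name: production     # environment: where this job deploys to
```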
And finally, there is the GitLab Runner. The GitLab Runner is a separate application that executes the CI jobs, and you can deploy it anywhere. Stefan will tell us more about the runner later in the webinar. As for the high-level architecture, what does it look like? We start with the code in the git repository, and that includes the .gitlab-ci.yml file that defines the pipeline configuration. Now, if you commit new code to the repository, what happens is that a GitLab CI pipeline is started automatically. It does the building of the application, and it also executes all the tests that you have defined: unit tests, integration tests. The artifacts that are created as a result of the build are then also stored in GitLab; we call that the package step. As the CD part of the pipeline, the build can then be deployed to a review environment, a staging environment, or a production environment.
What does it look like in GitLab? GitLab shows you these nice pipeline graphs, and these pipeline graphs show you how all the jobs you've defined are executed in the stages. Here we have a build stage, a test stage, a staging stage, and canary and production. These stages run serially, one after the other, and inside each stage the jobs are executed in parallel. For example, here, after the build stage has successfully executed, these test jobs would run in parallel to each other.
If one job in a stage fails, the next stage is not executed by default, unless you specify otherwise. We can also see that these jobs have executed successfully, with a green checkmark. We could rerun them, for whatever reason, by clicking these retry arrows, and we also have two manual jobs defined here. The deployment to a canary environment in this case would be a manual job: it would be executed when you press this play button.
Everything is controlled by this .gitlab-ci.yml file, as mentioned, and that is configuration as code. It's a YAML file that resides in the root of the repository by default, it's version controlled along with the rest of your code, and it contains all the definitions for the jobs, the stages, and the environments.
It has a very rich and very powerful syntax, which you'll also learn more about in this webinar. For example, it controls which Docker image is used, what services are spun up, the variables that are put in, and scripts that are executed before and after the stages; you can define caching, you can define artifact locations, and there are also rules, complex rules, that you can define to control the flow of your jobs.
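For illustration, a hypothetical job exercising several of those keywords might look like the following sketch; the image tags, variable, and paths are example values, not prescriptions:

```yaml
image: node:14            # Docker image the jobs run in

services:
  - postgres:12           # service container spun up alongside the job

variables:
  NODE_ENV: test

before_script:
  - npm ci                # runs before each job's own script

cache:
  paths:
    - node_modules/       # cached between pipeline runs

test-job:
  script:
    - npm test
  artifacts:
    paths:
      - coverage/         # artifact location stored after the job
```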
So that's it for the overview, and I'm handing over to Stefan, who's talking about GitLab Runner and executor types.
Stefan: So that means you could install a runner on its own bare-metal server, or you could have a little VM with some space left on a host. If you had a very large need, you could have a giant standalone cluster just chock full of runners. Conversely, it could be very, very small: you could have a runner on a laptop or even a little Raspberry Pi. Runners are largely OS agnostic; they can run on Linux or Windows or OS X. Really, anything capable of running Docker could run a runner. There's only one small warning I would give.
That is, for performance reasons, runners should not reside on the same server as the instance, the actual GitLab instance. Having your runners utilize the same hardware or cloud image as the GitLab install means that, short term, you're looking at performance issues and resource contention, but long term it's going to make it much more difficult to scale and grow.
It's obviously going to be a lot more difficult to separate those items later. So, runners can be utilized in a ton of ways. They're flexible and multi-platform, and this allows a lot of tailoring to fit your requirements. This flexibility can be leveraged in the interest of minimizing inefficiencies, both in cycle times and in cost; think about autoscaling.
Runners are capable of autoscaling, so you could provision an autoscaling runner with Docker Machine or Kubernetes to spin instances up, so you always have the capacity to process jobs quickly, but also spin them down, so you're not worried about resources being wasted or needless costs for unused VMs. You can run them in parallel, so you could have multiple runners spanning several projects. In that way, you can keep resource contention very low and keep your performance high, because you're able to utilize multiple runners.
At the same time, you can also ensure that runners are properly sized for the jobs they're individually expected to execute; we can talk about that a little bit more right now. So let's talk about runner types a little bit. There are three fundamental types of runners: specific, group, and shared. Specific runners are most useful for jobs that have very special requirements or projects with a specific demand. So let's say you're running a pipeline to verify
a large Apache Maven application: you can provision a very robust runner with the necessary Java dependencies to run those jobs within your project. In that way, specific runners can be right-sized for their workloads. Let's say, in conjunction with your very complicated Maven application, you have an admin team that maintains a small project full of Bash and networking scripts they use to troubleshoot your company's infrastructure. Those scripts could be their own project, verified via a very small, inexpensive runner, maybe on a free cloud instance or a small bare-metal VM.
Group runners are similar to specific runners, but, surprise, just like the name implies, they're tied at the group level, so they can be used across multiple projects. When you think about both specific and group runners, it's important to know that they use a very basic FIFO mentality: a first-in, first-out queue methodology.
Last up we have shared runners, and, spoiler, they're just what they sound like: shared runners are shared across multiple projects and groups. This is really useful for jobs that have similar requirements across teams or platforms. Rather than having very specifically chosen runners, you could have many identical ones to handle the same thing over and over again, and because they can be identical, it makes it really easy to maintain and update them when you need to.
Shared runners do use a slightly different process that we call fair usage. Fair usage just means that jobs will be assigned to a shared runner from the project that has the lowest number of queued jobs, if that makes sense. And lastly there are tagged runners. Tagged runners are not specifically a type of runner; tags can be applied to any runner type. So you could have a tag for Ruby or Python or Groovy and assign runners to those jobs specifically, and that's a lot of granularity, as in the sketch below.
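As a small illustration, a job requests a tagged runner with the tags keyword; the job name and script here are hypothetical:

```yaml
ruby-tests:
  tags:
    - ruby                # only runners registered with the "ruby" tag pick this job up
  script:
    - bundle exec rake test
```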
Now we'll talk a little bit about the executors (or execu-tors; tomato, tomahto). The executors are sort of the "how" of the process: how the runner is going to execute your script. You choose your executor during the registration of your runner install, and there are seven total, including the custom executor. I'll talk briefly about the top three, which are shell, Docker, and Kubernetes. Shell is exactly what it sounds like: it runs commands just like you or I would, sitting down and typing them into the terminal.
There's support for several shells, so Bash and sh and PowerShell, and I think the shell executor is really the easiest to install and get running. So if you're new to this sort of CI/CD process and want to get started, it's a really nice option with a lot of great documentation. The shell executor inherits permissions from the specified gitlab-runner user, so as a caveat I would just say that, because the shell inherits those permissions, it's best to only use it on machines
you trust. I don't know if you have untrustworthy machines in your office, but if you do, this is not the time to use them. Also, shell is not the same as SSH; I even put it in little parentheses. With SSH, you're potentially logging in as root to a remote server, so security can be a lot more tedious, as well as less secure. In addition, SSH only supports Bash, and there's no caching. So if nothing else, walk away knowing that shell does not equal SSH.
The Docker executor is the most common; it runs your jobs inside Docker containers. It can run both Linux and Windows containers, which is pretty neat, and it provides a clean image for each job. It can even be used as Docker-in-Docker, and it also includes Docker Machine, which provides the autoscaling we're currently utilizing in our GitLab.com instance to meet the demand for our SaaS. Lastly, there is Kubernetes. Good old Kubernetes runs jobs as pods inside a cluster: the executor connects to the Kubernetes API and creates a pod. At a minimum,
that pod will contain three containers: the build container, a helper container, and an additional container for each service that you define in the YAML file. Kubernetes can also autoscale, of course, and there's some great documentation around the benefits and caveats of using it versus Docker Machine as an autoscaler
in our docs right now. So, on my last page, you'll see an excerpt from our docs that shows some of the less common executors, options like SSH, VirtualBox, and Parallels, as well as a high-level overview and some key facts about each one of them. Lastly, I would say that before you make any implementation decisions for runners or executors, please consult our docs page, or your friendly neighborhood TAM, for the most current information. With that, I will stop sharing and hand off.
Brett: A job sequence begins with a git action, such as a push or tag push event. Jobs themselves are defined with constraints that state the conditions under which they will be executed. For example, the rules keyword allows a list of individual rule objects to be evaluated in order until one matches, dynamically providing attributes to a job. Once defined, jobs are enqueued in a background job processor called Sidekiq. Here, the job would only run when the commit branch is master, as in the sketch below.
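A hedged reconstruction of that rule, using the predefined CI_COMMIT_BRANCH variable (the job name and script are placeholders):

```yaml
deploy-job:
  script:
    - ./deploy.sh         # placeholder
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'   # job is created only on master
```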
In the next example: first, a runner polls the Sidekiq queue to receive jobs. Runners can be filtered, as Stefan mentioned, using tags and/or protected branches. Once a job has been received, the runner downloads the specified Docker images: the image keyword is used to specify a base image, and the services keyword is used to link service images to that base image.
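A minimal sketch of those two keywords together, with example image names:

```yaml
test-job:
  image: ruby:2.7         # base image the job's script runs in
  services:
    - postgres:12         # service image linked to the base image
  script:
    - bundle exec rake test
```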
The git strategy keyword is used to specify the method for getting recent application code into the working directory: clone clones the repository from scratch for every job, fetch reuses the local working copy, and none also reuses the local working copy but skips all git operations. The default git strategy can be configured at the project level on the CI/CD settings page.
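The strategy can also be set in the YAML itself through the GIT_STRATEGY variable, globally or per job; a sketch:

```yaml
variables:
  GIT_STRATEGY: fetch     # reuse the local working copy; "clone" and "none" are the alternatives
```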
Next, I'll describe five additional actions that trigger pipelines. The first one here is in the context of a merge request. You may be familiar with the only and except keywords, which are other job constraints used to limit job creation, but they are soon to be deprecated and should not be used. Instead, we have the "if CI_MERGE_REQUEST_ID" rule, which is the recommended method for only creating jobs in the context of a merge request. Pipelines for merged results can be configured at the project level.
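A sketch of that rule, so that a hypothetical job is created only in merge request pipelines:

```yaml
mr-tests:
  script:
    - npm test            # placeholder
  rules:
    - if: '$CI_MERGE_REQUEST_ID'   # set only when running for a merge request
```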
D
Next,
we
have
the
manual
play
button,
so
the
wind'
manual
keyword
makes
it
possible
to
execute
the
job
manually.
The
action
will
expose
a
play
button
for
the
job.
The
job
will
be
triggered
only
when
the
play
button
is
manually,
click
the
play
button
is
accessible
in
pipeline
environment
and
job
use.
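For example, a manual canary deployment along the lines Martin showed earlier might be sketched like this (the script and environment name are illustrative):

```yaml
deploy-canary:
  stage: deploy
  script:
    - ./deploy-canary.sh  # placeholder
  when: manual            # waits for someone to press the play button
  environment:
    name: canary
```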
D
So
a
unique
trigger
token
can
be
configured
at
the
project
level
on
the
CI
CD
settings
page
under
pipeline
triggers
or
the
triggers
API
to
trigger
to
trigger
a
pipeline
using
a
trigger
token
simply
send
a
post
request
to
the
trigger
pipeline.
Api
I
see
here
to
trigger
a
pipeline
from
a
web
hook
of
another
project,
ad
webhook
URL
to
the
projects
integration
settings
page.
The
pipeline
will
then
be
triggered
at
the
corresponding
event.
Chester: OK, so once you have your GitLab project created and you want to create your first pipeline, there are a few things to keep in mind. Basically, there are some reserved keywords that you should take note of, which you aren't allowed to use as the names of your jobs in your CI pipeline. I have a list of those on the right side of the PowerPoint, and I'll go over them in the next slide as well, to explain them in depth.
The next point is spaces and tabs. There's a tendency for folks to alternate between spaces and tabs when writing code in their favorite IDE of choice, but when it comes to YAML, that can cause issues if you don't have the right number of separations between elements. To avoid this, it's advised to use spaces over tabs, just to avoid any syntax issues, and the GitLab Web IDE has assistance capabilities that help you avoid some of these spacing issues.
The last one is: once you have your CI YAML file created and you're ready to run your first pipeline, there's a CI Lint tool that allows you to validate the syntax of your YAML file, and every GitLab project has this capability. Once you go into your GitLab project in the UI, there's the option to select CI/CD on the left-hand side of your project, which produces a drop-down. You select Pipelines, and you'll see a three-button option at the top right-hand corner to choose Run Pipeline, clear your cache, or the CI Lint tool. You select CI Lint and it takes you to a window; you drop your YAML file in there, and it'll validate it and let you know if there are any syntax errors.
I've listed out all these parameters; my colleagues have gone over a few of them, so I just want to reiterate some of the important ones. As mentioned, the script parameter allows you to supply commands that you would like executed in your shell. The image parameter allows you to specify which Docker container you'd like to use in your pipeline. The services parameter allows you to associate an additional service that's tied to your image.
The before_script is where, for instance, if you want to install some packages from a package manager or run some brief authentication commands, you can specify that. The after_script allows you to do any cleanup work: this parameter will execute regardless of whether the job passes or fails, so you can do cleanup work such as removing any sensitive credentials that remain on the runner, to avoid any security violations.
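A small sketch of after_script used for that kind of cleanup; the script and file names are hypothetical:

```yaml
test-job:
  script:
    - ./run-tests.sh      # placeholder
  after_script:
    - rm -f .credentials  # runs whether the job passed or failed
```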
The stages parameter allows you to specify the different phases you would like executed in your pipeline. So you can specify that you want a development stage, a build stage, a test stage, and a deploy stage, and the order in which you list these stages matters; I'll show that in the next slide, but just keep in mind that the order in which you list the stages matters. And only and except, as my colleagues mentioned, are being deprecated.
These are just conditional statements where you specify, hey, I'd like this job to run only in the event of a merge to master, right. So you can specify those conditions. The rules expression allows you to do the same thing, but it gives you more flexibility in specifying those conditional statements, and I'll also show y'all a link at the end of the presentation where you can see the definitions for all these parameters.
This is an example CI YAML file. You can see stages is one of the first definitions I have here, and there's a prep stage and a build stage. The order matters: if I listed the build stage first and then prep, the build stage would get executed first, regardless of the order in which the jobs are defined below. So, for instance, if I had the build stage second in the stages list but defined the build job first, it would still execute the prep stage before the build stage.
So just keep in mind that the order matters. In the prep stage, I'm creating a folder, and in the script I'm creating a file, and I'm using the artifacts parameter to store the contents of that folder so that I can access it in the next job. If I never declared the artifacts portion, then the folder that I create would not be accessible in the next job.
So keep in mind that if you want artifacts to be propagated from one job to the next, you want to make sure to use functionality such as artifacts. In the build stage, I'm simply accessing that folder and adding text to the file that's within it, which goes to show how you can complete some functionality in one stage and access it in the next stage.
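The slide example described above might be reconstructed roughly as follows; the folder and file names are stand-ins for whatever was actually shown:

```yaml
stages:
  - prep
  - build

prep-job:
  stage: prep
  script:
    - mkdir output
    - echo "created in prep" > output/notes.txt
  artifacts:
    paths:
      - output/           # without this, the build job would not see the folder

build-job:
  stage: build
  script:
    - echo "appended in build" >> output/notes.txt
    - cat output/notes.txt
```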
On the other slide, I'm demonstrating how you can use the environments capability. This allows you to specify an environment that you would like associated with a job, and, as you can see here, I'm using predefined environment variables: CI_COMMIT_REF_NAME, which basically represents the commit ref that's tied to this particular job. This also highlights one of our capabilities with Auto DevOps that allows you to create these ephemeral environments.
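A sketch of a per-branch environment using that predefined variable; the script and URL pattern are illustrative:

```yaml
deploy-review:
  stage: deploy
  script:
    - ./deploy-review.sh  # placeholder
  environment:
    name: review/$CI_COMMIT_REF_NAME            # one environment per branch
    url: https://$CI_COMMIT_REF_NAME.example.com
```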
This is another CI YAML file, an example of a Ruby pipeline. As you can see, I'm using Ruby-specific tooling here: bundle for building, and the rake test as well. It's just an example of how you can use the .gitlab-ci.yml file for different types of tech stacks, and, as you can see in the deploy stage, it's an example of the code being deployed to Heroku.
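A Ruby pipeline of that shape might be sketched as below, using the dpl gem for the Heroku deploy; HEROKU_APP and HEROKU_API_KEY are assumed to be CI/CD variables configured in the project, not values from the talk:

```yaml
image: ruby:2.7

test-job:
  stage: test
  script:
    - bundle install
    - bundle exec rake test

deploy-job:
  stage: deploy
  script:
    - gem install dpl
    - dpl --provider=heroku --app=$HEROKU_APP --api-key=$HEROKU_API_KEY
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'   # deploy from master only
```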
I wanted to show a section in our docs that gives you a language reference for the CI YAML file, where you can access more information about other capabilities you can take advantage of. This section goes into depth about those parameters I mentioned, so you can learn more about each one, and there are many other areas of this doc that can be helpful for understanding how the different capabilities of the YAML file work. There's also a section on CI/CD examples, where you can see different use cases, like how to deploy GitLab Pages, or how to set up a game development pipeline using the .gitlab-ci.yml file. There's just a wealth of information that you can access so you don't have to start from scratch; you can use these examples to get up and running pretty quickly.
Patrick: All right, well, that concludes our webinar today. Thank you to everyone for your participation and attendance. We'll be sending a recording out to all the attendees, as well as anyone who wasn't able to attend today, so look for that later on today, Pacific time. Just a quick rundown of some of the things that will help you get started quickly.
I think Chester touched on some of these, but if you just search for "gitlab runner install", that'll take you to our documentation for choosing a runner and getting it installed and registered with your GitLab instance. It's very easy to install a runner on your laptop or anywhere else that you'd like to get started. Your GitLab admins may also have shared runners available.
For those folks that have questions and want to understand next steps, or some of the ins and outs of the topics we covered today: we don't have the ability to take questions live right now, but contact your Technical Account Manager; your GitLab administrators should know who that is and can reach out. If you're not sure, just feel free to contact me, Patrick, at GitLab, and depending on the number of questions and responses, I'll get back to you as fast as I possibly can. Thank you.