Description
Watch this webinar recording to:
1. Gain a comprehensive understanding of CI pipelines in GitLab.
2. Learn how to overcome common challenges with effective strategies.
3. Discover best practices for configuring and optimizing CI pipeline workflows.
4. Explore advanced features and integrations in GitLab.
5. Review the engaging Q&A that was hosted live with our expert panel.
A: All right, let's jump in. Thanks again for joining us; we're excited to go through the content today. Before I kick it over to my colleague Steve, I'm going to go through just a couple of housekeeping items. First off, this webinar is being recorded, so you can look for that recording, as well as the deck, to be sent through to you in the next day or so. We're looking forward to sending over that content.

If you have any questions that come up throughout the session, feel free to put those in the Q&A section of your Zoom window, and we have some folks online who'll be able to type some answers back to you. Steve will also be able to answer some questions toward the end, as time permits. And with that, I will kick it over to Steve Graham, who is a customer success engineer here at GitLab.
B: Thank you, Taylor. My name is Steve Graham; I'm a customer success engineer on GitLab's Scale team. I've been with GitLab for close to four years now, and just for awareness, GitLab CI/CD was one of my primary motivations to join GitLab — I wanted the opportunity to really dive in with all of GitLab's tooling, and this role gave me the opportunity to do that. So we're going to be covering GitLab CI concepts with a very large list of advanced topics today.

As a result, we're going to have to go fairly quickly, but to Taylor's point, you will have a chance to review the recording again if you want to, in a follow-up email, and you'll also have a link to download this deck. The deck has a lot of links in it, so if we're on a slide that is of particular interest to you, it will invariably have links on it that will help you get to more advanced content.

So just be aware of that. Our goal today is to help you understand a wide breadth of the potential variations you may want to use in pipelines, for yourself and for your team. Just to reiterate what Taylor already said: we're going to be sending out an email afterwards. It's going to have a link to the workshop for viewing, and it'll also have a link to this slide presentation so you can download it and, again, take advantage of the links that are in it.
We're also going to be inviting you to join a hands-on workshop for CI, which I would really encourage you to do if you haven't had a chance to play with GitLab CI yet. This will be hands-on: you'll be getting a group provisioned at gitlab.com that's private to you, that you can play around in, and you'll be given specific exercises to go through. It'll give you a chance to learn how to build simple GitLab pipelines, and some ways to start working up toward more advanced pipeline structures and workflows.

You could potentially even go into including security scanning, and potentially compliance enforcement, under your own private group if you want to. Just for awareness, we typically only make these groups available for a few hours after the hands-on workshops, and we're discussing right now the potential to keep them open for 24 hours, maybe 48. So please do join us.
They can also help answer any other questions that come up after our session today. I'm going to have to keep going here. This is our agenda for the day: we're going to cover a real quick overview of CI/CD basics, which is again just a quick review. We're going to go through an overall overview of a pipeline file. Then we're going to talk about pipeline structures, variables, controlling when your jobs run, holding and securing and sharing results with job artifacts, and assembling pipelines from components using extends and includes.

I'm going to blow past this slide for now, but just be aware that it's here, and if it's something that makes sense for you and your team, please take advantage of it. Taylor, do you want to go ahead and fire up the first poll? We want to take a quick minute for a poll; we're interested in understanding what may be holding you back from using CI and CD.
This is GitLab Flow. It's a picture of GitLab Flow, which is roughly analogous to Git Flow or GitHub Flow, although we have our own unique variation of it, and of where GitLab CI fits into it. As you can see, somebody creates a milestone and creates an issue; once the issue is scoped out, they create a merge request and then they start to iterate and work in that merge request. There might be a team of people working in there.

It might be a single individual. During the course of that work, every single time they push code, they're going to want to be able to see some kind of pipeline. Since they've already got a merge request up, they'll be able to see a pipeline, see the testing, and fix anything that is potentially broken or needs to be resolved. That's an iterative process that just keeps going until they're ready to merge the merge request, and with every commit they're going to have a new pipeline running. Then at some point somebody may need to release this to production.

For example, they'll have a review app that they can get into and look at to make sure everything is what they need it to be, and then at some point, if their merge is accepted, they can release it out and deploy it. This is all possible to do with pipelines; it's just in how you configure them and how you set your rules up, things like that. So GitLab CI/CD, again, starts with your code.
With your commit, you can put builds in there, you can put unit tests in there, you can put integration tests in there, and then you also have the ability to create a package if you need to. That might be a container, or it might be an actual Linux package that's held in GitLab for use downstream. Then you can review it, you can put it on staging somewhere, and you can release it to production with CD.

So if we talk about the anatomy of a GitLab CI/CD build, we've got a pipeline, which is a set of one or more jobs, optionally organized into stages. Stages can be thought of as analogous to columns in GitLab's pipeline view, and each stage is a collection of jobs that all run in parallel, subject to the availability of runners. So if you only have one runner, it can only do one job at a time.
It's going to go sequentially through the list of jobs in the stage, but if you have a couple dozen runners out there and not that many jobs, the jobs can all fire up in parallel, which is wonderful. We also have jobs: jobs are scripts that perform specific tasks. This might be npm test, mvn install, things like that — whatever you need the jobs to do. You're going to be working from essentially a shell command, but you can engage at that point: you can engage your build, you can use your programming language, whatever you need to do.

Now, I want to real quickly touch on the security scanners that are available to you and that your team should be taking advantage of right now. If you've got a Premium subscription, you've got static application security testing, you've got container scanning, you've got infrastructure-as-code scanning, which was just released in 16 by the way, and we also have secret detection, all of which are extremely valuable for teams like yours.
I would really recommend these be hooked up. If you've got Ultimate, you've got the list that we see here. I'm not going to dwell on this too much, but just realize that this is considered a full suite of security tests that you can apply to your code to make sure you're staying safe, and Ultimate, of course, comes with the security dashboard and other tools that can be useful for that too.

But let's talk real quickly about the pipeline file itself. In GitLab, its default name is .gitlab-ci.yml, and this is where everything starts. GitLab's robust support for CI/CD has been going on for a very long time: we released CI initially in November of 2011, and we've been iterating on it constantly since then. As a result, we've got a very mature pipeline capability built into GitLab. It's well equipped to handle a diverse range of scenarios.
Getting started with pipelines is actually pretty easy, and they can be effortlessly expanded and customized to meet your specific needs. That's not to say you won't occasionally run into a little snag that you need to work out, but GitLab offers the flexibility to define multiple pipelines per project. It's not an issue to do, and I frequently do a lot of it, enabling you to address any scenario that requires an independent pipeline. So let's look at it in GitLab.

The first section that you're going to run into in GitLab's pipeline files is the global section. The entire GitLab pipeline file is declarative in nature; you're not taking on a new programming language. It's like filling out an INI file, if you will, but it's in YAML. It uses global keywords, and anything that's listed as a keyword GitLab will treat as that particular type of declaration.
Anything that's not a specific keyword is considered a job, so just be aware of that. image, in this case at the very top, is defining the default image for anything that uses containers downstream. You can define defaults for jobs; you can see I've got a default tag of shell. This is a project of mine that I use for testing out workflows and building multiple independent workflows, and it's just a real simple type of scenario.

We can declare the stages that we want to be able to use. We can also define workflow rules, so you can make decisions about whether to run a GitLab pipeline at all or shut pipelines off altogether, which is what you can do with workflow rules, and we'll talk more about that later.
You also have jobs. Jobs can be hidden, which is to say they start with a period, and hidden jobs don't do anything until they're extended by a downstream job that declares them. You can see in this test-variables job I'm extending the job base. The script section is required; you have to have the script section or it'll just fail to do anything for you — in fact, you'll get a YAML failure when you commit it. A job can use rules if you want to do that.

In this case we've got a real simple rule, and there are many more options we're going to be covering in detail today. So let's keep going. I'm just going to skip over this slide for the moment, but just be aware that GitLab has the ability to include lots of independent files. If you need to spread your pipeline configuration out over multiple files, you can do it this way. A lot of use cases for this are independent pipelines defined in their own files.
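As a rough sketch of the kind of file being described here — the image, tag, and job names are illustrative, not the slide's exact contents:

```yaml
# .gitlab-ci.yml — illustrative sketch of the global section plus one job
image: alpine:latest            # default container image for jobs

default:
  tags: [shell]                 # default runner tag applied to every job

stages: [build, test, deploy]

workflow:
  rules:
    - if: $CI_COMMIT_TAG        # example policy: no pipeline for tag pushes
      when: never
    - when: always

.job-base:                      # hidden job: does nothing until extended
  rules:
    - if: $CI_PIPELINE_SOURCE == "web"

test-variables:
  stage: test
  extends: .job-base
  script:                       # script is required for a job to be valid
    - echo "running on $CI_COMMIT_REF_NAME"
```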
We appreciate you taking the time, folks. It helps illuminate things for us and helps us understand what else is out there.

Now let's talk about pipeline architectures. These are the variants on how you can assemble a pipeline and what you can do with it. The building blocks of your pipeline start with just the basic pipeline.
We have directed acyclic graphs, which allow you to create relationships between jobs that subvert stage order. We have parent-child pipelines, which are a way to launch an independent pipeline in the same project, and we have dynamic child pipelines, where you can create dynamic YAML if you want to. We also have multi-project pipelines, which are a good use case for mono repos that have been broken out into separate components.

Let me go back to that real quickly. The basic pipeline is the simplest pipeline in GitLab: jobs run independently, sometimes on different runners. All jobs in a stage have to complete successfully before proceeding to the next stage, and you can control pipelines with pipeline options. Here's an example of a real simple one, and the columns that you see are the stages that have been defined in the pipeline file. You can see we've got a build, a test, and a deploy stage, and also notice that this job has allow_failure set to true.
Without that, nothing would run after that particular test had failed; or you can allow it to fail in cases where you just want to be able to review it, come back, and iterate on it again. We've also got the ability to set jobs to use a when clause of manual, and that means somebody's got to come in here and manually start that particular job. You can also restrict who can do that manually, so just be aware that those options are available to you. Another option that's available is when delayed.

You can actually start a job two or three hours after it becomes eligible to run, if you want to do that, or 15-20 minutes later — whatever makes sense for you. You also have the option of when on failure, which means the job runs if some previous job failed and stopped the pipeline, or even if a previous job that's allowed to fail has failed.
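A minimal sketch of these job-control options, with illustrative job names and timings:

```yaml
stages: [build, test, deploy]

unit-tests:
  stage: test
  script:
    - ./run-tests.sh
  allow_failure: true          # a failure shows a warning but doesn't stop the pipeline

deploy-prod:
  stage: deploy
  script:
    - ./deploy.sh production
  when: manual                 # somebody must start this job by hand

cleanup-later:
  stage: deploy
  script:
    - ./cleanup.sh
  when: delayed                # becomes eligible, then waits...
  start_in: 30 minutes         # ...before actually starting

notify-on-failure:
  stage: deploy
  script:
    - ./notify.sh
  when: on_failure             # runs only if an earlier job failed
```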
Now let's talk a little bit about directed acyclic graphs. This is a method of creating job dependencies that allows you to subvert the normal stage order and run jobs when they're ready to go, so dependent jobs can proceed to the next stage without waiting for other jobs in the stage to finish. Let's look at the scenario we're seeing here.

We've got a build for Android and we've got a build for iOS, but the test for Android doesn't have to wait for the build for iOS to complete — in normal stage order, everything in a stage has to complete before the next stage and its jobs will start to run. What you can see here is that we've declared needs: the Android test has needs set to the build-Android job, which is the name of the job, so it can run as soon as that job has completed.
This gives you a way around that. I've seen projects that had Windows, Linux, macOS, iOS, and Android components all assembled in the same project, and they would be testing all of them. This gives you a way to subvert the need to wait for everything else to build and be able to move immediately to the testing you need to do. When you use that needs keyword, you're building this acyclic graph, which GitLab can actually display for you under the pipeline's needs view.
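A sketch of the needs relationship being described, with illustrative job names and commands:

```yaml
stages: [build, test]

build-android:
  stage: build
  script:
    - ./gradlew assemble

build-ios:
  stage: build
  script:
    - xcodebuild build

test-android:
  stage: test
  needs: [build-android]       # starts as soon as build-android finishes,
  script:                      # without waiting for build-ios
    - ./gradlew test

test-ios:
  stage: test
  needs: [build-ios]
  script:
    - xcodebuild test
```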
Now, the next type of pipeline — and again, folks, I apologize that we're having to go through this so quickly, but we want to make sure you get a chance to understand the broad range of topics that pipelines can cover for you. Parent-child pipelines give you the ability to launch an independent pipeline in the same project, so these are entirely separate pipeline configurations. You can run them from their own independent pipeline files if you want to; they don't have to be contained in .gitlab-ci.yml, and in fact they're not, by intention.

Parent-child pipelines use trigger jobs. In this particular case, we see this build job, which is a trigger job; we would typically call this a bridge job. This is part of a rules declaration, and it's only going to run for changes under this specific directory — that's a capability you've got built in, in this case.
It's actually not using rules, it's using our older only construct, but this is something built into both only and rules that you can take advantage of, so that this particular job only runs if there are changes in that particular directory or set of files. Then you can include a file from the same project; it's got to be a valid YAML file.

So just be aware of that. And then strategy depend means to hold this pipeline until that downstream pipeline finishes, so you're going to execute that full downstream pipeline; this pipeline won't reach the point of having passed or failed until that pipeline reports back in. This is also the keyword that exposes the downstream jobs in the upstream pipeline: when you click on the trigger job in the upstream pipeline, you'll be able to see the jobs in the downstream pipeline, so just be aware of that as well.
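A sketch of a parent-child trigger (bridge) job along those lines; the directory and file names are illustrative, not the slide's:

```yaml
trigger-component-a:
  rules:
    - changes:
        - component-a/**/*               # only run when this directory changes
  trigger:
    include: component-a/pipeline.yml    # child configuration in the same project
    strategy: depend                     # parent waits for the child's result
```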
So let's talk about dynamic pipelines. This is the same concept again — a child pipeline that runs independently — but dynamic. The idea with dynamic is that the child configuration is not declared in a static pipeline file; it's actually created as an artifact by a job running in your pipeline and then used by a downstream trigger job.

You can generate the pipeline configuration at build time and use the generated configuration at a later stage to run a child pipeline. This is useful for using a single pipeline configuration with different settings to support a matrix of different targets and architectures. So if you've got this kind of need, know that this capability is built in and you do have the ability to take advantage of it. This is a job that sets up a dynamic pipeline, and what you can see here is that we're generating an artifact — it generates a YAML file into the job artifact store.
This is just a way to get target-specific configuration generated on the fly, so we can go through that quickly.
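A sketch of the dynamic variant: one job writes a YAML file as an artifact, and a trigger job runs it as a child pipeline. The script and file names are illustrative:

```yaml
generate-config:
  stage: build
  script:
    - ./generate-pipeline.sh > generated-pipeline.yml   # build the child YAML at runtime
  artifacts:
    paths: [generated-pipeline.yml]

run-generated:
  stage: test
  trigger:
    include:
      - artifact: generated-pipeline.yml   # use the artifact as the child configuration
        job: generate-config               # the job that produced it
    strategy: depend
```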
Multi-project pipelines are another variant of having downstream pipelines, but this is the ability to launch a pipeline in a different project, so a pipeline in one project can trigger a pipeline in another project. An important thing to note about this is that any user who can trigger the pipeline will need to have the right role to trigger the pipeline in the downstream project.

That's the Developer role: any user who can trigger the pipeline in the upstream project needs to be at least a Developer in the downstream project. You can also pass variables to the downstream pipeline if you need to do that. If the downstream pipeline fails, it doesn't necessarily fail the upstream pipeline, although that's up to you and your strategy in building it. This is useful when building and deploying large applications that are made up of different components that each have their own project and build pipeline.
Now, where I see this a lot, or at least I have in the four years that I've been at GitLab, is people who break up mono repos and start to componentize them into separate projects. Each one of those separate projects can launch a pipeline in the large aggregation project that pulls everything together for the build, and that's what a multi-project pipeline can be constructed for. We can pass variables downstream by just declaring them.

We can specify the project and branch for the downstream project, so we can actually tell it to run on master, or to run on development if appropriate. We can declare the strategy again, and we can define which variables to forward if we want to take that route too, although I kind of prefer declaring them directly, as long as you don't have too many variables to declare. The job with the trigger is often referred to as a bridge job, but it doesn't have to have that name; it can have any name you want it to have.
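A sketch of a multi-project bridge job along these lines; the project path, branch, and variable are illustrative:

```yaml
deploy-aggregate:
  stage: deploy
  variables:
    COMPONENT_VERSION: "1.2.3"             # passed along to the downstream pipeline
  trigger:
    project: my-group/aggregation-project  # downstream project to run in
    branch: main                           # branch of that project
    strategy: depend                       # hold this pipeline until the downstream result
```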
GitLab has a huge list of predefined variables that you can use in various ways in jobs and configuration files, things along those lines, and all of these variables are available to your jobs when they're running. So when they're in their script section, all of these variables are there and available to you. You can set variables at the project level, you can also set them at the group level, and you can set them at the instance level.

I don't see that done a lot at the instance level, but it is something that you can do. In a project you can go to Settings, then CI/CD, and then expand the Variables section, and you'll be able to set variables there. You can set the default state for them, which means they could still be overridden downstream, but this is a good way to just set them up.
Be aware that you can also hide them, and you can protect them. If you choose to protect a variable, what you're saying is: don't expose this variable to anything that's not on a protected branch. This keeps that variable from being exposed in feature branches that people are working on; it's just not available unless the pipeline is actually running on a protected branch. You can also mask it, so that if somebody tries to echo it, the value stays hidden.

Next, variables entered for a pipeline run: be aware that if you go to GitLab's default CI/CD page, which is Pipelines, there's a Run pipeline button at the very top, and that gives you the ability to create any variable you want. You can create a variable and define its name — I'm sorry, its key — and its value, and be aware that you can also do the same thing using pre-constructed URLs.
So if you've got a place where you can keep specific URLs that correlate to manual pipeline runs and the variables that need to be set, you can actually set those values in the URL that's being passed in — the premise being, of course, that those users have to be logged in, be members of that project, and be able to run a pipeline. But it is possible to do this. Variables that are defined in the CI configuration look like this.

This is a global setting: a variables block with a global variable set to a test value. And then the build job, as you can see, sets its own variable in its own variables section, and then it just echoes those out. You can set them both in the job and as a global for the entire pipeline. If it's a global for the entire pipeline, all the jobs are going to inherit it, just for awareness.
There are also inherited variables. You can see that in this particular case, in this deploy job, we're declaring the same build variable that's also being declared in the dotenv file, but if we echo it out, it's going to have the value from the build job, because that's what's being declared up there in the dotenv, and the dotenv is the last input on variables for a job. It comes after any variable that has been declared globally.

It's after any variable that has been declared in the job, and at that point it's overriding everything that came before it, so just be aware of that. Now, the important thing to realize about these dotenv files is that they can't be used in rules. They can't be used in rules anywhere in GitLab, because they don't exist until the script section actually starts firing up; at that point the dotenv values are available.
And this is how variables are processed — this is the order of precedence, going from bottom to top, with the top having the highest precedence. At the top are values for this specific run, which you can set with triggers: you can run GitLab pipelines from the API, you can run them from the web on the Run pipeline page, and you can make scheduled runs if you want to take that route, and each of those has the ability to set values for variables.

Then there are values configured in the project, group, or instance — a value that you set in the project, or in a group above it, or in the instance — and then there are the values inherited from previous jobs and stages. The values configured in the project, group, or instance are going to have precedence over the values that are set in the YAML, which is typically where you would set the default state for these, and then values from GitLab itself are at the very bottom.
So when are pipelines run? Pipelines are run for new commits, when you create a new branch, when you create a new tag, and for a new merge request. By the way, when you create a merge request, it's always based on a branch, so every single time you commit again to that branch in the context of a merge request, you're going to get a new pipeline. So just be aware of that.

I can't remember the name of that one — forgive me. The dash is the start of an array entry in YAML, so we can have multiple rules for any individual job if we want to. In this case we're just testing one thing: whether CI_PIPELINE_SOURCE, which is a predefined variable, equals web — which is, by the way, the Run pipeline capability from GitLab's default CI/CD page.
If that matches, then this job is going to run; it doesn't have to make any further declarations beyond that at all. It's going to default to what's called when on success, and that means as long as everything before it passes — any job that precedes it passes and doesn't stop the pipeline — it's going to go ahead and proceed. So this job will only run when the pipeline is triggered from the web. If you just push a commit, CI_PIPELINE_SOURCE won't be populated with web, and the job won't run for that scenario.
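The rule just described looks roughly like this (the job name is illustrative):

```yaml
web-only-job:
  script:
    - echo "started from the Run pipeline page"
  rules:
    - if: $CI_PIPELINE_SOURCE == "web"   # no when given, so it defaults to on_success
```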
This is a quick reference of the various operators, the clauses, the results you can try to capture, and the when options you can use. We can use if, and we can use changes, which again checks whether something in the commit has been changed. That can be a directory, it can be individual files, and there are a few different ways to construct it. There's also exists, which just checks for the presence of something in the repository — if it exists, the job runs.

The operators that you see listed here are pretty much what you're used to seeing: double-equals just asks whether it equals this verbatim, and not-equals is a negative match. What you see right below that, with the tildes, are regular expression matches, so you can use regular expressions to test against variables and match based on how you construct them. And then the AND and OR operators are, of course, what you expect them to be.
For the results, we can use when to encapsulate what we want to do next and under what conditions, and we can use allow_failure, which can be true or false — it's up to you. If allow_failure is false, a failure in that job is going to stop the pipeline; if allow_failure is true, it's not going to stop the pipeline, but it's going to show an exclamation point in the pipeline if the job fails. We also have start_in — again, this is start in 15 minutes, start in three hours, whatever you want it to be.

The when options can be always, which means run unconditionally: if you put it below a rule, that means as long as that rule matches, always run no matter what. We have the ability to do negative matching on rules with when never, so you can create a rule that matches something you don't want that job to run for, say when never, and that job is completely excluded by that particular rule.
We do have on success and on failure. On success just means that as long as everything preceding this job has passed, or is allowed to fail, you proceed and that job will run. On failure is the reverse case: something has failed, and we've got to go back and see if we can do some cleanup or something along those lines. We also have manual — somebody's got to manually trigger that job — and again, you do have ways to set who's allowed to trigger it.

If you're using protected branches and you've got your environments set up, you have the ability to decide who is allowed to run a manual job on a protected branch. You can set that particular group of individuals, and you can also set it for environments too.
So here's another example: checking whether CI_PIPELINE_SOURCE equals merge_request_event. This is an example of a negative match — a case where we don't want this particular job to run, ever, so we put a when never there, and that becomes a negative rule that excludes this job from running. The same thing is true with CI_PIPELINE_SOURCE equaling schedule: if somebody has scheduled a pipeline run, we don't want this job to run. But in all other circumstances it's going to run, as long as nothing stops the pipeline.
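A sketch of that negative-match pattern (the job itself is illustrative):

```yaml
not-for-mrs-or-schedules:
  script:
    - ./build.sh
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: never                      # exclude merge request pipelines
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: never                      # exclude scheduled pipelines
    - when: on_success                 # run in every other case
```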
All right, in this particular job it's a manual job, as you can see — it's got when manual down here — but it's only going to run if a variable equals some string value. You're testing some predefined variable; it's got to have this exact string value, and then, again, only if there are changes to the Dockerfile itself, or to the Docker scripts directory, or to files contained in that directory.
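A sketch along those lines — a manual job gated on a variable value and on Docker-related changes. The variable name, value, and paths are illustrative, not the slide's exact contents:

```yaml
rebuild-docker-image:
  script:
    - docker build -t my-image .
  rules:
    - if: $BUILD_TARGET == "docker"    # must match this exact string value
      changes:
        - Dockerfile                   # ...and only when these files changed
        - docker-scripts/**/*
      when: manual
```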
And a similar scenario here: in this particular case they're checking to see whether the commit branch is main. You also have a way to check whether it's a protected branch — or, I'm sorry, the default branch; you have a way to check whether it's the default branch, which I'm assuming main is in this case — so just know that that's another variation on this. And in this particular case, somebody has set this job to be delayed, and it's going to start in three hours.
Now, this is a special use case, but let me talk about it real quickly. Remember we were talking about globals, which are separate from jobs, so globals apply to everything. We do have this special construct called workflow; that's a keyword that can have rules in it, and it controls whether pipelines run at all. It's a boolean decision at that point: true or false.

If you decide not to run a pipeline by putting a when never — that negative rule — at the very top there, no pipeline jobs are going to run, even if their own rules would allow them to. And I want you to notice that this particular one uses regular expressions against the commit message. It's looking to find -WIP somewhere in the commit message — actually, if you look at it, it's anchored right at the very end.
So this matches right at the very end: somebody ends their commit message with -WIP, and if that comes in and it's in the commit message, this pipeline is never going to run. It's also not going to run if CI_COMMIT_TAG is set, which just means somebody created a tag — in that particular case, we're not going to run a pipeline.

And then notice it says when always at the bottom. It's not when on success, it's just unconditionally always: in all the other circumstances except for those two, we're always going to attempt to run a pipeline, subject to the rules in the jobs. All right.
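A sketch of the workflow block being described, with the -WIP suffix and tag checks (the exact regex is illustrative):

```yaml
workflow:
  rules:
    - if: $CI_COMMIT_MESSAGE =~ /-WIP$/   # commit message ends in -WIP: no pipeline
      when: never
    - if: $CI_COMMIT_TAG                  # tag push: no pipeline
      when: never
    - when: always                        # everything else: attempt a pipeline
```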
Next, variables inside rules: depending on these sets of rules, the Dockerfile variable will be set differently. If CI_COMMIT_REF_NAME has -WIP at the very end, it's going to use a Dockerfile called test.dockerfile instead of the main Dockerfile. But if it hasn't got that and it's coming in on the default branch — presumably main or master — its Dockerfile is going to be main.dockerfile. And for all other circumstances, there's a standalone entry.
I want you to notice that the dash here delineates a new array entry, and each one is meant to be a complete, self-contained rule all by itself. So even though this last one is a standalone when, if neither of the other two matches, the standalone when is going to win. There are implied whens for both of the others — that, again, is when on success — but if neither one of them matches, then it's just always going to create a pipeline.
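A sketch of that variable-per-rule pattern; the variable name and file names are illustrative, and it is shown here under workflow, which is one place this pattern can live:

```yaml
workflow:
  rules:
    - if: $CI_COMMIT_REF_NAME =~ /-WIP$/
      variables:
        DOCKER_FILE: test.dockerfile     # WIP branches use the test Dockerfile
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
      variables:
        DOCKER_FILE: main.dockerfile     # default branch uses the main Dockerfile
    - when: always                       # fall-through: pipeline still runs
      variables:
        DOCKER_FILE: Dockerfile
```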
Now let's cover artifacts; we're going to have to do this fairly quickly. GitLab has the ability to generate artifacts that are then uploaded to GitLab for downloading in the UI or for use in subsequent jobs in the same pipeline. The CI configuration for these is moderately simple. It allows for saving build artifacts from the output of any job, and I'm really going to encourage you to pay attention to this expire_in at the bottom, because these artifacts build up over time.

I've seen them cause immense challenges for teams that need to manage their storage a little more closely. So if you're creating large build artifacts, definitely think about that expire_in — in fact, I would just set it unconditionally. Artifacts are available to all subsequent jobs: they will automatically flow downstream and be downloaded by all subsequent jobs before your script section runs. They can pull in any combination of paths or files.
You can use exclude to limit what is added, and then in downstream jobs you can use needs with artifacts, or the dependencies keyword, to limit what gets downloaded. In the case of needs, you're only going to download from upstream jobs that have been declared as needs of the job that's running, but you can also use the artifacts flag there to delineate what you want to download in the way of artifacts, including nothing at all. The same thing is true for dependencies: an empty array for either of those means you don't want anything coming downstream to that job.
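A sketch of the artifacts configuration being described; the paths and timings are illustrative:

```yaml
build:
  stage: build
  script:
    - make build
  artifacts:
    paths:
      - dist/                  # upload everything under dist/
    exclude:
      - dist/**/*.o            # ...except intermediate objects
    when: on_success           # only keep artifacts from passing jobs
    expire_in: 2 weeks         # let GitLab clean them up automatically
```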
You can use when to determine whether artifacts will be stored or not — again, on success by default — and then you can use expire_in to determine when they're going to be destroyed. If you set it to expire in three months, then three months after that they're going to get deleted by GitLab automatically. If you want to download an artifact in GitLab, on the pipeline page you can download all of the artifacts for a pipeline using the little download icon over on the upper right.

On the jobs page, you can download them for individual jobs, and then on a specific job you can do the same thing: you can download them, and you can go into the artifact browser for a job if you want to do that. GitLab by default is going to archive all the artifacts created by a single job and upload them back to GitLab, and at that point, if you don't use the artifact browser, the only thing you can do is download that entire archive.
Now, artifact administration: on a self-managed GitLab instance, these artifacts can be stored on local stores — block-mounted storage or NFS, whatever you've got there — and they can also be in object storage; you can configure them to use something like S3 if you want to do that. Artifact expiration times can be configured at the instance level if you are self-managed, so you can just set a default for that there. And artifact downloads follow GitLab's access control.

We also have a lot of container and language-specific registries that you can use for artifact storage if you want to take that route, and I would really encourage you to pursue these. We do have a Docker registry; if you want to use that, you can store your Docker images right there in GitLab, presumably close to your runners, so that you're not having to lean on an outside registry or something like that to store your Docker images — they're in GitLab itself.
For containers coming from external sources, GitLab makes a call out to the upstream registry and caches the version of the container that you're asking for, so that subsequent calls don't have to go back out again. That also speeds those jobs up, because you're not having to traverse the internet. And we've got a long list of different types of package registries.

Now, let's talk real quickly about dependencies. This is another one of those keywords you can use to reduce the number of artifacts being passed into a job from a previous job.
In this example, the job is using the build osx job's artifacts, so it declares that and then it's only going to get those artifacts; the same thing is true here. But remember that if you've got a job that doesn't depend on any of those artifacts at all, it can also be an empty array, which is just nothing coming after the keyword. If you've got an empty array, no artifacts are going to be passed to that job at all.
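A sketch of limiting artifact downloads with dependencies, and of turning them off entirely; the job names are illustrative and assume a build-osx job exists earlier in the pipeline:

```yaml
test-osx:
  stage: test
  dependencies: [build-osx]    # only download build-osx's artifacts
  script:
    - ./run-tests.sh

lint:
  stage: test
  dependencies: []             # empty array: download no artifacts at all
  script:
    - ./lint.sh
```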
Now, this very last section — I'm going to have to move through it fairly quickly — we're going to talk real quickly about includes and extends. These are two ways of reusing code over and over again in pipelines and creating higher maintainability in the way it's all constructed. include is the first thing we're going to talk about. This is a way to include pipeline files, potentially from other projects if you want to do that, which is something we actually encourage you to do.

If you've got a pipelines team that specializes in this, they might want to set up a project that just has their templated pipelines that other pipelines can use, and you can include them from .gitlab-ci.yml. There's another way to do it too, but this is the most common way. It allows the inclusion of external YAML files; you might want to break your pipelines down into specific workflows.
Or into job definitions and rule definitions — I've seen lots of different methodologies here. This one is an unconditional include, and it's just going to look for the named file in the repository that you're currently in; the local form does exactly the same thing. Notice that we also have the ability to include a file from a different repository.

We declare the project and then we declare the file name, and GitLab will pull it in. In this particular case I don't have to be a developer in that remote project; all I have to do is have read access to it. If you're self-hosted and you have the ability for projects to be internal-only, that's effectively public for anybody who's logged in, so just be aware of that. It works very well; people who are running pipelines just have to have read access to that project.
They'll then be able to pull those files in. Remote is the ability to include a file from a remote URL. Maybe you want to include a file that's on gitlab.com — a specific file that you want to be able to get from there. That has some challenges: it has to be publicly accessible, and you can't be using anything that requires authentication. And then we also ship GitLab with a whole variety of pipeline files you can use to do security scanning.

We also have an Auto DevOps pipeline file if you want it; it's kind of a one-size-fits-all. These are what are called template files — notice that these are all links, so you can go learn how to use each one of these — and you pull them in as a template file.
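A sketch of the include variants just covered; the project path, file names, and URL are illustrative, while the SAST template is one of the templates GitLab ships:

```yaml
include:
  - local: ci/build-jobs.yml                          # file in this repository
  - project: platform/pipeline-templates              # file in another project
    ref: main
    file: templates/deploy.yml
  - remote: https://example.com/public-pipeline.yml   # publicly reachable URL, no auth
  - template: Security/SAST.gitlab-ci.yml             # template shipped with GitLab
```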
Now, extends is the next thing I want to talk about. Remember when we were talking about the pipeline files, I was talking about hidden jobs — that's exactly what this test job is up here at the very top. It's a hidden job by virtue of the fact that it's prefixed with a period. With extends, we can name the hidden or regular jobs that a job inherits from. And realize we're not limited to extending hidden jobs.

We can actually extend a job that's already been declared somewhere and overwrite the aspects of it that we need to, if we want to take that route. So we don't have to extend just hidden jobs — oh, you know what, this slide isn't showing us that, but we can actually extend regular jobs too, not just hidden ones.
extends is an alternative to YAML anchors. If you've ever used YAML anchors, they're a little bit ugly — they work well, but they're a little bit ugly. extends is a little more flexible and readable, and it performs a merge each time. By the way, we don't have a declared limit on the number of extends.

We do have a limit on multi-level nesting — this job extends that job, which extends that job, which extends that job — but in a single job we can have three or four entries on our extends line right here. One of the ways I like to do this is to define rules in one hidden job and define job script content in another hidden job, and you might find another application for something else too.
So just be aware you can do multiple extends, but they're going to be applied in the order they're included, and each one is successively going to override its predecessor. So it's good to make sure they're not going to conflict with each other, and again, it can merge from hidden jobs or from regular jobs.
jobs.
B
We
can
use,
includes
and
expands
together,
so
be
aware
of
this.
We
can
include
a
file
pull
it
in
and
it's
got
a
hidden
job
in
it
or
it's
got
regular
jobs
to
clarinated,
and
we
can
go
ahead
and
use
that
to
extend
a
downstream
job.
As
you
can
see,
that's
happening
here
to
this
used.
Templatex
extends
this
template.
That's
a
declared
hidden
job
here,
so
in
the
interest
of
trying
to
get
to
some
time
for
questions,
I'm
going
to
go
ahead
and
end
it
here,
but
I
want
you
all
to
be
aware.
Remember that you're going to have access to this entire presentation in a follow-up email, and you're also going to have access to the recording. There are five or six appendixes that I put at the very end — these are slides that ended up making the deck too busy and run too long, so I pulled them out and put them back there — and then there are also some documentation slides, just suggestions for documentation that can be useful for you. So just be aware that those are there.
A: I just launched a quick feedback poll; we'd love to get a little bit of feedback from you all — just a couple of quick questions there. And I think we may have time for maybe one question for you, Steve, before we wrap up.
B: So in this particular case we're using regular expressions, and the tag has to be of this form: release, a dash, then a number that can be more than one digit, followed by a period and another number that can be more than one digit, followed by a period and another number, and then that's the end of the tag name. So yes, you can absolutely do this. Regular expressions are incredibly powerful, and you're not inhibited in what you can target with them.
A: Awesome, thanks, Steve. I think that puts us right at time. Thank you, everyone, for joining us today. As we mentioned, we'll be sending out the deck and the recording, so you can look for that.

As a reminder, within that email we're sending you there are two opportunities to sign up for a hands-on workshop next week, which is actually led by Steve, so we're looking forward to that. There's also an opportunity to sign up for a one-on-one session with Steve or another customer success engineer to help you with anything we went through today. So take advantage of those offers; we're looking forward to engaging with you, and we'll catch you on the next one. Thanks, everybody.