From YouTube: Advanced CI/CD Webinar
Description
Expand your CI/CD knowledge while we cover advanced topics that will accelerate your efficiency using GitLab, such as pipelines, variables, rules, artifacts, and more. This session is intended for those who have used CI/CD in the past.
A
All right, let's go ahead and get started. I wanted to welcome everyone to our webinar session today on advanced CI/CD; we're glad you could all join us. I'm joined by my colleague Conley Rogers, who's a technical account manager here at GitLab; we're happy to have him joining us today. Before I kick it over to him, just a couple of housekeeping items. First off, this webinar will be recorded, so you can look for that recording to come into your inboxes in the next couple of days. For any questions that come up throughout the session, please go ahead and throw those in the Q&A. We have a couple of other folks from our side who are here and ready to answer those questions, and Conley will be able to answer some of them towards the end as well. So, without further ado, I will kick it over to Conley.
B
All right, perfect, we can get going. Can you say whether you can see my screen? Okay, we got you, awesome. Well, first of all, I wanted to start today with the most important thing in mind, which is you. This presentation, and the work that Taylor's team does, is all in hopes of providing you more value for your GitLab subscription.
B
So that means, if you have a question, don't hesitate to put it in the chat. I am going to do my best to save around 10 minutes or so at the end for Q&A, for anyone who would like to come off of mute. We can promote you to a panelist, or we can just verbalize your questions. My name is Conley Rogers; I'm a technical account manager for our strategic enterprise accounts, and I'm coming to you from Atlanta, Georgia. Before joining GitLab, I was an engineering manager at Verizon, where I led a team in charge of SDLC modernization.
B
So it's my privilege to talk to you today about all of that and some more. This webinar does assume that you know a little bit about GitLab CI and have probably dabbled with it in your projects or free time, but I will recap some of the fundamentals. On the flip side, if this doesn't go deep enough, or you really wanted a personalized workshop with hands-on demos, we have a paid training that you and your team can take advantage of.
B
I know this is an advanced CI/CD webinar, but I did want to start with the bigger picture in mind here, because beyond the technology is a process that will ensure you get the most from your investment in GitLab. This is what we call the GitLab flow. It's not a prescribed branching strategy; you can use feature branching or trunk-based development with a similar flow. But this one is important because it really speaks to the fullness of the product, because you start with an issue: some idea that you've got an improvement to be made. You pretty much immediately are prompted to create a merge request, even before you commit code, and the reason is you can start to get collaboration going as soon as you make that first commit. Also, if you create it in such a way, we'll start to run CI pipelines when you create those commits, which is great because you start to get that feedback loop going as your commits land, and you can do unit testing.
B
You can start to do a lot of static analysis and security scans and get that fast feedback. Once you get past the CI, you can create a review app, which is a GitLab feature that spins up a test environment showing your app running live, where we can then run dynamic tests like DAST, dynamic application security testing. Then you can start to hone in on feedback, a feedback loop from other people outside of yourself.
B
Now, let's get to the anatomy of a GitLab CI/CD pipeline. The pipeline is the largest construct: it's the set of one or more jobs that are organized into stages. Stages are the logical grouping of your jobs that pertain to a phase of the CI pipeline, and the jobs within a stage can run in parallel, which improves performance.
B
Examples would be your build, test, and deploy stages. Within those stages you have jobs: the actual tasks that need to be performed. Examples would be scripts like an npm test, a Maven install, bash scripts, shell scripts. Ultimately you want to deploy that packaged code into an environment, and you can do this all in one file, versioned and stored in the repository that it pertains to, using the .gitlab-ci.yml file.
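As a rough sketch, a minimal .gitlab-ci.yml following the stage/job anatomy described here might look like this (job names and scripts are illustrative, not from the slides):

```yaml
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - npm install        # produce the packaged code

test-job:
  stage: test
  script:
    - npm test           # jobs in the same stage run in parallel

deploy-job:
  stage: deploy
  script:
    - ./deploy.sh        # ship the packaged code to an environment
```

Each top-level key that isn't a reserved word (like `stages`) defines a job, and `stage` assigns it to one of the declared stages.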
B
The last piece is the GitLab runner. We won't go deep into the runner today; this session is more on how to configure pipelines. But the runner is the infrastructure that these pipelines run on, so it executes everything you see on the left side. You have your GitLab server, which can have as many runners as you like, and you can even use your laptop, but that means members of the project will be using your laptop as a runner.
B
And speaking of that, let's start to cover some of the major architectures that we support and recommend, starting with the basic. Then we're going to build on that with directed acyclic graphs, then parent-child pipelines, dynamic child pipelines, and finally we're going to talk about multi-project pipelines.
B
So in this example, you can see that I've got a mobile application that I deploy on both Android and iOS. These are independent environments, so my build is only dependent on the technology that it's running on. If I have an Android build and I want to start testing it, I don't need to wait for the iOS build to complete, and you can see a snippet of the YAML code that it would use.
B
So we use the needs keyword in order to move on to the next stage without finishing all of the jobs. That's how you start to create a DAG pipeline. And then, as we go, I wanted to share some really practical tips. If you've used GitLab CI at all in the past, you'll know that we use YAML files to manage pipelines, and complexity itself is something to certainly keep in mind.
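A minimal sketch of the Android/iOS DAG pattern described above, using `needs` so each test job starts as soon as its own build finishes (job names are illustrative):

```yaml
stages:
  - build
  - test

build-android:
  stage: build
  script: echo "building android"

build-ios:
  stage: build
  script: echo "building ios"

test-android:
  stage: test
  needs: ["build-android"]   # starts as soon as build-android succeeds,
  script: echo "testing android"   # without waiting for build-ios

test-ios:
  stage: test
  needs: ["build-ios"]
  script: echo "testing ios"
```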
B
If you do have a very complex, very interdependent CI pipeline, that complexity kills DevOps, so the more you can simplify, the better. The parallel jobs really help in this way, because the pipeline is going to run in the most efficient way possible within that stage. If you are inside of these stages and have parallel jobs, the more you can maximize that stage, the more performance starts to be eked out of these pipelines. So, pretty simple concepts that most people understand.
B
This feature allows you to call other YAML files from within the same project, and that solves issues like the stage structure of a pipeline, where all jobs in a stage must be completed before the first job in the next stage begins, which causes arbitrary waits, and that slows things down.
B
You can also configure a single global pipeline, and that can get very long and complicated; you may start to have a .gitlab-ci.yml that is a thousand lines long, and that's really tough to maintain and to read. Or you may be facing issues with imports, where you're using includes that start to increase the complexity of the configuration. And then the pipeline UX can become unwieldy if you just have one kind of mono job and mono .gitlab-ci.yml repo; that is just hard to read.
B
So parent-child pipelines are a very useful feature for running non-dependent, long-running jobs like a code scan, or for building and deploying your front-end and back-end services separately.
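A sketch of a parent pipeline that triggers separate child pipelines for front-end and back-end, as described above (file paths are illustrative):

```yaml
# parent .gitlab-ci.yml
frontend:
  trigger:
    include: frontend/.gitlab-ci.yml   # child pipeline config in this project
    strategy: depend                   # parent job mirrors the child's status

backend:
  trigger:
    include: backend/.gitlab-ci.yml
```

Each `trigger` job launches its child pipeline immediately, so the two halves run independently rather than waiting on a shared stage boundary.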
B
The other rules around it that you see here help with efficiency. Using the only and changes keywords, you can designate that job to run only if certain files change. That can be super helpful for speeding up pipelines; you don't have to run it across the whole project. You can imagine, if you have a Dockerfile or some kind of Kubernetes manifest where you need infrastructure-as-code scanning, you may be able to skip that stage if there were no changes made to those files. And then the strategy setting can also shape your pipeline, because it creates a rule that makes the trigger job dependent on the child pipeline's status.
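The "only run when certain files change" idea can be sketched with the newer `rules:changes` syntax (the job name and paths are illustrative):

```yaml
iac-scan:
  stage: test
  script: echo "scanning infrastructure-as-code files"
  rules:
    - changes:                 # run only when these paths changed
        - Dockerfile
        - manifests/**/*.yaml
```

If nothing under those paths changed in the commit, the job is simply omitted from the pipeline, which is where the speed-up comes from.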
B
So this technique can be very powerful for generating pipelines targeting content that changed, or for building a matrix of targets and architectures. In this script you can see that we are generating a new test.gitlab-ci.yml file with unit-test and integration-test stages, as well as what those jobs are going to do within those stages.
B
Pretty simple; it's just saying echo "replace me", just for example purposes. But there's a setup stage in my main .gitlab-ci.yml here, and that setup is going to generate a dynamic file that I then use in the subsequent stages. So I reference it later to actually run the pipeline.
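A sketch of the dynamic child pipeline shape being described: a setup job writes a test.gitlab-ci.yml, saves it as an artifact, and a trigger job runs it (the generated script is a placeholder, as in the talk):

```yaml
stages:
  - setup
  - test

generate-config:
  stage: setup
  script:
    - echo 'unit-test:'                 >  test.gitlab-ci.yml
    - echo '  script: echo replace-me'  >> test.gitlab-ci.yml
  artifacts:
    paths:
      - test.gitlab-ci.yml   # hand the generated file to the trigger job

run-generated:
  stage: test
  trigger:
    include:
      - artifact: test.gitlab-ci.yml   # run the file built in setup
        job: generate-config
```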
B
So it's a single project that manages the build and deploy of multiple other apps. And then the last is something like a versioning situation, where the main project passes a version number to the downstream project. That one's flexible; it could really be any variable, not just the version. But those are sort of the three use cases that I would map onto this multi-project pipelines architecture.
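The versioning use case just mentioned can be sketched as a cross-project trigger that passes a variable downstream (the project path and variable are hypothetical):

```yaml
deploy-downstream:
  variables:
    VERSION: "1.2.3"            # forwarded to the downstream pipeline
  trigger:
    project: my-group/deployment-project   # illustrative path
    branch: main
```

Variables declared on the trigger job become available in the downstream project's pipeline, so it could just as easily be an environment name or feature flag instead of a version.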
B
So, starting with defining them in your CI configuration: this is how you would define a variable like BUILD_PROCESS and then call it later down in a script. Pretty simple and straightforward; that's a custom variable.
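A minimal sketch of defining a custom variable and referencing it in a script, along the lines described (the variable name echoes the example in the talk):

```yaml
variables:
  BUILD_PROCESS: "release"   # custom variable, visible to all jobs

build:
  script:
    - echo "Running a $BUILD_PROCESS build"   # expands at job runtime
```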
B
And then another option: you can actually set these when you run your pipeline; you can run it with variables. A practical example that I've used this for is during a hackathon.
B
Here's another slide on variables, just going into the details of the precedence of those variables from highest to lowest, but let's just look at a quick scenario. You have API_TOKEN equal to "secure" as a project variable, and then you also have it defined in your .gitlab-ci.yml. So which value wins? The API_TOKEN is going to take the value "secure", as project variables take precedence over those defined in your .gitlab-ci.yml. So it's something to keep in mind, if you have conflicting definitions, that there is a hierarchy here.
B
Okay, we're going to keep moving on. So we have our variables, and now it's time to talk about rules, which really help in defining when jobs run and how the pipeline runs, in the order of those operations; very flexible. But I do want to start with when these pipelines are being kicked off, and there are a lot of different ways of triggering a pipeline to run. It can be through a new commit, branches, or a new tag.
B
It could be through the manual UI, where you kick it off, or through an API call. You can also schedule jobs to run at certain times of the day. The variable for this setting is CI_PIPELINE_SOURCE. So, within your YAML, you can use CI_PIPELINE_SOURCE to determine how you want to kick off your pipeline, and there are a bunch of different options; you can see just a few of them right there.
B
Now, to generate a rule, there's some syntax to follow. You would first start by creating a new rules block underneath your job. As written, this job will only run if the pipeline is kicked off via the web form; that's what it's saying with "if the CI_PIPELINE_SOURCE is web". And then if statements can reference variables, including the predefined ones or custom ones; in this case it was a predefined one.
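The rule being described can be sketched like this (the job name is illustrative):

```yaml
manual-only-job:
  script: echo "kicked off from the Run pipeline web form"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "web"'   # predefined variable; job exists
                                           # only for web-initiated pipelines
```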
B
So I threw in some tips and tricks for speeding up complex pipelines early on; let's take a look at some other ways, now that I've covered rules and variables. The first one I'm leaving broad: run rules. These can come in many flavors, but right now I've been really enjoying the file-based ones, so that for files that don't change, you don't have to run that job that I mentioned earlier, or if you only wanted it to run on certain branches.
B
The other is caching. You can configure your runner caching and use it for existing build items or artifacts that are built a lot, so that you don't have to rebuild them every single time you run your pipeline. Tags are especially useful, as you can ensure the correct runner is being utilized for the right deployment.
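The caching idea can be sketched as a per-branch cache of dependency directories (paths and key are illustrative):

```yaml
build:
  script: npm ci
  cache:
    key: "$CI_COMMIT_REF_SLUG"   # one cache per branch
    paths:
      - node_modules/            # reused across pipeline runs instead of rebuilt
```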
B
Parallel testing is another small feature that I wanted to bring up; probably self-explanatory, but you can choose how many tests run in parallel to potentially speed that job up. And then the before_script: if you have a lot of container preparation built up in a before_script, it might be a sign that you need to convert that section into a Dockerfile in a new repo and have your own build container. Configuring that can actually dramatically accelerate builds where there are a lot of dependencies layered on before running your build code.
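The parallel testing feature can be sketched like this; the test runner flags are hypothetical, but `CI_NODE_INDEX` and `CI_NODE_TOTAL` are the predefined variables each copy receives:

```yaml
test:
  parallel: 5   # run five copies of this job concurrently
  script:
    # split the suite across copies using the predefined shard variables
    - ./run-tests --shard "$CI_NODE_INDEX" --total "$CI_NODE_TOTAL"
```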
B
So, let's talk about a couple of example rules here, now that we've covered the structure of them. This first one is for when we want our pipeline to kick off when the pipeline source is not a merge request event. So this job will not run if it was triggered from a merge request, and it won't run if it was triggered via a schedule.
B
Sorry; say I wanted to delay my docker build script from running. You can do this; it's really just to illustrate that delay feature. So I can say that it's going to start in three hours, and allow failures on it, so that it doesn't have to stop the rest of my pipeline.
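The delayed job being described can be sketched like this (job name and script are illustrative):

```yaml
docker-build:
  script: docker build -t my-image .
  when: delayed
  start_in: 3 hours      # job sits in the pipeline and starts three hours later
  allow_failure: true    # a failure here won't block the rest of the pipeline
```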
B
So, workflow rules. Workflow rules control more than just a single job. We've mainly been talking about job logic, but workflow rules allow you to control when the entire pipeline will run, and these sit outside of job definitions.
B
Here we see that if the commit message contains "-wip", then it won't run the pipeline, and if a tag was applied, then it also won't run; otherwise it will. This is great because GitLab already has some functionality where, if you have a merge request in draft or WIP, it won't run, but you can further define this with other conditions, so the whole pipeline won't be triggered until you're ready for it to be.
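The workflow rules just described can be sketched as a top-level block, outside any job:

```yaml
workflow:
  rules:
    - if: '$CI_COMMIT_MESSAGE =~ /-wip/'   # commit message contains "-wip"
      when: never                          # skip the whole pipeline
    - if: '$CI_COMMIT_TAG'                 # a tag was pushed
      when: never
    - when: always                         # otherwise, run the pipeline
```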
B
Now, I did want to talk about artifacts, because a lot of times in these pipelines you're building artifacts, you're building files. So a build and publish stage will generate files that you'll use for deploying your application or viewing results of tests.
B
So we're going to talk about managing those artifacts for a minute. GitLab allows for saving artifacts in local or object storage; you can then use them in subsequent jobs. You can use the rules logic of exclude, depends, and when to control what is added, and to determine whether an artifact is stored or not.
B
You can also set expiration times, so that you can ensure a stale artifact isn't living on for weeks or months, and artifacts follow the role-based access controls that GitLab comes with. What we're highlighting here is that the guest, reporter, developer, maintainer, and owner roles are able to download and browse job artifacts. You can look further into that if it's a topic of interest for you.
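An artifacts block covering the controls just mentioned might be sketched like this (paths are illustrative):

```yaml
build:
  script: npm run build
  artifacts:
    paths:
      - dist/              # files to keep from this job
    exclude:
      - dist/**/*.map      # keep the build, drop the source maps
    expire_in: 1 week      # stale artifacts are cleaned up automatically
    when: on_success       # only store artifacts if the job passed
```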
B
Sorry, my headphone automatically cuts off if there's no audio coming in; sorry about that. What I was saying is that all artifacts from all previous stages are passed, but you can use the dependencies parameter to define a limited list of jobs, or no jobs, to fetch artifacts from. To use this feature, define dependencies in the context of the job and pass a list of all previous jobs from which artifacts should be downloaded.
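The dependencies parameter just described can be sketched like this (job names are illustrative):

```yaml
deploy:
  stage: deploy
  dependencies:
    - build          # fetch artifacts only from the build job
  script: ./deploy.sh dist/

lint:
  stage: deploy
  dependencies: []   # empty list: fetch no artifacts at all
  script: echo "no artifacts needed here"
```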
B
So, includes and extends. This is a really good concept around templatizing and reusing configuration: you can define things once and start to reuse them across your company, so that's really powerful. The include statement is how you bring external YAML files into your GitLab CI configuration. It's helpful because it allows you to abstract common components and improve readability.
B
It can save you time if you utilize other templates. GitLab itself comes with many templates as part of your install, so things like SAST you can just use right out of the box. Secret detection: you can include that template, which already has the secret detection job included, and it saves just a ton of time. There are other ways too. If you wanted to include a file from your local project repository, one that was just in the same project but that you were abstracting out, you can use the local method.
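The two include methods mentioned, a built-in template and a file from the same project, can be sketched together (the local path is illustrative):

```yaml
include:
  - template: Security/Secret-Detection.gitlab-ci.yml   # ships with GitLab
  - local: /ci/common.yml                               # file in this project
```

The template line pulls in the ready-made secret detection job; the local line pulls in configuration you've abstracted out of the main file.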
B
Extends. So extends is a way to merge and reuse as well, kind of like a full include, but it's more or less to save you effort from copy-pasting and to clean up the code within that YAML file. It's similar to YAML anchors, if you're familiar with that concept, but it's a little bit more flexible and readable. So I wanted to touch on this real quick: it'll allow you to enhance and reuse configuration sections.
B
If you know about those YAML anchors, you know they're used to duplicate or inherit content across your YAML file, but they're only valid in the file they were defined in. That's where extends comes in. You can inherit up to 11 levels, though we recommend no more than three, and what it does is merge the configuration. So we've got this example that you see here, where we have .tests, which is the template job, and then the rspec job. We want to merge these, taking what we have in .tests and adding on to it with this environment variable and that script, so we extend .tests. If we look over on the right-hand side, you can see that it brought in that only: refs: branches logic and the stage: test logic, and then merged it with rspec.
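A sketch of the .tests/rspec merge being described (the variable and script are illustrative; the only/stage keys match the slide):

```yaml
.tests:              # hidden template job (leading dot: never runs on its own)
  stage: test
  only:
    refs:
      - branches

rspec:
  extends: .tests    # inherits stage: test and the only: refs: branches logic
  variables:
    RSPEC_SUITE: unit
  script: bundle exec rspec
```

The merged result is the rspec job with all four keys, exactly as if you had copy-pasted the template's contents into it.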
B
Instead of having to copy-paste and complicate your .gitlab-ci.yml, you can create a job that extends that dot-template job with its script, and just say what image to run it on. So instead of me having to retype everything that's over in the template file, which could probably be a huge block, I can just extend it, and I'm just determining which image I want it to run on. It's a simple example, but you can start to understand how powerful that feature is.
A
Thanks, Conley. Yeah, I'm going to launch that poll that you just mentioned now; just a little bit of feedback we'd love to get from you all, just a couple of quick questions. If you can take a minute to answer that, we'd appreciate it. And from there we will jump to some questions with Conley. Conley, if you want to just open the Q&A, there are a couple in there, and you can take a stab at a couple of them in the time we have left.
B
All right, that sounds good. So let's see: is there any that we haven't answered yet that we want to start with? I'm just now looking at this, Taylor.
B
Okay, all right. So it looks like there's a question about the parent-child paradigm, in which the parent is generating the child .gitlab-ci.yml file: is that still the best way to run security and compliance scans in a monorepo? Okay, so, parent-child paradigm generating the child, yeah. So this is helpful because you want to abstract.
B
You know, a lot of the stuff like security scans should run on all of the changes, right? So I would say that the parent's .gitlab-ci.yml file should definitely have those basics around SAST and secret detection. These are easy ones that come with both our standard and Premium tiers.
B
You don't have to have Ultimate to get all of those features. So there are some definite table stakes, some 101s, that you want to make sure are in your parent YAML file. And then for the child: if it's a monorepo, then you can start to specify the nuances, maybe a different tech stack. If it's a monorepo, one component may be in a different language than the other, so that's when you would start to do, say, JUnit.
B
If it's a Java app, right. So that would be my recommendation; I'm sure there are even better ones on the line, but if I understand your question, that's how I'd go about it.
B
I mean, you can always manage your projects through your repository settings. I don't know if there's anything else to that question, or any other nuances there.
B
Okay, Darren Iyers is asking: how do you manage versions of the app when you have develop, release, and master, and of course you may get hotfixes on release or master, assuming the artifacts use branch references? Yeah, so there are definitely different branching strategies that help you manage these versions, but we also have the artifacts. As you publish and name those artifacts, you have to scope them and version them, so you can go back, and you can also look at those historical artifacts.
B
An anonymous attendee asks: if I define a global variable in a template and include the template in my main .gitlab-ci.yml file, can it be overridden in the main YAML file? Yeah, so if you've got a global variable in a template, there is that sort of hierarchy that I can go back to, just to show as an example.
B
What you're asking for there is a global variable in a template, and so, starting with the project level, which would inherit first; group levels would be second. So it could be overridden with a project-level variable in that case.
B
So one other question I see here is: is it possible to detect a change on some database table and react to it, maybe with hooks? Yeah, it's a place I haven't dabbled much into, with regards to versioning tables. But if it's defined as code, then you've got a certain file that you can watch for, right? So if it's a .sql file, you're saying: is there a change here or is there not?
B
And then Jim Robbins did point out something: some of these slides use the only and except rules, and those have been deprecated in favor of rules, the construct that is much more extensible. So, yeah, we do have a linter within the UI that you can look in and use to make sure that your YAML is correct.
B
It checks that all the syntax is correct, and it'll look at that functionality, so it would catch if you're using an outdated rule like the onlys and excepts. So there's a linter within the UI that I would use there. So yeah, good call.
B
Yes, so you can specify different values for variables. You would extend the parts that you want, and then, if you were to override a variable within the job, it would take precedence: whatever value you give that variable within that job, after you extend the functionality you wanted, takes precedence there.
B
Okay, so: is there a way to configure GitLab SAST analyzers to only scan changed files on merge requests? So this is good functionality within MRs: you can configure it so that it's only scanning the diffs and the changed files on that MR. That is great functionality. So yes, you can do that, and that way you're not getting all of that history with a project that already has known vulnerabilities.
B
More than likely they may just be, like, a low or medium severity. So that way, you're literally just seeing, as a developer, what's changed for you: what's in my control to go back and fix. There's actually, I think, a security training, or, you know, a remediation, that we have now. So it's one thing to show you where the issue is, but we've gone the next step with some of our scanners to actually give the recommended remediation, as well as why that's an issue.
B
So it's more of being trained up right there in the moment, and you can choose to read that or not, and it also gives you the fix. So this is an area that we've invested in for really multiple years. We take security super seriously, and it's really all about that shifting left: finding something quickly and remediating it immediately, rather than leaving it for a different team. The further right it goes, the more expensive that change, fixing that vulnerability, becomes.
B
So we are always looking for ways to educate and identify, and then we have dashboards as well, so that you can see all of it. It takes stock of all of your current vulnerabilities, and that can be done by a different persona than a developer, someone who can look across the whole project and get a good assessment of where they're at.
B
So, Scott Wright asks: what's a good strategy to capture permanent results from jobs, i.e. Kubernetes config files or cluster names, etc., generated by a Terraform pipeline, permanently, and then use the reference to that new permanent artifact in subsequent pipelines? External secrets, commit back to a repo, or others?
B
You know, just one thing that comes to my mind is using our container registry, storing those results as an artifact. But you can also click into the pipeline to see those results and, with the logic of your choice, store those permanently in your local or object storage. But that's certainly not the only way of going about it.
B
So we do have e-learning trainings. We do both the professional services, where it's live, but then we also do have some self-paced training with some certifications that you can get if you want to. We do have a lab environment in those; they're paid trainings, but they're self-paced, and they're quicker than having ProServ come in and do something custom.
B
So, yes, those do exist. A lot of us have taken that training for CI/CD, and then we're also testing out more advanced versions of that, which are going to be coming out at some point. As a side note, those are really awesome labs, very similar to the Qwiklabs that you can use for training on AWS: it's an ephemeral GitLab environment, and we have graders as well as some automation that checks your work, so it's not just a test.
B
That's a good question, and I don't know the exact syntax, but that is definitely an option, where it can rebuild with a retry, and you can set the number of limits so that it doesn't keep looping around.
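For reference, the retry behavior being described can be sketched like this (the job name is illustrative):

```yaml
flaky-test:
  script: ./run-integration-tests.sh
  retry:
    max: 2                        # retried at most twice, so it can't loop forever
    when: runner_system_failure   # optionally retry only for specific failure kinds
```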
B
And I see one from Patrick, a question about using IaC in GitLab CI/CD: what's the best way to deploy similar Terraform code to multiple environments? So you have three dev environments, three test environments, and three different prod environments: what's the best way to construct the CI/CD pipelines, differentiate between the environments, and make it easy to merge changes in? Yeah.
B
It's a good question. I mean, we do have a lot of focus on IaC now, including some scanning that we do using KICS, and that ensures that there's quality around the Terraform code. And then you can set up stages within your pipeline so that it hits those environments that you have listed.
B
All right, I think that's it. I'm seeing a few duplicates, so I think I hit the ones that I saw there, Taylor.
A
Awesome, yeah, I think you got through most of those too; good work. We appreciate everybody joining us today. You can look for future sessions like this in the coming weeks and months. Once again, I appreciate you taking some time out of your busy day, and I hope you found this helpful.