Description
DAST Project-level Scan Execution Policies (https://gitlab.com/groups/gitlab-org/-/epics/4598)
Spike: How to add a job that doesn't exist in .gitlab-ci.yml to a pipeline (https://gitlab.com/gitlab-org/gitlab/-/issues/280315)
Spike: How to run a scheduled pipeline with one security job (https://gitlab.com/gitlab-org/gitlab/-/issues/280314)
Spike: How can we fail a pipeline depending on conditions set in Scan Result Policy (https://gitlab.com/gitlab-org/gitlab/-/issues/280313)
Spike: How Gitlab configuration inheritance works (https://gitlab.com/gitlab-org/gitlab/-/issues/282420)
A
So I'm here with Matiay, and today we're going to talk about DAST project-level scan execution policies. It's a big epic; lots of the stuff doesn't exist yet, and there's no consistent architecture for it yet. So we're just going to walk through it, bring up any problem areas and, most importantly, the three spikes. I want to see if we've got the right questions and ask them. So let me share my screen. Cool. Do you want to drive, Matiay?
A
I'll do a quick overview, and then just feel free to interrupt. This is...
B
A
...mostly for you, so please don't feel like I'm presenting to anyone else. Cool, let's hope I'm sharing the right window. Looks like I am. So this is the epic.
A
So the first point is fairly simple: it's just about moving this tab here, which exists under Threat Monitoring, to its own menu item called Policies, and it'll live under there. That's pretty straightforward, and it's probably frontend-only, I dare say. What do you think?
B
A
Cool. Then the list that we get there will have another column, a column called Type, and the types would be "container runtime" or "scan schedule". Again, pretty simple. The container runtime ones are the existing ones, right? So these rules here, they are container runtime rules. Cool, so they already exist.
A
We need to create another type of rule, and that's the scan schedule. That might not be as simple as it sounds. And then this is where the gravy thickens: users will be able to view, create, blah blah blah, as described in the prototype. So if I jump over to the prototype and select "scan execution policy"... I think the prototype has got an old name, it's called a "scan schedule", so apologies!
A
I don't know, I don't see someone doing this, but yeah, we'll figure it out. So that's one part. The other one is: this means that you're going to run a scan on its own. That's going to be, you know, a job running. If we do it as a job, it means we'll need a pipeline, and maybe the pipeline will only have that job running. Then the other one is, okay...
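A single-scan pipeline like the one described above could be produced from a minimal generated configuration. This is only a sketch, assuming the existing DAST CI template; the site value is illustrative and would come from the selected profile.

```yaml
# Hypothetical CI configuration generated for a scheduled scan
# execution policy: the pipeline contains only the DAST job.
include:
  - template: DAST.gitlab-ci.yml

dast:
  variables:
    DAST_WEBSITE: "https://staging.example.com"  # illustrative; taken from the site profile
```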
A
Given somebody triggered a pipeline, and you can pick the branches where you want to enforce it, then you're going to enforce that the scan is going to run. So another way to see this is you forcing the .gitlab-ci.yml to have the template for the scanner, and then, for actions, "require scan to run". We're beginning with DAST; that's the scope for this epic, so this will probably be hard-coded. And then these things already exist, but I think this is called a profile.
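The branch-based enforcement just described might be expressed as a policy document along these lines. The field names here are assumptions based on the prototype discussed, not a final schema.

```yaml
# Illustrative scan execution policy shape; field names are assumptions.
scan_execution_policy:
  - name: Enforce DAST on protected branches
    enabled: true
    rules:
      - type: pipeline
        branches: [master, staging]   # only enforce on these branches
    actions:
      - scan: dast
        site_profile: Staging site     # existing DAST site profile
        scanner_profile: Quick scan    # existing DAST scanner profile
```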
A
B
So far, so good. Maybe the one question: we're starting with DAST, but we need to think about the other security jobs. I believe DAST was taken on purpose as the first one, because we already have on-demand DAST scans, and one could think, oh, maybe we could use that. But then what should we do with SAST, with the other types, right? So I believe the one concern I have is to not think only about DAST, but about the whole solution.
A
It's
a
good
point
and-
and
it's
it's
it's
sort
of
what
I
think
not
to
speak
for
jan
but
john
was
thinking
about
that
way.
Is
it
cool?
It's
that's
now,
but
what
happens
down
the
road
I
I
so
there's
got
to
be
a
balance
there
right.
B
A
And
I
think
the
trick
in
here
will
be
we'll
see
how
far
ahead
do
we
plan
do
we,
you
know
we
don't.
I
don't
know
if
we
want
to
find
out
the
nth
degree,
every
detail
for
every
scanner,
but
at
a
high
level.
What
I
would
expect
is.
A
So let's look at the two cases, right? If I'm looking at schedule-only, then that's more or less the on-demand scan that already exists.
B
A
B
A
So I guess what I'm looking at, to finish (the spike hasn't been completely refined), is: have a look at the DAST one, try to zoom out, and see, would that model fit the others? And if not, then we use the spike to answer some of these questions.
B
Yeah, definitely. I was thinking the main thing we need to do is to be able to replace, on the fly, the file that is used to schedule the pipeline, and then somehow mark it as a special pipeline, so it isn't taken into account when creating vulnerabilities, and it's not counted when statistics are computed, and so on. So we need to be able to see that pipeline in the CI pipeline view, but it should be somehow marked: this is not a regular pipeline.
A
Yeah, so detached pipelines use that UI. I don't know what the model supports in the backend, but there's probably something there that says: hey, this pipeline is detached. So maybe we can have the model support saying: hey, this pipeline is coming from a policy.
B
Yeah, exactly, okay. This is exactly what we need. A detached pipeline runs in the context of the merge request and not against the merged result. Which is great, which is what we need, because probably this is how it's being protected from doing any harm.
A
So
that
that's
is
there
anything
else
to
consider
on
the
schedule?
I
guess
I
guess
the
other
thing
to
consider
is
is
so
so
triggering
and
running,
but
but
where
do
we
store
the
shadow?
I
think
for
that
when
you
come
to
cicd.
A
Yes,
so
there'll
be
some
of
new
authorizations
there
for
for
roles.
I
don't
remember
seeing
I
I'm
sorry
the
opposite.
I
remember
seeing
something
about
rose,
but
I
maybe
seen
the
design
issue
we
can
always
ask
later.
So
if
you
don't
find
it,
that's.
B
A
B
...evaluate. I believe you can write them down later on, because it will be exactly what to consider: take a schedule, consider doing it on the fly, make sure that it's secured from the API and the draft-build perspective, and so on. So you just need to find answers for these questions. I'll update it.
A
Thank you, sir. So, moving on, the other one is the pipeline execution, and that one legitimately does not exist. That's the second spike here: how to add a job that does not exist in the .gitlab-ci.yml to a pipeline.
A
I believe we had a call earlier on where we touched on some of the options, with Derrick, Sam and Seth; there might have been someone else that I'm forgetting. But we talked through a few options, right? We talked about...
A
...editing the file, which I didn't like, because there are too many ways to trick the parser, too many ways to override the policy, basically, is what I'm thinking. What I mean by that is: anything that depends on a user-defined .gitlab-ci.yml could be tricked into not running, right? So say we append something to the object...
A
B
A
B
So yeah, but that's probably this option again: modify the CI YAML file on the fly when running the pipeline. Before we run it, we just modify it. We're not saving it, we're just modifying it on the fly and then using it to run the pipeline.
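As a sketch of what "modifying it on the fly" could mean in practice: the policy's template and stage are merged into the project's configuration just before pipeline creation, without ever being written back to the repository. The shapes below are illustrative, not an actual implementation.

```yaml
# Project's .gitlab-ci.yml as committed by the user:
stages: [test]
unit-tests:
  stage: test
  script: make test

# Effective configuration after the policy is merged in on the fly
# (hypothetical; never saved to the repository):
include:
  - template: DAST.gitlab-ci.yml
stages: [test, dast]
unit-tests:
  stage: test
  script: make test
```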
B
A
Or we'll see, yeah, exactly. I mean, that's the spike: let's figure out some ways of doing that. The second part of that is: if there are any configurations to this thing, how do they end up in the job? Is it just part of the database model? Is it something like what we just looked at for the on-demand scans?
B
I was thinking only about the research spike, so the whole security orchestration policy architecture, and I was thinking: okay, maybe instead of trying to save everything in the database, we could, you know, have a project that's similar to GitLab Managed Apps. We'd have a configuration project for a project, group or instance, and then we could store those YAML files there, and then you can easily configure it to be editable by some people, and so on. And then, because...
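Such a dedicated configuration project might hold the policies as plain YAML files under version control. The path, file name, and schema below are assumptions for illustration, not a final design.

```yaml
# .gitlab/security-policies/policy.yml in the dedicated policy project
# (path and schema are assumptions)
scan_execution_policy:
  - name: Scheduled DAST scan
    enabled: true
    rules:
      - type: schedule
        cadence: "0 2 * * *"   # nightly, cron syntax
        branches: [master]
    actions:
      - scan: dast
```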
A
B
Exactly, and that's the native way. You can store it somewhere, you can automate it if you want, because it's a simple Git repository, so you can modify it, and so on. There are so many things that are good in that solution. The only thing that I'm thinking could be bad is that we need to parse it somehow.
A
Yeah, I like that, because part of what's so complicated about this is not painting ourselves into a corner with a model for this framework, right? And it's a lot of work to do everything: we're talking database migrations, we're talking a bunch of new tables. So yeah, I quite like that idea, and it doesn't mean that we can't do it later, right? It can...
B
Just
be
the
mvc
yeah
for
and
the
other
thing
that
we
we
have
all
audit
options
by
default
because
we
already
have
all
comments.
You
know
who
did
what
you
have
whenever
you
would
like
to
do
a
change.
You
can
set
up
the
approval
process
and
you're
responsible
of
when
you're
responsible
for
a
repository
you
can
specify
who
can
merge
and
who
can
modify
and
who
can
create
a
mars
and
so
on.
A
I
really
like
that,
and
it's
slightly
different
to
the
idea
of
of
tweaking
the
gitlab
ci
from
the
project
that
is
being
scanned.
It's
like
it's
still
a
github,
ci
ammo
and
there's
still
a
policy
defined
and
still
be
pop,
but
it's
separate
right:
it's
not
yeah,
so
the
permissions
are
separate
yeah
I
like
that
cool.
So
that
could
be
an
option
there.
A
Any
anything
else
to
consider
here
around.
So
the
information
about
the
branches,
for
I
assume
there'll
be
there'll
need
to
be
some
glue
records
in
the
database
or
somewhere
right.
Some
metadata
saying:
hey,
go
look
for
for
the
stuff
in
here
and
by
the
way
you
only
need
to
do
that
if
it's
running
on
master
and
staging.
A
You can choose to fail the pipeline, and this is something that's been discussed in other groups. As you know, a security scan is a job that runs in the pipeline. It never fails: it writes the report, the report gets parsed, and regardless of whether a vulnerability is there or not, it succeeds. And if the job fails for any reason, it's allowed to fail.
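The non-blocking default described above boils down to the scan job emitting a report artifact and being allowed to fail. A simplified sketch of that shape, with an illustrative script line:

```yaml
# Simplified shape of a security scan job: it emits a report artifact
# and never blocks the pipeline on findings or scanner errors.
dast:
  stage: dast
  script:
    - /analyze                 # illustrative scanner entry point
  allow_failure: true          # scanner failures don't fail the pipeline
  artifacts:
    reports:
      dast: gl-dast-report.json
```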
A
That's
the
default
yeah,
so
in
in
some
customers
and
some
engineers
and
and
even
even
myself
before
I
joined
gitlab,
I
had
a
different
view
of
this.
I
had
a
I
had.
Some
projects
approaches
errors
right
there.
That
said,
you
know
what
I
I
want.
I
want
the
bill
to
fail,
so
I
might
be
doing.
For
example,
my
pipeline
might
be
doing
a
push
to
production
and
if
there's
vulnerabilities,
I
don't,
I
don't
want
it
to
to
continue.
A
So
I
think
that's
the
idea
behind
that.
The
other
thing
that
this
gives
us
is
I've.
I've
seen
requests
from
customers
asking
for
a
notification
when
vulnerabilities
are
found.
This
would
be
a
sort
of
a
hacky
way,
but
it
would
give
you
a
notification
because
the
pipeline
will
fail.
You
get
an
email
saying,
hey,
failed
because
of
vulnerability,
and
then
you
feel
to
buy
that
and
that's
an
easy
mvc.
B
Yeah, that will probably be the last spike we can work on right now, because that is number three, yeah. We need to first find the way to modify the GitLab CI in flight, because I'm imagining it being part of the security jobs: it's the last security job, and it will check the results of the previous security jobs and then it will fail or pass based on the rule. So we can...
B
Probably we just need to make sure that we can take the configuration from the policy, put it in the job, and the job can read it and decide what to do next.
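The enforcement job just described might look like the sketch below, assuming the policy's threshold reaches the job through a CI variable and the job inspects the scanners' report artifacts. The `check-report` command and variable name are hypothetical.

```yaml
# Hypothetical final-stage job that evaluates the scan result policy.
enforce-scan-result-policy:
  stage: .post                      # runs after the security scan stages
  needs:
    - job: dast
      artifacts: true               # pull in gl-dast-report.json
  variables:
    POLICY_MAX_SEVERITY: "high"     # assumed to be injected from the policy
  script:
    # fail the job (and the pipeline) if the report contains a
    # vulnerability at or above the configured severity
    - check-report --max-severity "$POLICY_MAX_SEVERITY" gl-dast-report.json
  allow_failure: false
```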
A
But
assuming
assuming
for
a
second
that
we
solved
that
problem
and
we
have
things
running,
you
understand
this
area.
Well,
so
you'll
be
able
to
correct
me
if
I'm
wrong
the
by
the
time
the
reports
are
being
passed
and
merged
and
ingested,
the
pipeline
has
already
either
succeeded
or
failed.
B
A
So this would actually need to flip a pipeline's status over to success afterwards, or you'd need maybe an extra job there that holds the pipeline open but doesn't block the reports from being made. It's like an "I'm done, but not really" state. You're following this?
B
Yeah
yeah,
so
all
vulnerabilities,
all
artifacts
after
secured
after
pipeline
is
finished,
will
only
be
executed
if
it's
default
branch
and
the
pipeline
succeeds
like
it's.
If
it's
green,
I
was
thinking
about
the
the
second
idea
that
you
gave
so
being
able
to
have
a
separate
job.
That
is
a
part
of
the
pipeline,
and
it
will
either
fail
or
not
depending
on
on
the
results,
however,
that
if
the
pipeline
will
fail,
then
we'll
not
see
any
any
vulnerabilities
created.
B
A
B
Okay, cool, then we're all set. Everything's great. I'll start with the first one that we've discussed, and we'll see from there.