From YouTube: Development of Continuous Vulnerability Scans - Overview
Description
This is the first video of a series on the development of Continuous Vulnerability Scans.
You can read more on the corresponding epic: https://gitlab.com/groups/gitlab-org/-/epics/7886
Hi, this is Olivia Gonzalez. I am the engineering manager for the Composition Analysis group in the Secure stage, and today we'll talk about Continuous Vulnerability Scans. This is a very important feature for both the Container Scanning and Dependency Scanning feature categories, and it also goes beyond that, with some potential to unlock further improvements in the Dependency Management feature category and in other aspects of software supply chain security.
So we are very eager to move forward on that. Through this presentation, or rather this series of short videos, I will explain how we are doing things today, how Continuous Vulnerability Scans will compare to that, and what our intent is for the implementation, because there are some technically very challenging aspects. We'll have some discussion around that, and also set some expectations in terms of planning: when we expect to deliver this, or at least aim to deliver it. Hopefully that will address all of your questions.
If you have any follow-up questions after this presentation, feel free to reach out to us. You can contact us directly by commenting on the epic, or you can reach out on the Slack channel if you have access to it. As a first step, here is a brief glossary of a few acronyms that you will encounter a lot: CS for Container Scanning, DS for Dependency Scanning, and CVS for Continuous Vulnerability Scans. Those are such long names that it's much faster to use the acronyms.
So let's talk about how things work today. Basically, Container Scanning and Dependency Scanning work in a very similar fashion: on one side, they take the list of components in your project, either from the repository, or from the Docker image for Container Scanning; on the other side, they use a database of known vulnerabilities for such components. The scan itself can be roughly simplified into a matching logic that takes these two data sets and tries to figure out the join between the two.
Usually, an advisory gives the details about a given package and lists the affected versions, or at least the range of impacted versions, for that package. On the other side, the SBOM or the dependency list tells us which versions of which components you are using in your project. For each of those, we verify whether that specific version is within the range, or the list, of affected versions for each advisory.
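The join I just described can be sketched roughly as follows. This is only an illustrative Python sketch, not the actual analyzer code, and the data shapes (`affected_from`/`fixed_in` fields, simple dotted versions) are assumptions made for the example.

```python
# Illustrative sketch of the component/advisory matching logic: join the
# SBOM component list against advisories by package name, then check
# whether the installed version falls inside the affected range.
# The field names here are hypothetical, not GitLab's actual schema.

def parse_version(v):
    """Turn '1.2.3' into a comparable tuple (1, 2, 3)."""
    return tuple(int(part) for part in v.split("."))

def is_affected(version, affected_from, fixed_in):
    """True if affected_from <= version < fixed_in."""
    v = parse_version(version)
    return parse_version(affected_from) <= v < parse_version(fixed_in)

def match(sbom_components, advisories):
    """Return (component, advisory) pairs for vulnerable versions."""
    by_package = {}
    for adv in advisories:
        by_package.setdefault(adv["package"], []).append(adv)
    findings = []
    for comp in sbom_components:
        for adv in by_package.get(comp["name"], []):
            if is_affected(comp["version"], adv["affected_from"], adv["fixed_in"]):
                findings.append((comp, adv))
    return findings

sbom = [{"name": "rails", "version": "6.1.4"},
        {"name": "rake", "version": "13.0.6"}]
advisories = [{"package": "rails", "affected_from": "6.0.0",
               "fixed_in": "6.1.5", "id": "EXAMPLE-1"}]
print(match(sbom, advisories))  # one finding: rails 6.1.4
```

In practice the ranges are expressed as constraint strings per package ecosystem, so the real comparison is more involved, but the overall shape of the join is the same.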
This approach currently relies heavily on the CI job to do the matching logic and the report generation. This means it is really tied to having a pipeline run, and as we've seen, there can be events outside of the pipeline lifecycle that generate new vulnerabilities for your project. A good example is a project that is rather stale, or doesn't have a lot of changes: no new pipeline will be triggered for it, but new advisories could be disclosed for existing components in your project, and you won't know about them.
So what needs to be done to achieve that? Because it is exciting, but there is a lot of work. Here you can see a simple diagram highlighting the big parts that need to be achieved, for both Dependency Scanning and Container Scanning. We will try to reuse and mutualize as much as possible between the two, but this also means there is a sequential aspect to the development of these parts.
We also have a color code here to highlight our level of confidence in addressing those tasks, which is mainly defined by the level of uncertainty, complexity, risk, or velocity associated with each of those parts.
So let's have a look; maybe that will also answer a lot of questions around how we can achieve this. As I mentioned earlier, we need two things: the SBOMs and the advisories. Regarding storing these, the database part is already fairly advanced: the working group that previously worked on that part has done 99% of the job, I'd say. There are a few adjustments that need to be made to fulfill our needs, but it's already pretty much done, and this is relatively straightforward; this is something we know well.
The generation of the SBOM is already done in the CI job today, so there is not much unknown or left to be done there. The advisories are something we also manage ourselves for Dependency Scanning, so we have a fair amount of knowledge around that. The data is self-contained in a Git repository, so ingesting that information should not be a big problem.
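Ingesting an advisory database that lives as files in a repository checkout can be pictured like this. The directory layout and JSON fields below are hypothetical, purely to illustrate the idea; the real database has its own format.

```python
# Illustrative sketch: walk a repository checkout and load each advisory
# file into an in-memory index keyed by package name. The *.json layout
# and the "package"/"affected_range" fields are hypothetical.
import json
import tempfile
from pathlib import Path

def load_advisories(root):
    """Load every *.json advisory found under root, keyed by package."""
    advisories = {}
    for path in Path(root).rglob("*.json"):
        adv = json.loads(path.read_text())
        advisories.setdefault(adv["package"], []).append(adv)
    return advisories

# Demo with a throwaway directory standing in for a repo checkout.
with tempfile.TemporaryDirectory() as repo:
    pkg_dir = Path(repo) / "gem" / "rails"
    pkg_dir.mkdir(parents=True)
    (pkg_dir / "EXAMPLE-1.json").write_text(
        json.dumps({"package": "rails", "affected_range": "<6.1.5"})
    )
    db = load_advisories(repo)
    print(sorted(db))  # ['rails']
```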
Once we have those two things together, we can start working on the component and advisory matching logic, which is what I mentioned earlier. There is an additional level of uncertainty here, because we are not sure yet where we want to implement it. We are already doing that work in the Dependency Scanning CI job.
A
So
it's
just
a
matter
of
moving
that
logic
somewhere
else,
and
this
is
a
proof
of
concept
that
would
tell
us
whether
we
go
with
moving
that
into
the
res
application,
or
we
rather
move
that
into
the
brand
new
infrastructure
that
we
developed
for
the
new
license
scanner.
This
will
be
further
explained
in
the
details,
but
for
this
overview,
I
will
just
stay
there.
The report ingestion is also mutualized between several other security reports, and this means we will have to extract the bits that are specific to Dependency Scanning and Container Scanning into another abstraction level, because we also want to maintain backward compatibility with the current approach for third-party integrators. So this is a very uncertain and high-risk part to achieve. Once everything is done, we will have our MVC.
This is also a critical point to understand: between the current approach and what Continuous Vulnerability Scans will deliver, there won't be a lot of visual changes for the customer. In the end, it's not about where we put that information, but how, where, and particularly when we achieve that scan. Both approaches will feed the existing vulnerability management system.
So whether we're running with the old approach or the new one, you will see the results in the vulnerability dashboard, for instance. This makes it very difficult for us to have a user-visible change at any point during the development process; it's very difficult to demonstrate just one piece from the user's side. This is a bit of a concern, because it doesn't really reflect our iteration value. We'll try to challenge that and maybe come up with a more iterative approach, but not until the CVS logic has been implemented.
We will definitely be able to demonstrate progress through some internal demonstrations, like making sure that we're able to ingest the database of advisories, or that we have the SBOM available, anything like that, but that's pretty minimal. And then, once this is done for Dependency Scanning, we have to repeat the same thing for Container Scanning. The good thing is that a lot of what will be done for Dependency Scanning will influence, or rather be reused, totally or partially, for Container Scanning.
This is something we already know how to achieve, I'd say, for most of it, so we're very confident about that part. On the other hand, the advisories for Container Scanning are much more complex: the data is spread among multiple databases, and this is currently mainly handled by our open source components.
So we need to get a better grasp of how this data is built and what it contains, because we will have to make sure we can manage and handle that data ourselves and feed it into the matching logic. And again, there might be a little level of uncertainty here, because the matching logic for operating system packages might be slightly different than for application packages. But once we have achieved that for Dependency Scanning, all the architecture and the framework to do it will already be in place.
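To illustrate why OS-package matching may differ from application-package matching: Debian-style versions carry an optional "epoch:" prefix and a "-revision" suffix, and the epoch dominates the comparison. The sketch below is a simplified model for illustration, not a full dpkg comparator and not GitLab's implementation.

```python
# Why version comparison needs scheme-specific logic: naive string
# comparison fails for dotted versions, and OS-package schemes like
# Debian's add an epoch that outranks the upstream version entirely.
# Simplified illustrative model, not a complete dpkg comparator.

def semver_key(v):
    """'1.10.0' -> (1, 10, 0); plain numeric dotted versions only."""
    return tuple(int(p) for p in v.split("."))

def deb_key(v):
    """Split 'epoch:upstream-revision' into a comparable tuple."""
    epoch, _, rest = v.rpartition(":")
    upstream, _, revision = rest.partition("-")
    return (int(epoch or 0), semver_key(upstream), revision)

# Lexicographic string comparison gets application versions wrong:
print("1.10.0" > "1.9.0")                    # False: strings compare char by char
print(semver_key("1.10.0") > semver_key("1.9.0"))  # True: numeric compare

# For OS packages, the epoch outranks the upstream version entirely:
print(deb_key("1:0.9") > deb_key("1.0"))     # True: epoch 1 beats epoch 0
```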
So it would just be a matter of applying the specific logic for Container Scanning packages. Again, there is not a lot of opportunity to demonstrate an iteration there, and this is also highly dependent on the progress made on Dependency Scanning. In the planning part, I will go further into the details about how we plan to address those different parts, and when we expect to start and deliver them, but this gives a good overview of the dependencies between all of those tasks.
So thank you for watching. There will be other videos coming with the details about what needs to be done, and some others about the planning. See you there, and thank you for watching.