From YouTube: Development of Continuous Vulnerability Scans - Planning breakdown and spikes outcomes.
Description
This video presents the outcomes of our spikes and planning breakdown for the Continuous Vulnerability Scans high-level epics.
Warning: in this video the planning view is a DRAFT and not reflective of our expectations for delivering the work.
A: Well, sorry, I'm searching for my words. This is a non-prepared presentation. Improvised, that was the word I was looking for: an improvised presentation on where we stand with continuous vulnerability scans. We're just finishing breaking down the two main epics, which are dependency scanning and container scanning.
A: A lot of work has been done by the team members to split that up into more actionable chunks and to better understand what needs to be done. I cannot make any announcement about how much time it will take, because a lot of these issues have not been refined yet, even though we know what to do. We have removed a lot of the uncertainties we had a month ago, because a lot of spike issues have been completed and we know which direction we want to take.
A: Sorry. So, I'm in the process of updating the slides to provide an updated timeline, but this will give you a small overview of what needs to be done now. Please don't pay too much attention to the timing. I was just laying the little pieces out there, but I am not considering any planning for now. So, yep, these are spread here and there, but this doesn't mean the work will be done in that milestone. That will come later.
A: So, we've completed the spikes a bit earlier than expected, which is great, so we know exactly what we want to do for storing the advisories, where we want to sync them, etc. We also moved forward on looking at the Rails code base to see what the impact would be, or at least where in the code base we are currently using the logic that we will have to replace for our continuous scans.
A: So, the very first piece. And the color code, sorry, it's not mentioned there, but we have blue for dependency scanning, green for container scanning, and the purple pieces are shared, entirely mutualized between the two. So, as you can see, we're pretty happy, because we can reuse a lot of things between the two features, which is awesome for maintenance.
A: So the first thing we will have to do is expand the current License DB external infra. We have this external infra with some components running there to gather license information from the various package registries. We store that into a database, and then we export it into a GCP bucket as CSV files, and this is what is then synced by the Rails applications.
A: So we will reuse that architecture, and within the same infra we'll spin up some new components, but we are really trying to replicate the same approach and the same architecture. We already have a license feeder, a license processor, a license exporter, etc., so we are reusing the same approach for advisories to make that consistent and easier to maintain. The first piece we want to add is an advisory feeder.
A: This piece is supposed to be mutualized, but the first brick contains the shared piece plus the advisory feeder for dependency scanning, which takes the Gemnasium git repository, that is, the GitLab Advisory Database, and feeds it into the License DB database. Once this is done, it can be extended, so this will be a single project. At least that's the plan so far, I need final confirmation on that, but we will then extend it to include an advisory feeder for some new feeds for container scanning.
A: There is currently an ongoing question. I think, Sarah, you didn't have time to have a look, but I asked you and Sam whether, for the MVC, we could maybe just focus on one single distro. That doesn't mean we won't have time to add the others, but at least for shipping the MVC we would only be required to have one working distribution. I think that's not a huge time saver in implementation, but in testing it can be. We've seen that when rolling out the license compliance scanner there were several languages to test.
A: At the same time, and any time one of them didn't work, the whole piece was blocked, and that was a bit of a shame. So here we would like to take a different approach where we are more granular in the way we roll things out, so we can ship the end-to-end solution for at least one distro and then incrementally add new ones.
A: So that's for fetching advisories from the outside external database into our License DB infra. Then we'll have to create an advisory processor, which basically takes what the feeder provides and sends that to the database. We also have to update the database with new tables to store these advisories. Once everything is there, there will be a new exporter that takes the advisory data from the database as is and exports it into the GCP bucket.
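The feeder, processor, and exporter stages just described could be sketched roughly as below. This is only an illustrative sketch: the function names mirror the spoken description, not the actual GitLab component interfaces, and the data shapes are made up.

```python
# Hypothetical sketch of the advisory pipeline: feeder pulls raw records
# from an external feed, processor normalizes them for the database, and
# exporter dumps the stored rows toward the GCP bucket for syncing.

def advisory_feeder(raw_feed):
    """Pull raw advisory records from an external feed (here, just a list)."""
    return list(raw_feed)

def advisory_processor(records):
    """Normalize feeder output into rows ready to insert into the database."""
    return [{"id": r["id"].upper(), "package": r["package"]} for r in records]

def advisory_exporter(rows):
    """Export the stored rows for the application to sync (here, sorted ids)."""
    return sorted(row["id"] for row in rows)

feed = [{"id": "cve-2024-1111", "package": "rack"}]
rows = advisory_processor(advisory_feeder(feed))
assert advisory_exporter(rows) == ["CVE-2024-1111"]
```

The point of keeping the same three-stage split as the existing license components is that each stage can be replaced or extended (new feeds, new tables, new export formats) without touching the others.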
A: One thing that came out of the spikes is that the CSV format is no longer usable for that purpose, so the team has decided to migrate over to NDJSON, which is newline-delimited JSON, if I'm saying that correctly. It has similar advantages to CSV, plus some additional ones, or fewer inconveniences, when it comes to extensibility, because you can have full JSON objects per row. Again, these are all shared pieces.
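To make the NDJSON point concrete: each line is a complete JSON object, so a record can carry nested fields (like a list of affected version ranges) that would not fit a flat CSV column, while the file can still be streamed line by line. A minimal sketch, with made-up field names:

```python
import json

# Serialize advisories as NDJSON: one complete JSON object per line.
def write_ndjson(advisories):
    return "\n".join(json.dumps(a, sort_keys=True) for a in advisories)

# Stream records back one line at a time, skipping blank lines.
def read_ndjson(text):
    return [json.loads(line) for line in text.splitlines() if line.strip()]

advisories = [
    # A nested "affected_versions" list would not fit a flat CSV column.
    {"id": "CVE-2023-0001", "package": "rails",
     "affected_versions": ["<6.1.7"]},
    {"id": "CVE-2023-0002", "package": "nokogiri",
     "affected_versions": ["<1.13.10", ">=1.14.0,<1.14.3"]},
]

blob = write_ndjson(advisories)
assert read_ndjson(blob) == advisories  # lossless round trip
```

Adding a new field later only means adding a key to the objects; existing readers that ignore unknown keys keep working, which is the extensibility advantage mentioned above.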
A: This will be a very generic approach, so whatever the type of advisories, whether it's for CS or DS, it will go through the same process, which is very nice for us. And then the last part would be to ingest this additional advisory data into the Rails database. We already have the table there, but it's not fed so far, so we'll have to expand the current sync process, or add a different sync worker, etc., to fetch that information and store it into the Rails models.
A: This is where we would like to have the incremental rollout, because we're not yet sure about potential performance issues like the ones we are currently experiencing with the License DB. So for this one we will be much more careful about how we roll things out in the Rails instance. There was another big block that was called the CVS logic. This is now broken down further into two main parts. As explained in the very first presentations, to achieve continuous scans,
A: ...we mainly need two sorts of information: the SBOMs on one side and the advisories on the other side, and the scan will be triggered by reacting to changes in those two data sources. So the first thing we will be doing is to trigger the scans on SBOM changes, because that's closest to what we do today. It's really similar, actually, because today what we do is: we trigger a pipeline, we run the scan in a pipeline, and then we ingest the results at the end of the pipeline. And reacting to SBOM
A: ...changes will follow the same process, because SBOMs are generated during a pipeline and then ingested at the end of it. So it's really close to the existing code base, and this is something we can achieve more easily, so we'll start with that. In the meantime, the Threat Insights team is working on updating the dependency list a bit, so there will be some coordination to happen there. So it's better for us to delay working on the database for SBOMs a bit, if I understood that correctly.
A: So that's why the second part here comes after, which is reacting to advisory changes. For this one we need to rely on having the SBOM in the database, because when we sync a new advisory, or update an advisory from the external License DB, we know which package is impacted, but we then have to find all the projects that use that package, or at least affected versions of that package. For that, we cannot rely on the artifacts.
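This is why the SBOM has to live in the database rather than in pipeline artifacts: an advisory update needs a reverse lookup from package to projects. A toy sketch of that lookup, with version matching deliberately simplified to an exact-version set (the real logic would evaluate version ranges), and all names hypothetical:

```python
# Hypothetical in-memory sketch: with SBOM components stored in the
# database, a new or updated advisory can be matched to projects
# without re-running any pipeline.
def affected_projects(advisory, sbom_rows):
    """advisory: {"package": str, "affected_versions": set of version strings}
    sbom_rows: iterable of (project, package, version) tuples."""
    return sorted({
        project
        for project, package, version in sbom_rows
        if package == advisory["package"]
        and version in advisory["affected_versions"]
    })

sbom_rows = [
    ("group/app-a", "lodash", "4.17.20"),
    ("group/app-b", "lodash", "4.17.21"),
    ("group/app-c", "left-pad", "1.3.0"),
]
advisory = {"package": "lodash", "affected_versions": {"4.17.20"}}
assert affected_projects(advisory, sbom_rows) == ["group/app-a"]
```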
A: And then the remaining pieces would be about generating the SBOM in the container scanning job. This I started earlier than expected, which is good. It's mostly finished for the first scanner, because container scanning has two scanners: one is Trivy, the other is Grype. So far we enabled that for Trivy, and there is a question for you, Sarah and Sam, about that.
A: Once we have that completed, we will have to make sure that the SBOM generated for container scanning is correctly ingested into the Rails database. There are some changes to happen there, but once this is completed, probably by the end of this milestone, the SBOM will be a brand new feature that we can announce independently, which is great, because customers can already leverage that.
A: So here it is. Hopefully this was understandable. Is there any question about what I've just presented to you?
C: Yes, I was just gonna add, about this timeline: maybe this is not the right granularity or level of variability for discussion, but one thing that's come out with our general deploy is that there are some growing pains around the data set, and I feel like that might either put some pieces ahead of some of the others, like push some of the things down in order to let us resolve it, or it might actually change the way we implement some of these.
A: Yeah, you're talking about an impact of the current performance issues we have with the License DB?
C: The size of these data sets is just not something that we can store, for various reasons, right? Like, for example, one thing we found out with the rollout is that not only should we be worried about self-hosted instances in general, but our process to sync is competing with the other workers.
C: The other thing we should be worried about is the fact that some of these self-hosted instances are actually disk limited. We're finding that the amount of disk space for some of these is actually not enough for the kind of storage that we want to do just with the license data. So I think that probably needs to be investigated, and if that's the case, we're either gonna have to make a decision about the lower bound of where this feature is available.
C: Like, do we make it available on a Raspberry Pi, or on an instance that has only four gigabytes or eight gigabytes of disk? And then, depending on that issue, that might change. So that's just kind of my thinking when I see the linear progression.
A: Yeah, indeed, thank you for calling it out. It would probably impact this last piece and not so much the rest, hopefully, but yeah, that's a good call out.
B: I wanted to add two additional points to what Igor just said, because right now, with the issue related to the performance of license scanning, it has been suggested, kind of as a long-term solution, to consider actually storing that data in a database and accessing it by API, if I understood him correctly. Like, I'm saying that, but I'm not really sure how it's supposed to work, and I feel this decision might also affect the...
C: Yeah, you're right, you're right, Olivia, in that on the external side most of the stuff will continue looking exactly the same, so that can proceed as it is, more or less, but some of the changes might be in the exporter, for example. But yeah, most of the stuff on that side will probably look exactly as we envisioned.
A: And to chime in and add to what you said: the API, I think, as the ideal long-term plan, is something we'll definitely discuss, but for after the MVC. I think that for the MVC, if we manage to fix the performance issue for the License DB, it should be acceptable also for advisories. At least that's the current plan. We'll see, but that's the hope we have. But definitely, for the long term,
A: ...there will be a different architecture to consider, because not only is this causing performance issues, but we're also doing a lot of useless consumption of data, of storage, of bandwidth and of CPU, which is costing customers, but also us, money for nothing.
A: We can do things in a much smarter way, and the API you started to suggest is a very good first step, because it would actually be an API coupled with some local cache. So the idea is: you need some data, you don't have it locally, so you go fetch it from the API, but the next time you need the information for the same packages, you have it locally.
A: There are some open questions, because this information needs to stay up to date, so there might be some polling, some stuff like that, but at least what you get in your local self-managed instance is a subset of what we have in the external License DB, and it's a subset that is useful to you. Still, that requires the License DB infra to have close to a 100% uptime SLA, and we're not there yet. This is another project that we could then hand over to the infra team.
A: But this is definitely one of the directions we could take to make that more sustainable.
A: No, I think that the solution that we went with makes perfect sense, and it will be acceptable for an MVC, particularly with the admin panel that allows customers to toggle that off. The remaining question would be: if we face a lot of problems with that, it might mean that, well, we just offer customers that are not happy with that solution the option to run their own license scanner, and offer them a way, maybe, to ingest that information.
A: You know, similarly to what we do today for dependency scanning: as we move to continuous vulnerability scans, we will no longer leverage the dependency scanning reports, but we will still maintain the feature so it stays available for third-party scanners, or customers that want to use their own scanner and feed that into the GitLab vulnerability management system. So, in a similar fashion, we can offer a way for people to use their own license scanner and feed that into the Rails application.
A: So far, I'd say that the direction we are heading toward is having this included in the SBOM. We had a few issues around that, but I think we moved them to post-MVC down the road. Still, we're making sure that, if the license is provided in the SBOM, we ingest that into our license information, and this is what we display in the UI.
A: Hopefully that will solve not everybody's problem, but most of our customers' problems.