Description
This is the last part of a series of demos about ingesting Dependency Scanning advisories. You can find the other demos here:
- Advisory Feeder - Part 1: https://www.youtube.com/watch?v=s1IhnVYYJXk&ab_channel=GitLabUnfiltered
- Advisory Processor - Part 2: https://www.youtube.com/watch?v=jpvxh2BNipA&ab_channel=GitLabUnfiltered
Hello everyone, my name is Nick Lieshko and I work as a senior backend engineer for the Secure Composition Analysis team. Today I would like to demonstrate the last part of a series of demos about continuous advisory ingestion, and in this particular demo I would like to show you our work on the last component, which is the Advisory Exporter. Before I begin, let me give a brief introduction of what you're going to see.
So, as you might know, the intention here is to ingest advisories from a variety of advisory sources and store them in a public GCP bucket. In this case we started with the Gemnasium DB, but in the future we are also going to use the 3vdb. So we ingest these advisories. This is a black box for now, but I promise to show more information about it in a minute. We store this in a public bucket in NDJSON format. It is a public bucket because it needs to be reachable by all the available GitLab instances.
So every GitLab instance is going to pull that data and sync it with its own PostgreSQL database. Maybe this ingestion format is a bit new for you, so let me talk a bit about it.
So until now we were only supporting this ingestion for licenses, and we were exporting it into a public bucket using CSV format. This is what we called format version V1. Now we have introduced format version V2, where we support ingestion both for licenses and advisories. Right now I'm only going to focus on the advisories part. So now, let's get more information about how this continuous advisory ingestion happens.
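The NDJSON layout used by the V2 format can be sketched as follows; the field names here are my own illustrative assumptions, not the actual GitLab schema:

```python
import json

# Hypothetical advisory records; the real schema is not shown in the demo.
advisories = [
    {"identifier": "CVE-2021-0001", "package_slug": "maven/com.example/lib"},
    {"identifier": "CVE-2021-0002", "package_slug": "maven/com.example/other"},
]

# NDJSON: one JSON object per line, so consumers can stream the file
# line by line instead of parsing one huge JSON array.
ndjson = "\n".join(json.dumps(a) for a in advisories)

# A GitLab instance pulling the bucket reads it back the same way.
parsed = [json.loads(line) for line in ndjson.splitlines()]
assert parsed == advisories
```

Each line is an independent JSON document, which is what makes the format convenient for incremental syncing.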
So you might remember from my previous demos the first and the second iteration, and this is the last piece, basically the third iteration. So briefly: in the first iteration we introduced the Advisory Feeder. The Advisory Feeder is triggered on a schedule, once per day.
What the Advisory Feeder does right now is that it clones the GitLab Advisory Database. It reads from a cursor, which is nothing more than a txt file stored in a GCP bucket. It reads the last commit that was processed so that it continues from there. It gets all the advisories and it publishes them on a Pub/Sub topic.
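The cursor mechanism described here, a txt file holding the last processed commit, could be sketched like this; for illustration the file lives on local disk, whereas the real feeder keeps it in a GCP bucket:

```python
import pathlib
import tempfile
from typing import Optional

def read_cursor(path: pathlib.Path) -> Optional[str]:
    """Return the last processed commit SHA, or None on a first run."""
    if path.exists():
        return path.read_text().strip() or None
    return None

def write_cursor(path: pathlib.Path, commit_sha: str) -> None:
    """Persist the last processed commit so the next run resumes there."""
    path.write_text(commit_sha + "\n")

# Simulate two scheduled runs against a throwaway local file.
cursor = pathlib.Path(tempfile.mkdtemp()) / "cursor.txt"
assert read_cursor(cursor) is None      # first run: start from the beginning
write_cursor(cursor, "abc123")
assert read_cursor(cursor) == "abc123"  # next run resumes from this commit
```

The point of the cursor is that each daily run only has to process the commits added since the previous run.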
It's interesting to mention here that the advisories public bucket is different than the licenses public bucket. That being said, I can already show you what we have in production for this. We just deployed it, so it's quite fresh.
So here we are in the deployment project. This is one of the projects that we have, and basically it is the glue that connects all these services. What you can see here is that I have an Advisory Feeder job for dev and prod, and an Advisories Exporter job for dev and prod respectively.
So these are jobs that are scheduled to be executed once per day, and they will run one for dev and one for prod. In the case of the feeder, it will basically send all the data to Pub/Sub so that the Advisory Processor can process it, and the exporter will read the data from the database. So here I can already show you that the feeder ran a couple of hours ago, and the result was basically this part: it cloned the whole advisory DB repo.
It's also interesting to see that this ran quite fast, in almost two minutes. Then we also have the exporter. Here it ran, and what it does is that for every registry, for Maven for instance, it found 5,000 advisories and stored them into this path in the GCP bucket. Then for npm, again, it found that many advisories and stored them in this path.
You see that we have here a V2 format version, and then you can see that we have all the directories for all the package managers that we support. For Maven, you see that we have a file of 6.2 megabytes, and it will basically contain the 5,000 advisories that were exported.
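The bucket layout described here, a format-version prefix with one directory per supported package manager, might look like the sketch below; the exact path scheme and file name are assumptions based on what is visible in the demo:

```python
# Illustrative object paths only; the real layout is not spelled out here.
def export_path(format_version: str, registry: str, filename: str) -> str:
    """Build a bucket object path like '<version>/<registry>/<file>'."""
    return f"{format_version}/{registry}/{filename}"

paths = [export_path("v2", registry, "advisories.ndjson")
         for registry in ("maven", "npm", "pypi")]
assert paths[0] == "v2/maven/advisories.ndjson"
```

Keeping the format version in the prefix lets V1 and V2 consumers coexist against the same bucket.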
So this is interesting, but what I would also like to show you is an actual demo of this. So what I'm going to do is run the feeder and then run the exporter in real time, so that you can see the result, and I'm going to run this on my own personal GCP project, so that we don't have to worry about the dev and prod environments.
So let me start by counting. On the bottom right corner I have a connection to the license DB database. So let me first count and see how many rows I have. I already have some data, and I would like to delete all the data from there, so: delete from.
So you see that the first log message says that it's cloning the GitLab Advisory Database locally, and basically what it does is that it detected 20,248 advisories. These advisories are being sent right now and processed by the Advisory Processor, which you don't see right now. But what I can already do is count the number of advisories, and you can already see that this is exactly the same number as this one. Now, if I go here, and this is my personal GCP project, you see again two public buckets, but we are interested in the advisories bucket.
So this is clean; there is nothing there. So now I'm going to run the exporter for advisories, and maybe this is also interesting: when we run the exporter, you see that we have two different CLI commands, one for licenses and one for advisories. So basically we use the same codebase for licenses and advisories.
So this is very nice, I think. So now let me run the exporter for the advisories.
So basically, what it will do is fetch all the advisories. In this specific example I'm running it only for the PyPI package manager, so I'm not running it for all of them. Basically what it does here is that it fetches all the advisories from the database using a cursor and sends them to the GCP bucket.
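The cursor-based fetch could be sketched as batched pagination; the 500-row batch size is an assumption based on the 500-increment progress counts shown at the end of the demo:

```python
def fetch_in_batches(rows, batch_size=500):
    """Yield successive batches, as a cursor-paginated DB query would."""
    cursor = 0
    while cursor < len(rows):
        batch = rows[cursor:cursor + batch_size]
        cursor += len(batch)
        yield batch

# 3,053 rows, matching the line count of the exported PyPI file.
rows = [f"advisory-{i}" for i in range(3053)]
batches = list(fetch_in_batches(rows))
assert [len(b) for b in batches] == [500] * 6 + [53]
assert sum(len(b) for b in batches) == 3053
```

Paginating this way keeps memory bounded regardless of how many advisories a registry has.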
But what I can actually do is put them here in a file, and you can see that we have 3,053 lines. If I go back here, you will see that we exported 500, then 1,000, 2,000, 3,000, 3,500. So it should be basically the same number if you add all these things up.