From YouTube: Secure Section - Brown bag on the secure Data Model
Description
This video goes over the Secure Data Model objects as they appear in the database.
This was recorded Feb 25, 2020.
More information on the Secure Data Model can be found in the issue below, including links to supporting documents.
https://gitlab.com/gitlab-org/secure/brown-bag-sessions/issues/5
A: That is essentially the reference for any data that is available for Secure. So the goal of today's meeting is to go through parts of this model: explain what the fields are, why they're there, the different types of values that are available for them, and any limitations that we know about. There's also a document that we will probably loosely follow for our agenda. That's the intro; I'll hand it over to Bobby. Bobby, if you want to kind of walk us through the approach for today.
B: Yeah. You know, these slides, all these pages... sorry, yes, okay, oh yeah! So that's the issue for the brown bag session of today. As you said, it's going to be on the Secure data model, so that's the model we've got in the Rails backend, not to be confused with the JSON report, though of course they are related; I'll get back to that, I guess. We should start with this diagram that represents all the classes we've got in the Rails backend, all the models, but first I have to roll back a bit.
B: So the way it works is that at the beginning we've got a GitLab project, a repo for this GitLab project. Users push commits, and assuming the CI configuration has security scans configured, that triggers scanning jobs. These security scanning jobs will generate security reports: JSON reports following the Secure JSON syntax. And in the end, because of whatever happens in the backend, some of these objects get created. So, yeah. Oh, can you see my pointer? I guess you can? Yes, okay.
B: All these objects belong to a project, and they are created when security reports get parsed by the backend, whereas this one, the issue link, and this one, the feedback, are created by users. So we've got the models created by the parsers we have in the backend, and we've got these two created by users. Just to give you an overview of the models we've got.
B
Also
this
one's
a
mini
scan
was
introduced
recently,
it's
it's
not
totally
used
at
the
moment.
It's
it's
still
new
in
this
data
model,
all
that
to
say
that
it's
evolving
some
of
the
classes,
some
of
the
models
we
have
been
there
for
a
while
and
use
when
you
click
on
the
interface.
When
you
render
a
list
of
security
findings,
vanity
findings,
some
other
and
some
of
them
sorry
models
that
I
created,
but
not
fully
used,
not
at
the
moment,
yep.
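To make the relationships described so far concrete, here is a minimal sketch of the models as plain Ruby Structs. These are illustrative names and fields only, not the real GitLab Rails schema: the backend-parsed objects (scanner, identifier, occurrence) versus the user-created ones (feedback, issue link) all hang off a project.

```ruby
# Illustrative sketch of the Secure data model relationships (not the real
# Rails models). Scanners, identifiers, and occurrences are created by the
# report parsers; feedback and issue links are created by users.
Project    = Struct.new(:name, :occurrences, :feedback, :issue_links)
Scanner    = Struct.new(:external_id, :name)           # parsed from the JSON report
Identifier = Struct.new(:external_type, :external_id)  # e.g. a CVE or CWE reference
Occurrence = Struct.new(:scanner, :identifiers, :location) # "a finding"
Feedback   = Struct.new(:feedback_type, :author)       # created by a user
IssueLink  = Struct.new(:issue_id, :vulnerability_id)  # created by a user

project = Project.new("demo", [], [], [])
scanner = Scanner.new("gemnasium", "Gemnasium")
cve     = Identifier.new("cve", "CVE-2020-0001")
project.occurrences << Occurrence.new(scanner, [cve], { "file" => "Gemfile.lock" })
```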
B: So that's part of this. It all starts with the state machine that belongs to the pipeline model, and you can have pipelines for every branch. It doesn't matter whether it's the default branch, the master branch, or another branch. So whenever a branch has a pipeline...

B: The pipeline will collect reports and parse reports. I don't use the specific words for it, but maybe you can; I don't know, I should be precise here. But short answer: as long as you get a pipeline running, whatever the branch is, you'll have the security reports parsed by the backend. Does that make sense? Yeah.
B: Okay, so that would be the vulnerability model, and Seth, I can see you've got many questions on this model, but again, that's a new model. It belongs to the first-class vulnerability MVC in the backend, so I don't know if I should spend too much time on this one. Maybe I should start with the legacy models, so to speak; I mean the models that have been there for a while. Yes, [inaudible], you're right, thank you. I'll get back to the vulnerability feedback; same goes for the issue link, it's a new one. Oh, by the way.
B: I don't think so. Okay, then here I wasn't sure about that one; the model seems functional, but it's not enough to have a model, so yeah. So what we have here, except for this one, every model is currently used in the backend. We've got the... oops, sorry, we've got the occurrence. Where's that?
B: Anyway, I can open it here. Oh, maybe I should go back to the diagram. So the occurrence is a finding, and technically corresponds to a vulnerability in the JSON report. So whenever, well, actually that's not so simple, but as a first approximation: when there's a vulnerability in the vulnerabilities JSON array in the JSON report, that is translated to an occurrence object in the data model.
B: So this occurrence has generic fields that describe the vulnerability in general. For what type of vulnerability it is, we mostly rely on the identifiers to describe the vulnerability. For instance, in the case of dependency scanning we would have CVE identifiers; that's one identifier that describes the vulnerability finding. Another example:
B
In
the
case
of
the
been
scanning,
or
actually
also
as
a
way
of
sorts
of
vanity
findings,
we
have
cwe
ids
that
describe
the
kind
of
gravity
flow
that
we
have
found
all
that
to
say
that
different
different
cells
of
identifiers.
We
combine
all
them.
We
aggregate
all
them
in
the
finding.
So
in
a
finding
is
the
aggregation
of
these
identifiers
saying
what
kind
of
empty
that
is
and
allocation.
B
Is
combined
in
the
same
object?
This
is
what
we
are
sound
and
where
it
has
been
found
in
the
project
via
the
occurrence.
The
finding
itself
already
belongs
to
the
project,
but
the
occurrence
object
also
tells
where
precisely
it
has
been
found.
For
instance,
if
it
comes
from
SAS,
it
will
tell
the
line
of
a
file
and
which
file
generates
ability
flow
I
mean
where
it
has
been
found.
B: Okay, so it was a bit confusing at first; I hope it's getting better. So we've got this finding; the finding has a location, and it has identifiers. Going back to the model, we can see the occurrence, the security finding, and the identifiers here. Also, the findings are reported by a security scanner, and this is what we store in the scanner model.
B: I mean, if we scan the default branch, yes. And answering one of your questions (it was somewhere in the document): all the identifiers and the scanners belong to the project. They are in the scope of the project, and we do that so that processing a report in the context of one project won't have any side effect on some other project, an unrelated project belonging to someone else in a different context.
B
Three
is
a
scoped,
that's
very
important
because
the
the
name
of
the
scanner
and
the
idea
of
a
scanner
come
from
the
JSON
report
and
technically
it's
possible
to
push
everything
you
can
you
can
out.
You
can
set
a
job
that
creates
a
JSON
document,
JSON
report
that
is
declared
as
a
such
in
the
CI
configuration
file
and
in
there
you
could
you
can,
you
can
say
the
scanner
to
XYZ.
Why
not
I
know.
B: Yes, okay, and again, it belongs to the project. Yep. And no, another project doesn't leak; it doesn't mix in any way. Yeah, it's upserted; that's the same for the identifiers in the context of a finding, actually. But anyway, scanners are upserted in the context of a project, so if you push, not at the same time, two consecutive reports, you reuse the existing scanner in the context of the project.
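The project-scoped upsert behaviour just described can be sketched like this: a toy in-memory version (the real backend uses database records and unique indexes), showing that a second report for the same project reuses the existing scanner, while another project gets its own record.

```ruby
# Toy sketch of upserting scanners in the scope of a project: the scanner
# named in a JSON report is reused if it already exists for that project,
# and scanners of one project never leak into another.
class ProjectScanners
  def initialize
    @by_project = Hash.new { |hash, key| hash[key] = {} }
  end

  # Return the existing scanner for (project, external_id) or create it.
  def upsert(project, external_id, name)
    @by_project[project][external_id] ||= { external_id: external_id, name: name }
  end
end

store = ProjectScanners.new
first  = store.upsert("project-1", "gemnasium", "Gemnasium")
second = store.upsert("project-1", "gemnasium", "Gemnasium") # second report: reused
other  = store.upsert("project-2", "gemnasium", "Gemnasium") # other project: separate
```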
A: My question on the occurrence was around the fields of type binary, what the binary means, and particularly around the vulnerabilities feedback. Actually, if you go to the other UML document in the issue, you can see the different data types, and I was curious if there's a reason that some of those fingerprints are binary and one of them is a string; that seems like it might be difficult to query against. Yeah.
B: The short answer is yes, but I don't remember. And, funnily enough, it's not in the comments in the code, but again, as far as I remember, there were technical limitations. Yeah, I'd have to investigate that. Anyway, about the vulnerability feedback: maybe I should spend some time on that.
B: Anyway, all that to say that it has limitations, and we are aware of that, and we have issues about moving on to something else. And to move on to something else: it would not be invalidated when the vulnerability finding moves in the code, I mean when its location changes. Right now this is a limitation we have. Maybe I shouldn't say too much about this at this point.
F: Hence the term fingerprint, but that's tricky to track, like if a vulnerability moves around and has different line numbers and it's in a different file, etc. So I just think that's helpful historical context. Yeah.
B: The fingerprint combines the location of a finding and the kind of finding it is. So, for instance, in the context of SAST, it would combine the file path, the line number, and the kind of vulnerability flaw, for instance an SQL injection, something like that. It would combine all that, and it works because, if you dismiss that in the UI,
B: this setting, so to speak, this preference, is kept as you push new commits, run new pipelines, and create new reports for the same branch, because the fingerprint remains the same. The limitation is that if we prepend lines at the beginning of the file where the relevant finding has been found, then we lose the vulnerability feedback, because the fingerprint has changed. So we need a way to obtain the...
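The fingerprint behaviour, including the prepended-line limitation just mentioned, can be illustrated like this. For the sake of the example, assume the fingerprint is a SHA1 over file path, line number, and kind of flaw; the real scheme varies by report type:

```ruby
require "digest"

# Illustrative location fingerprint: a SHA1 over the file path, line number,
# and kind of flaw. The real scheme differs per scanner type.
def fingerprint(file_path, line, kind)
  Digest::SHA1.hexdigest("#{file_path}:#{line}:#{kind}")
end

before = fingerprint("app/models/user.rb", 42, "sql_injection")
same   = fingerprint("app/models/user.rb", 42, "sql_injection") # new pipeline, same code
after  = fingerprint("app/models/user.rb", 43, "sql_injection") # a line was prepended

# A dismissal keyed on the fingerprint survives re-runs (same fingerprint),
# but is lost when the finding moves, because the fingerprint changes.
```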
B: Actually, they have been updated by Cam recently. First we've got a common parser, and I think it is... so that's the one that's responsible for parsing the JSON reports and turning them into objects, and eventually into models, storing them in the database: extracting occurrences, extracting scanners, extracting identifiers. So that's the common parser, and then we've got specific ones, like this one for dependency scanning, on top of the common parser; it's pretty generic!
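A rough sketch of what such a common parser does, per the description above: walk the report's vulnerabilities array and extract occurrence, scanner, and identifier data. The field names here only loosely follow the Secure report format; this is not the real parser.

```ruby
require "json"

# Rough sketch of a common report parser: turn the JSON report's
# vulnerabilities array into occurrence hashes, collecting the scanner and
# identifiers each one references. Field names are simplified.
def parse_report(json)
  report = JSON.parse(json)
  report.fetch("vulnerabilities", []).map do |vuln|
    {
      name:        vuln["name"],
      scanner:     vuln.dig("scanner", "id"),
      identifiers: vuln.fetch("identifiers", []).map { |id| id.values_at("type", "value") },
      location:    vuln["location"]
    }
  end
end

report = '{"vulnerabilities":[{"name":"Example","scanner":{"id":"gemnasium"},' \
         '"identifiers":[{"type":"cve","value":"CVE-2020-0001"}],' \
         '"location":{"file":"Gemfile.lock"}}]}'
occurrences = parse_report(report)
```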
D: Sorry, what do you mean? Sorry, she's referring to when we get to one-to-many, I believe it is.
F: The way that we originally discussed that with Andy, and I don't know if we're going to go that way, would be: if you were to take an action of dismissing a vulnerability, it would trigger feedback dismissals for all of its related findings. So a vulnerability is kind of like an aggregate of its findings. Yeah.
A: Sorry, can I just back up? So when we have a vulnerability feedback, you're saying there are three types: it could be a dismissal, an MR, or an issue. So if you have a finding and you open an issue based on that, that's when you would get a feedback type of issue; or if you open up an MR based on that, that's when you would get that record.
F: Actually, maybe now is a good point to talk about that issue link model, which is for the purpose of linking issues to vulnerabilities, not occurrences slash findings. We're not currently using that. I left a comment on the issue that I don't know if we should, but it's worth mentioning that we are currently in the process of maybe changing how issues are linked there. So there's not much more to say, because we're not using that currently, but it's something to keep in mind.
F: Okay, so the really short 30-second answer to that is: it's getting kind of confusing with our data models, where an issue is a unit of work in GitLab and we're starting to make vulnerabilities a unit of work. So if you have a vulnerability that blocks an issue, how do you track the unit of work of handling the vulnerability? You need to create an issue, and then two things are blocking the same object, and it creates a more complex object graph.
A: Yeah, no, so the question I had was: you're going to have that JSON report, which is going to come from a pipeline, but it actually comes from a deeper spot, which is the job, or, in our model, the build. Everyone knows jobs and builds are actually the same thing in the Rails model, but basically a JSON report comes from a build, and so my question was: if you look at the object model, why does it go through pipeline and then to build, as opposed to just being linked directly to build?
B: Yes, and the honest answer is I don't remember why we decided to do that, but there would be benefits from connecting the findings to the job, because then we would be able to present the reports as we process the jobs; we wouldn't have to wait. Ideally, we would know where it comes from, and possibly we could have partial reports. By partial, I mean partial reports for pipelines containing a few reports, but where some reports would be missing, because some jobs would be failing. I think we should...
B
We
should
do
that.
We
should
have
we
should
process
as
much
as
possible
and
be
as
accurate
as
possible,
but
I
don't
remember
time
to
create
an
issue
I
believe
about
possibly
moving
the
the
findings
from
the
pipelines
to
them
to
the
jobs,
maybe
because
it's
because
of
a
lifespan
of
jobs
compared
to
a
nice
part
of
our
pipelines
and
because
it's
possible
to
we
run
a
job
in
the
context
of
the
pipeline.
The
same
security
scanning
job
I
think
it's
because
of
that
that
I'm
not
sure
I,
remember,
that's
that
make
sense.
A
Do
you
know
I
know,
there's
been
issues
about
like
the
dashboard
going
blank
when
this
might
be
a
bigger
issue
where
the
pipeline
doesn't
where
jobs
fail
in
the
pipeline.
So,
for
example,
in
this
object
model
does
occurrence,
because
it's
relying
on
the
pipeline
does
the
entire
pipeline
have
to
succeed
or,
if,
like
once,
I
hear
your
job
fails
and
another
one
doesn't
do
you
know
if
those
get
parsed
I.
B: I believe the pipeline has to complete. That's... I've got to go back to the code. Does anyone remember? I'm saying that because I had to look at the state machine moments ago. Could you say that again? Yeah: do you know if the pipeline has to complete for the reports to be parsed by the backend?
F: So, we have talked about this in other capacities as well, like with the new scan model. We discussed things like showing a percentage of a scan, so within the merge request widget we could say we've only scanned 20% so far, and among that 20% there are zero new vulnerabilities; and then someone clicks "merge when pipeline succeeds" and they don't realize that there could be more vulnerabilities coming in later in the pipeline. So that's why we waited until the pipeline is finished. Yeah, but...
F: Because any job could upload a findings report, and if we're running a Node.js scan and a Ruby scan, then they could upload at different points. So if we show a partial data set, we either need to be very, very clear in the interface that it's partial, or just not show anything until the full thing is finished, which is what we currently do.
G: At the top of the history: so this is the second time I run the pipeline for the same branch, and the previous one was successful while the new one is not. The previous one found 100 and the new one found 20, and we kind of know both, right? We know the past 100 from the previous pipeline, and we know the 20 of the current one, which is not finished. Yeah.
G: That part I know; I just want to, like... there is a case, for example, where I run two pipelines, both of them have exactly the same jobs, one Ruby, one something else, and pipeline one runs successfully. But could we show the vulnerability difference relative to the job, not to the pipeline?
B: The thing is, we can have findings of one report type coming from multiple jobs. We can have, for instance, three SAST jobs, one for Java, one for another language, and whatnot, and we couldn't even render a progress bar, because in the backend we don't know what's coming in. We just know that we've got jobs possibly generating reports. Well...
F: Yeah, I don't know if that would be useful in the case of a merge request; it would make more sense in the case of a default branch, I think, because in the case of the default branch it would be more pressing, notifications like "there's now a vulnerability", unless we're just talking about showing things more quickly in the merge request widget. Yeah.
A
I
mean
one
of
the
reasons
I'm
just
bringing
this
up
is
I,
don't
know
the
extent
and
how
long
some
of
scams
take.
Certainly
I
think
we're
talking
about
like
a
deep
scan
of
secrets
that
might
potentially
take
you
know,
hours
or
days
to
run.
You
know
on
the
dash
side
if
we
were
to
run
a
deep
scan
of
your
website
a
desk
and
could
take
hours
so
waiting
all
that
time
before
you
can
get.
The
results
might
not
be
practical.
So.
A: That's what was being explained: all the vulnerabilities go into the database after a report is parsed, and one of the challenges we've had is: if a report is parsed and there are no vulnerabilities, then no entries go into the database, and so there are a couple of bugs that are out there. For example, if you have bugs on your dashboard and then your next job finishes, what happens is it goes through the process that was just explained, which is: it parses it, it finds no vulnerabilities, it adds nothing to the database, and then on the dashboard...
A: ...it shows the old vulnerabilities. So the scan model will now say: okay, I know that a scan ran. It will know that a scan ran, it will see that there were zero vulnerabilities, and it can zero out your dashboard. So it gives us a cleaner way of saying when the scan ran; and if you query when the scans ran, based on that you can then find the vulnerabilities.
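The dashboard problem described here, and how a scan record fixes it, can be sketched with toy data structures (not the real scan model): without a scan row, a report with zero vulnerabilities writes nothing and the dashboard keeps showing stale findings; with one, "scan ran and found nothing" becomes representable.

```ruby
# Toy sketch: record one scan row per parsed report, so that "the latest
# scan ran and found nothing" can zero out the dashboard instead of leaving
# the previous pipeline's findings on display.
Scan = Struct.new(:pipeline_id, :findings)

def dashboard_findings(scans)
  latest = scans.max_by(&:pipeline_id)
  latest ? latest.findings : []
end

scans = [
  Scan.new(1, ["CVE-2020-0001"]), # earlier pipeline found something
  Scan.new(2, [])                 # latest scan ran and found nothing
]
```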
B
It
is
because
it's
it's
safe
and
scan
returned
novelties
at
all,
which
is
great
so
right
now
we
can't
make
the
distinction
between
a
scenario
a
very
bad
sinner
where
the
scanning
jobs
are
not
properly
set
up,
possibly
and
a
very
good
scenario
where
there
are
no
findings,
it's
all
clear,
which
is
that
and
also
we,
we
don't
have
the
possibility
to
to
tell
users
why
it
has
failed.
Users
have
to
go
to
the
scanning
jobs
where
well,
this
kind
of
job
white
fails
to
make
sense
of
what's
going
out
and
fixed.
A: Yeah, so the scan model will allow us to do some new interfaces. Theoretically we can just list out all the scans, and you can look at all the scans that ran, and then you can see which pipelines they were a member of; that would be very, very hard to do today. So it gives us a couple of different windows into that data.
D: The idea for vulnerabilities is that they're going to be persistent, so we can link to them; they have a proper ID. The eventual plan is that a vulnerability will have multiple findings, but right now vulnerabilities just have one finding. I mean, the model supports one-to-many, but we're...
B: I've got a question on this point, if you don't mind. Absolutely, yes. So when you run two consecutive pipelines for the same branch, does it mean that you update the relationship, so that the existing vulnerability object is now connected, sorry, to the new finding?
D: Yes... oh yeah, no, no, no, no! This is great nuance that's important to call out, I think, because... so we do rely on, you know, the fingerprint to make sure it's the same, but like you were talking about before, if the location changes, we'll have that old vulnerability that is pointing to the old location, and we will create a new vulnerability that's pointing to the new location. This was a bit less of a problem in the current model, or the old model,
D
If
you
will,
because
we
just
brought
back
vulnerabilities
or
occurrences
rather
from
the
last
successful
pipeline,
so
we
wouldn't
find
the
the
you
know
that
that
occurrence
that
happened
on
the
previous
pipeline,
it
would
just
disappear,
but
but
now
we're
retaining
that
and
bringing
in
the
new
ones.
So
the
moving
the
location
will
cause
duplicates
to
show
up
until
we
address
that,
you
know
problem
that
you
alluded
to
a
kind
of
losing
track
of
it.
Yeah.
B: But using the new vulnerability model, it's possible to connect, to reconnect, the existing vulnerability object to the new finding that is there because the location has changed, right? That was the idea initially. So you've got one run, one execution of the pipeline: one finding, one occurrence model. A second execution of the pipeline: a different one, because the location has changed. And either manually or programmatically, or automatically I'd say, you can reconnect the existing vulnerability object to the new finding. That was the plan, right?
F: We need multiple occurrences, we need multiple findings per vulnerability, to really do that properly, and we're not doing that yet. Okay, so what you could do is: if we have auto-promotion, or whatever we're calling it now, enabled, then an occurrence, a finding, gets auto-promoted to a vulnerability.
F: Then you could merge, perhaps merge two vulnerabilities together, to, like, reconnect those together, but again, for that I think you need multiple-findings support. And maybe we can do that now, because we're only, like, kind of pretending that it's a one-to-one relationship on the frontend. It's more of a UI issue right now, but not on the data model. So we could do that, but I think it's just...
D: Yeah, yeah, and I mean, if we had that logic to say "hey, the location changed, and these are actually the same occurrences", we could potentially, you know, inject it in that merge, the default-branch pipeline process, to update the vulnerability to point from the old vulnerability occurrence to the new vulnerability occurrence, and avoid all these duplications.
F: I'd say it's more epic-like than issue-like; at least that was the idea. So there should be, there can be, many issues linked to a vulnerability. Say you have one that is: you need to do a dependency update, and, like, when you're doing the dependency update, you need to change, say, a deprecated syscall to a different syscall, so there might be two units of work for a single dependency vulnerability.
F
So
there
multiple
issues
that
could
link
to
a
single
vulnerability
and
you
treat
like
every
other
thing
for
an
epic,
an
you
can
attach
a
milestone.
You
can
touch
labels,
you
can
do
whatever,
but
it's
not.
The
unit
of
work
issuers
are
still
the
unit
of
work.
A: That gives me a little bit better idea of how this is coming together. And then the other question I had was: we have these overrides of severity and confidence, so I'm interested in how those are going to work, and whether the severity and confidence of vulnerabilities should really be overridden, or whether that's really kind of more of a risk assessment, and how I should think of these vulnerabilities.
F: So the idea was that vulnerabilities work as an aggregate. You think about a finding as essentially... findings are immutable: you can't manually override the location, you can't manually override the severity of a finding. But a vulnerability is something that you can modify: you can attach an SLA, you can close it, you can do whatever, and then it has these findings attached as almost like immutable metadata. We get requests a lot, like: can I downgrade the severity of this specific finding?
F
You
can't
downgrade
the
severity
of
a
finding,
but
if
you
attach
to
a
vulnerability,
you
can
downgrade
the
severity
of
the
finding
the
vulnerability,
and
so
we
can
set
the
override
of
the
severity
level
on
the
vulnerability
and
then
we
mark
over
I,
true
to
just
track
cleared
out.
If
it's
not
overridden
at
the
severity
of
confidence
level,
then
it's
just
an
aggregation
of
its
findings.
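The override semantics just described can be sketched as: a vulnerability's effective severity is its own value when marked overridden, otherwise an aggregate of its immutable findings (here, the maximum). This is a hedged illustration under those assumptions, not the real implementation.

```ruby
# Sketch of the severity-override semantics: findings are immutable; the
# vulnerability either aggregates them or carries a user override.
SEVERITIES = %w[info low medium high critical].freeze

def effective_severity(vulnerability)
  return vulnerability[:severity] if vulnerability[:severity_overridden]

  # Not overridden: aggregate the (immutable) findings, taking the maximum.
  vulnerability[:findings].max_by { |f| SEVERITIES.index(f[:severity]) }[:severity]
end

vuln = { severity: "medium", severity_overridden: false,
         findings: [{ severity: "critical" }, { severity: "low" }] }
```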
A
Got
it
so
if
I
understand
correctly,
so
the
occurrence
is
or
the
occurrence
of
the
finding
is
really
what
our
scanner
spitting
out
and
the
vulnerability
object
is
really
as
a
security
analyst.
It's
my
risk
assessment
of
those
findings
and
so
I.
Look
at
all
those
findings
and
my
risk
assessment
is
a
this
actually
really
doesn't
matter,
because
a
B
and
C
and
I
want
to
attach
it
here,
and
these
are
the
units
of
work
and
here's
the
issues.
The
vulnerability
is
my
human
layer
on
top
of
the
actual
findings.
G: I have a little question. So, like, I'm a user, and I see the vulnerability. The vulnerability now has a one-to-one relationship to a finding, and the severity of the finding, originally from the scanner, is critical, and as a user I change it to medium. Then, later on, the pipeline runs again and it finds the finding; will there be a new finding attached to it, or will the finding's severity just be changed?
F: That's the complicated "how do we track this" discussion, using the occurrence identifier, which is complicated, but basically we look at, like, the specific line of code in the location and try to identify it that way, and currently, if the line changes, then it breaks everything. Oh...
F: A different finding, functionally, currently, yes. Okay. As a user, though, within the context of a merge request, what you would see is: finding, or vulnerability, A is gone; vulnerability A is added. So it doesn't look that different in the merge request, but it will probably heavily, it will probably severely, affect first-class vulnerabilities.
B
Well,
there's
an
issue
about
that
about
cases
that
are
doable
and
that
are
cases
that
are
really
complex
to
deal
with,
and
this
is
why
it's
some
time
in
the
past
we
had
out
the
idea
of
making
that
manual
so
that
users
can
connect
the
dots
when
the
the
back
end
is
not
about
to
do
that.
So
look
at.
Do
we
still
have
an
issue
about
that?
I
mean
well,
I,
guess
just
drop
dead.
B
Yeah,
oh
there
there
is
an
issue
yeah,
no
we're
not
talking
about
this
one.
That
means
right,
no,
not
talking
about
our
improving
our
vanity
recognition,
because
that
would
be
automatic
and
that's
there
are
cases
we
can
quite
easily
grab
cases
we
can
handle,
but
there
are
more
complex
cases
where
it
would
make
sense
to
have
users
connect
the
dots
so
either
using
the
vanity
object.
B
The
new
first-class
quality
model,
because
then
user
could
attach
the
new
finding
to
the
the
old
one
via
the
vanity
model,
the
new
vanity
model,
but
anyways
I
I
want
I,
guess
I,
wonder
with
whether
this
is
still
an
upcoming
issue.
I
mean
not
coming
issue,
but
let's
say
something:
we
we
plan
to
do
in
the
short
term.
I'm,
not
fighting
I,
can't
find
it
anymore.
I,
don't.
F
A
I
know
we're
running
out
of
time
if
you
have
any
thoughts
on
the
session
or
want
follow-up
sessions.
Please
just
add
that
to
the
issue,
the
brown
bag
of
things
that
we
didn't
cover
today
or
things
that
you'd
like
to
go
in
in
more
detail
about.
Add
that
as
comment
to
this
brown
bag
and
then
we'll
try
to
schedule
something.