From YouTube: Vulnerability Management: Engineering Q&A
Description
Engineers from the Threat Insights group ask Secure engineers about Vulnerability Management related questions.
A
So this meeting is probably past due, given that the engineers who are working on the Threat Insights team are all new to the company. I know there's been informal knowledge sharing between the people who developed the initial vulnerabilities and findings in GitLab, but we haven't actually had any kind of organized meeting outside of our weekly group calls. So this came up in our retrospective; it's come up several times in the past. Apologies for taking so long to get this on the calendar.
A
We've got an agenda where people have put questions in in advance, so we'll just jump right into that. We can also decide at the end of this call if this should be something that we do again, whether it's recurring or, you know, on-demand, but we'll see how that goes today. This is part of a larger effort. Thank you, Olivier, for creating the knowledge sharing issue that Ciego and I are driving.
A
I think, in addition to these types of conversations, we need to also look at our engineering-focused documentation and any other supporting resources that the engineers who are working on vulnerability management can use to understand some of the reasons decisions were made in the past, and the domain information that's held by a lot of the folks on this call. So I'll be quiet now. Alexandre, you've got the first item on the agenda. I'll go ahead and share the agenda for the sake of everyone looking at the same thing.
B
Cool, yeah. My question was around the pipeline security tab and why, every time someone accidentally calls it a dashboard, they get banished. Matt and Lucas and Andy all chimed in to say that it's more of a report, and Lucas said that it's unpersisted report data. Lucas, can you elaborate on that?
C
Yeah, so I guess the dead horse I'll probably continue to beat in this meeting is that there's a fundamental difference between findings, which were occurrences, and vulnerabilities. When something appears in a merge request and it hasn't been merged into the default branch, it has the potential to be a vulnerability, but it is not yet a vulnerability.
C
We're a bit more vague with our wording in the UI for that, but we don't actually persist anything to the database until it's merged into the default branch, and the idea is that you don't actually have a vulnerability in your application until it's been merged. So that's the real difference between occurrences, which are now called findings, and vulnerabilities. And so, when you're looking at a pipeline, the pipeline view is rendered out of the report attached to an artifact, which is just JSON objects. It's not actually pulling from the database.
C
So the twenty-second version of the flow is: the security scanner runs in the pipeline, and it generates a JSON artifact which gets uploaded to the Rails app at the end of the pipeline run. We have a background worker that takes the JSON file, iterates through it, and persists these things into the vulnerability occurrences table, which also creates the vulnerabilities in the vulnerabilities table.
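The flow just described could be sketched roughly like this. This is a hypothetical Ruby sketch, not the real implementation: the method and field names are illustrative, and in-memory arrays stand in for the actual `vulnerability_occurrences` and `vulnerabilities` tables.

```ruby
require 'json'

# In-memory stand-ins for the two tables the background worker writes to.
FINDINGS = []
VULNERABILITIES = []

# Hypothetical background worker: takes the JSON report artifact uploaded by
# the pipeline, iterates through it, and persists each finding, creating the
# matching vulnerability record at the same time (default branch only).
def ingest_report(json_payload)
  report = JSON.parse(json_payload)
  report.fetch('vulnerabilities', []).each do |raw|
    vulnerability = { id: VULNERABILITIES.size + 1, title: raw['message'], state: 'detected' }
    VULNERABILITIES << vulnerability
    FINDINGS << {
      vulnerability_id: vulnerability[:id],
      report_type: raw['category'],
      location: raw['location'],
      raw_metadata: raw # the immutable scanner output, kept alongside the stateful record
    }
  end
  FINDINGS.size
end
```

Feeding it a one-entry report would create one finding row and one vulnerability row; a feature-branch pipeline would skip this worker entirely, which is the distinction drawn above.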
E
The artifact is also just sitting on the instance, and it expires in 30 days by default, so it could be gone after 30 days unless a specific expiry exists. Basically, we kind of write that value into the table definition, or maybe it's in an environment variable; that just has to be checked. Basically, this is an issue for multiple edge cases.
E
So basically, what is happening on the default branch is what is impacting your live application; these are your existing real threats, and this is what is usually called vulnerabilities. What is happening on the feature branches, and showing in pipeline views and merge requests, are potential vulnerabilities, so they are not yet impacting your real application. From the architecture perspective, on one side this is just parsing the reports, creating Ruby objects (entities) and sending them to the front end with serializers, and then there is the other side.
E
You have a service running at the end of the pipeline, parsing the JSON files and sending them into the database. And on top of that, we added the first-class vulnerability, which is now what is displayed in the dashboard. Initially, were we using the findings from the database, or was it parsed per pipeline?
E
It's a bit hybrid, because if you're looking at a feature branch, it parses the JSON reports, and if you're looking at the default branch, it would get data from the database. And we have a class that transforms whichever one it is: it turns, I think, a plain Ruby object into an ActiveRecord instance, but we don't persist it. We just instantiate them and then send them to the serializer, to show them for pipeline views and merge request views on feature branches.
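That hybrid source selection could be sketched as follows. This is illustrative only: the names are invented, and a simple `Struct` stands in for the unpersisted ActiveRecord-like instances mentioned above.

```ruby
# Unpersisted finding object built from the JSON report; on the default
# branch the same shape would come back from the database instead.
Finding = Struct.new(:report_type, :location, :from_database)

# Hypothetical resolver: default-branch views read persisted findings, while
# feature-branch and merge-request views parse the pipeline's report artifact.
def findings_for(branch:, default_branch:, report_data:, db_findings:)
  if branch == default_branch
    db_findings
  else
    # Instantiate without persisting; these objects only feed the serializer.
    report_data.map { |raw| Finding.new(raw[:category], raw[:location], false) }
  end
end
```

The point of the sketch is the branch check: the same view code can be fed either source, which is why the UI looks identical even though only one path touches the database.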
F
Not so much a question, but I was wondering, because at some point I was trying to get a clearer picture of what database table does what, and, as you can see, there were several iterations of the database model, with the latest one being posted; I think there is a comment down below with a link to an even more up-to-date database model. So I was wondering if we could have some...
E
No clue; this is a known issue. It's been a long time; we're just getting that in 6002, and yeah, we don't know yet what's best. The user documentation might not be the place, because it's too much; engineers are part of our users, but a lot of the things that we are now trying to expose to our own engineers also make sense for third-party integrators. So we might have multiple places to put that kind of information.
E
Maybe if it's too internal it'll make sense to put it there, but I think the workflow might be interesting to put into the user documentation. Some of the behavior, I mean the main entities of the architecture, make sense from an integrator's perspective, so that they understand better how we display the data they push into the JSON reports. But for very internal technical documentation it's not yet clear, and it's still an issue for us.
F
I think a README file would be a good idea at the beginning, but I am wondering if this will not be much more complicated, and I think we will require multiple files. So I was wondering if we could start thinking about creating something like an internal engineering handbook, or something like that, where every stage could place their own documentation, which I imagine would mostly have a high-level overview of how things work together.
C
The glossary, I think, is a huge step there in getting at least everyone on the same page, but figuring out some way of auto-generating documentation would be the most effective. I think there is actually also the architecture evolution work that people are working on as well, and I'll try to link that here, but I know that at least the Verify team has started documenting ways of setting up an architecture, which might be something worth exploring.
F
I don't think the high-level overview will change; I think the code will change. As Lucas said, this is going to change very often, so it's going to get out of date very often, but I wouldn't say that the high-level overview of the feature will change that often, unless you say, OK, we're scrapping the whole thing and doing it from scratch. But yeah, I don't think that's going to happen anyway. Okay.
F
Yeah, I think that it would be a good place. Basically, the problem I see happening every now and then is the issue links model. We have a model that's called vulnerability issue link, and it was created at an early stage of the implementation, when we were still not doing the MVC, and it was supposed to be a cornerstone of multiple vulnerabilities per issue. We are not using it at all with the current implementation, and this pops up every now and then: a maintainer is asking, or reviewers are asking, why is this model here?
D
So basically, we just want to understand what was the rationale behind designing it in this way: one vulnerability can have more than one occurrence, or finding. Actually, after reading the answer from Lucas, what's written here kind of makes sense. Do you want to verbalize it, Lucas? Sure.
C
So we talked a lot about aggregation early on, and Andy can definitely speak more to this. But if you think about the two different models: we have the occurrences table, which is created straight out of our analyzer data, so it's essentially stateless data. It's an artifact of the location and just the raw data of what is found via our analyzers. The vulnerability table is stateful.
C
So the idea is that, with a vulnerability, if you run an analyzer and code gets merged, then the finding is no longer present in your code; but just because something has been merged, it may not have been deployed to production yet. So you might want to manually resolve your vulnerability even though the finding is no longer there. A vulnerability can be stateful: it can have things like an assignee, in theory, or if you want to give it a clearer title you can, but the underlying finding data should not be modified.
C
So that's that separation there. And in the case of something like a vulnerability that SAST picks up, which then gets exploited by DAST, that will create two separate occurrence records, or findings, but in theory that should really be the same vulnerability. It's the same piece of code you're going to be modifying to resolve it; it's the same piece of code that you'd be assigning out or triaging. So that's the idea behind having multiple findings per vulnerability.
E
We should also be careful with the project fingerprint, because this is a legacy way to compare things. It's not accurate, and it's not reliable over time. We can move to a more stable way, like comparing findings in a stateful way: when we are storing a finding in the database and then a new pipeline runs, to compare whether it's an existing one we don't use the project fingerprint.
E
We use the report type, the location fingerprint, and the primary identifier, because those are supposed to be stable data, except if the location changes, and we have an issue to improve the tracking there. But the project fingerprint is a legacy property that we generate on the analyzer side. It is a compound of, like I said, multiple properties, but it's not computed consistently between the different analyzers, and we have no proof yet that third parties will follow that.
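The stable comparison described here, matching on report type, location fingerprint, and primary identifier rather than the legacy project fingerprint, might look roughly like this. The field names are illustrative, not the real schema.

```ruby
# The three "coordinates" that are expected to stay stable across pipelines.
def coordinates(finding)
  finding.values_at(:report_type, :location_fingerprint, :primary_identifier)
end

# A finding from a new pipeline is treated as already known when a stored
# finding shares all three coordinates; the legacy analyzer-generated
# project fingerprint is deliberately not consulted.
def existing_finding?(new_finding, stored_findings)
  stored_findings.any? { |stored| coordinates(stored) == coordinates(new_finding) }
end
```

Note the caveat from the discussion: if the location fingerprint changes (for example the code moves), this comparison fails, which is exactly the tracking weakness the open issue is about.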
E
So, instead, we want to generate a consistent way of identifying a finding on the Rails side by leveraging those three pieces of information, which we also call the coordinates: mainly the location fingerprint and the primary identifier. The report type is one of them too, but it's more about the uniqueness constraint in the database. The difficulty for you on the vulnerability management side is the different grouping capabilities that you can add to the first-class vulnerability, because there can be multiple interests in grouping.
E
It could be a workaround for the fact that we are not able to correctly track findings over time. So we will just pop up multiple findings in the UI, but it's actually the same one, and we are well aware there is no way we can merge them together. So it's up to the user to group them consistently into the same vulnerability, which is painful, but it's the best workaround right now.
D
I think this is a really good insight. So you were saying that since these occurrences are based on the composition of the location, the primary identifier, and the report type, this is the identifier of an occurrence. At some point, as you say, you can't really tell if the occurrences are identical when the line has been changed, so this has to be done manually by the user. So it seems like, in the near future, we have to find a way to merge vulnerabilities together to represent just one. Yeah.
G
So before we move on: I just put it up, and you can see on screen the epic that Jonathan linked. It's definitely, I think, Lucas, what you were talking about. It is related, but I guess I want us to make sure that I'm not overstepping here. This exploration was intended less around that; I think it's pretty clear-cut that we need to have a roll-up of findings into sort of single containers.
G
My concern is more about opening up things like linking any sort of, you know, the vulnerabilities, or potentially even the occurrences, to one issue versus multiple issues, versus one MR versus multiple MRs. There are certain things where, if we go one-to-many or many-to-one now, we don't know if that's, I guess, a good way to work, if it's a preferred way to work. I think there's a lot of usability research that's missing!
G
That's what Andy and Tally on our UX research team are trying to do right now: have a little bit more informed perspective. There are also other kinds of complications: if you look at a lot of the competitive tools out there, they even have a third layer. Let's say an instance is one specific time that a vulnerability occurred, was identified, in a file.
G
If a particular SAST scan finds the same CVE ten times in one file, they would say that that was ten instances. Now, in the application itself, that counts as one occurrence, or there are other terms they use for that, but basically the application contained that CVE. So then, if you're looking broadly across multiple applications, the number of occurrences is the number of apps vulnerable to that CVE, if that makes sense. So there's that kind of three-tier hierarchy.
G
So this is why I'm a little bit hesitant to push forward with some of these decisions right now. I get that it's a lot of re-architecture, and I don't want to do it and then have to redo it, or make a change that we're going to regret later. So that's why I'm sort of advocating caution right now, until we have a little more data.
C
Yeah, that makes complete sense. Since I commented, or put an emoji onto that epic without leaving a comment: I think what you said makes a ton of sense, and I think that this really needs to be broken apart, because discussing findings versus vulnerabilities, and then issues and merge requests and a third object, those are all such vastly different topics that I really don't think we can continue that in the same thread. Previously we were talking about things like attaching issues directly to findings, which is currently possible using the vulnerability feedback object.
C
And then the idea of a vulnerability would be an aggregation of the issues attached to its findings. So there would still be like a one-to-one relationship there, but there's kind of that cascading idea that we explored before as well. I think that's definitely worth exploring more; I just think it needs to be prioritized differently than something like the issues, merge requests, etc.
G
Yeah, absolutely. I think, based on the discussion that's been going around in that particular issue, I'm almost inclined to sort of close it. It was, you know, me not knowing a whole lot about GitLab or the history of the project, or a lot of this, so old me was writing something down and not really understanding it, and I agree: it's not really actionable, it's too open-ended, and it contains too many things. All that is to say, the whole discussion around...
G
Do we need a way to aggregate all these independent findings up into a single vulnerability object? Absolutely. I think we just need a little bit more detail on how to approach that. But that's something we know today is a challenge for users. They want to be able to, if it's the same CVE, see it all in one place; they don't want to action it five times, or 500 times, depending on what it was.
E
I appreciate the distinction you made about these three different layers. Is it clear to everyone here what our current architecture is in terms of those layers today? What is really important is, as you said, there are multiple instances, but we just kind of want to roll them up into the application right now.
E
The kind of thing which is shown on the dashboard is a bit different from that: we want to report each individual finding separately if they are on different lines of code, because if you have ten occurrences, ten instances in the same file, that is ten entry points for a security vulnerability. So this is why we want to report a number of ten in the dashboard counts, for example.
C
And I guess one more word on terminology too, because it's not really captured here, and this might already be understood, but just to touch on why we keep jumping between the words occurrence and finding: we want to move away from occurrence because occurrence has a sense of finality to it. You're saying there is an occurrence of this thing, and when we go back to the idea of a merge request having the potential for being exploitable, then finding seems to be more indicative of that potential occurrence.
G
A great point, and I think there was a lot of that in Alexandre's question as well. So Andy and I are making terminology definitions part of the research: we want to find out what the people that actually deal with this stuff call it. Because, interestingly enough, from a security perspective we've heard some engineers say: if it's not in the code base that I care about yet, I don't care what you call it. It's just a defect. It's a flaw. It's a coding error.
E
Yeah, but we might consider a follow-up, because this merge request was, I guess, first escalated to me as a back-end review first, and big thanks to Cameron for handling this request, by the way. So, oh yeah, this is definitely a good place to improve, if you think that the current definitions are not complete, or ambiguous.
D
Yeah, the next one is about architecture. In one of the upcoming issue designs, I have seen that there's a modal box to mark a finding as dismissed in the findings panel of merge requests. But the thing is, since we are not storing those findings in the database, they are just in the JSON file, it seems like there is no way to do this. The only thing that I can come up with is just creating new UUIDs with a seed value.
D
So if we can use the identifiers that we already discussed previously, the location, the report type, and the other thing, we can actually always create the same UUID for the same occurrence, for the same finding, to be totally correct. So is that a viable plan to you, or what do you think? Should we actually store them in the database?
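The seeded-UUID idea could be sketched like this: hash the stable attributes so the same finding always yields the same identifier. Purely illustrative; the digest-to-UUID formatting below is just UUID-shaped, not an RFC 4122 version 5 implementation.

```ruby
require 'digest'

# Derive a deterministic, UUID-shaped identifier from the stable attributes
# of a finding, so the same occurrence maps to the same ID on every pipeline
# without needing a database row.
def stable_uuid(report_type:, location_fingerprint:, primary_identifier:)
  seed = [report_type, location_fingerprint, primary_identifier].join('|')
  hex = Digest::SHA1.hexdigest(seed)[0, 32]
  # Slice the digest into the familiar 8-4-4-4-12 UUID layout.
  [hex[0, 8], hex[8, 4], hex[12, 4], hex[16, 4], hex[20, 12]].join('-')
end
```

Because the ID is a pure function of the coordinates, feedback recorded against a report-only finding in a merge request could later be matched to the persisted finding on the default branch.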
F
I think you can already do this. I mean, if you open a merge request and the pipeline finishes, and you go to the security widget on the merge request, you can dismiss... OK, OK, I think I know what the problem is. If you go to the merge request and you go to the security widget, it shows findings, and you can mark the finding as dismissed.
E
So this is actually what the project fingerprint is made for. We wanted to make sure that, when you're providing feedback, whether it is about dismissing, creating an issue, or creating a merge request on the finding, and whether it's based on a finding that you get from a JSON report on a pipeline or a merge request, or from one stored in the database that you could see before on the dashboard, you can retrieve that feedback in the other locations.
E
So if you see a finding in a merge request and you dismiss it because you don't think it's relevant, then, once you merge your merge request, the pipeline running on the default branch will find that finding again, because it still exists in the code, but you said you dismissed it in the merge request. So we don't want this to pop back up on the default branch and show on your dashboard. The feedback is therefore based on the project fingerprint, which is generated by the analyzer.
E
So this information is always kept with the finding. When you have the reports on the default branch, sorry, when you have stored that into the database, then, when displaying the dashboard, you can fetch all the feedback for that project, take all its project fingerprints, compare them with the project fingerprint of each finding, match them, and apply the feedback directly. So you can quickly see if there is an issue, a merge request, or a dismissal applied to the finding that you have in the database.
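That matching step could be sketched compactly as below. The record shapes are hypothetical; in the real app this data lives in the vulnerability feedback records mentioned earlier.

```ruby
# Index the project's feedback records by project fingerprint, then annotate
# each stored finding with any dismissal or issue feedback that matches.
def apply_feedback(findings, feedback_records)
  by_fingerprint = feedback_records.group_by { |fb| fb[:project_fingerprint] }
  findings.map do |finding|
    matches = by_fingerprint.fetch(finding[:project_fingerprint], [])
    finding.merge(
      dismissed: matches.any? { |fb| fb[:feedback_type] == :dismissal },
      issue_created: matches.any? { |fb| fb[:feedback_type] == :issue }
    )
  end
end
```

Grouping the feedback once up front keeps the lookup per finding cheap, which matters when the dashboard is rendering every finding for a project.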
G
Yeah, sure. So Jonathan had pointed to what I believe was sort of the original top-level issue for first-class vulnerabilities. It was a back-end ticket, and it calls out a whole bunch of other cases, basically, of where first-class vulnerabilities could come from. I can't specifically speak to what the previous PM thought about this, but I believe what the intent was, and what the goal still is, kind of long term: vulnerabilities today really only come from our internal scanners, but that's not the only place that a vulnerability might be identified.
G
You're going to have internal testing; you might have third-party pen testers; you have bug bounty programs, like GitLab uses HackerOne. All of these are potential ways that we want to create awareness of a vulnerability inside of the system, and I think what underlies all of these is that we just need a way to manually create a vulnerability and tie it into the same workflow. Now, it may not look exactly like the vulnerabilities from the scanners, but under the covers we need to have similar definitions.
G
So we'll need to have things like description, title, whether there has been a CVE added to it; so, like, for things that come in through HackerOne, if it turns out to need a CVE, we can attach it, and internally we'd be standardizing severity. So yes, I think this is actually a viable plan. I've linked the issue there, and it's kind of a key piece of moving on from where we are today, which is really a severity-based workflow, because that's all we're giving people: what is the classification of this vulnerability?
G
What we need to get to for much broader adoption, especially in larger enterprises, is a risk-based model, so that they can actually have more understanding of the context of a particular vulnerability. You know, it may say it's a critical severity, but if it's in an internal-only system that will never hit production, they may not consider that as risky as letting through, let's say, a medium or an unknown in a production system. So this is going to be part of it, and the reason why, you can kind of see.
G
The epic it's tied to is looking at CVSS v3; there's a way to standardize that. We need to let people classify these things manually in a common language. And, I was going to say, rest of the Secure engineering team, jump on me if this is wrong: I believe a lot of the scanners that we use today do have the ability to provide CVSS scores. It may not be quite the same version, it may not be 3.1, but I know we are not exposing it in the JSON today.
G
No, that's a great point, and there are going to be other things as well. Especially with fuzzers, depending on what they're doing, there's no way they're going to be able to come up with a CVSS score for that, so we will have to make things like that optional. But we're hoping we can use some neat things like the CVSS calculator, where we've turned it into a form the user is basically filling in with the right information, and we can use that to help generate a severity for them.
E
That's a great idea. You might discuss this with the vulnerability research team, because they are already doing a lot of matching work with the NVD database for dependency scanning, since this is one of our main vulnerability databases, and they are doing some matching on CVEs. They could do something similar with the CVSS scores, so that if you have a CVE identified for a vulnerability, you could retrieve the score and use it there. That would be great, yeah.
D
OK, so actually just one more that came to my mind, because I was reviewing the MR from Ellen today. Recently we had an issue in one of the reports: we were having some findings without scanners. So, since this information is coming from the security report, how do we guarantee the schema of the report? Because this is breaking the parsers that we are implementing at the moment, but we don't really know what is required and what is optional. How do we guarantee that?
C
So we have a JSON report schema that tells us which fields are required. Now, the biggest issue is around legacy data; namely, we don't currently differentiate between... I opened like seven issues about this recently, so let me just find these. But we don't currently differentiate between different report versions. There is a version attached to reports, but we don't actually validate that version or have a minimum version set. So let me paste some relevant issues about that.
C
Please enjoy the pretty colors. Yeah, so we have a lot of issues around this, documenting a version compatibility matrix, because currently we theoretically have full compatibility. The primary reason that this breaks is if you're uploading a really old report. The reason this isn't normally an issue, except in our testing, is because the parsing of the JSON report is almost simultaneous with the generation of the report.
C
There
is
a
really
a
common
use
case
where
someone
would
actually
download
a
report,
artifact,
wait
and
then
later
upload
it
back
into
the
rails,
app,
that's
just
something
we
do
to
speed
testing
along.
So
it's
pretty
rare
for
there
actually
to
be
a
discrepancy
between
an
analyzer
generating
an
old
report
and
the
rails
app
uploading
it.
The
only
time
that
happens
is
if
you
pin
to
an
old
version
of
the
analyzers,
because
we
changed
something
that
broke
your
workflow,
which
is
still
a
valid
reason.
So
we
have
lots
of.
C
But I guess I would say that this one is kind of tangential; it really depends on how they do their version pinning. If they actually release new versions in a similar way to what we do, which basically means we release all our analyzers as a Docker image tagged with the major version, and any time we make an update we re-tag that image, then that will continue to work. But there is definitely more potential for third-party analyzers to break.
E
Yeah, I think the main issue here, from the vulnerability management team, is to deal with the discrepancies between what the schema is saying and what the Rails application is supposed to support, and this was a discussion we had in the recent brainstorming session in Secure. To clarify a bit: it was regarding the addition of the vendor property, I think, and it's that everything you are adding to the report should be considered optional from the perspective of the report schema.
E
We add a new property and we say that, starting from this version, this should be a required field, because we assume that any vendor, anybody, any GitLab analyzer outputting this version of the schema should absolutely provide this property. But on the Rails side, you still need to be backward compatible until three major releases back, which is probably a ton of different versions. So this is the main guideline: usually it means you should be backward compatible.
E
So even if the report schema is saying that this field is required, the Rails side should consider it as optional, because you still support older versions of the report that don't provide that information. Depending on the use case, it can be addressed differently; for example, if you can provide a default value, well, then you can consider this a required field, and the parser for the older versions of the report will provide the default value instead.
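That guideline could be sketched as a parser-side fallback. The `vendor` property is just the example raised above, and the default value here is an assumption for illustration.

```ruby
require 'json'

# Newer schema versions may declare `scan.vendor` required, but the parser
# still accepts older reports and fills in an assumed default instead of
# rejecting them, keeping ingestion backward compatible.
DEFAULT_VENDOR = { 'name' => 'Unknown' }.freeze

def parse_scan(json_payload)
  scan = JSON.parse(json_payload).fetch('scan', {})
  scan['vendor'] ||= DEFAULT_VENDOR # older report versions omit this field
  scan
end
```

This way the schema can tighten over time for report producers, while the consumer keeps accepting the three major releases' worth of older formats mentioned above.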
A
All right, so I'm going to share this recording. I'll leave this agenda open, and if there are more questions added that can't be answered asynchronously, we can decide if we want to set up another meeting like this. So I really appreciate everybody's time. We do have four more minutes; if anyone has any more questions, there's still time.
E
Do we want to set up an action item to make sure we finally put that somewhere? Because this is a recurring topic in Secure, it has been a recurring topic: yeah, we need to document, we need to document. It's been more than a year and, well, nothing much has been documented, so I think it would be great to have this as an action item in here, right? So...
A
I've written an issue, and I'll link that in a minute, under the assumption that the Threat Insights team would be writing that documentation and working with the Secure team to ensure that it was correct. I'm kind of looking at Matt and me as the two folks who would be kind of on the hook for that. Does that sound accurate to you guys? Yep.
A
And I'll share that right after the call. All right, thank you everyone so much for your time. This was really informative; I feel like I learned a lot today, personally. So, like I said, I'll share the recording, and if there are more questions that come out of it, we'll try to answer them asynchronously. If not, we will find another time to talk.