From YouTube: Secure::Static Analysis weekly meeting for 2020.11.02
B
Happy Monday once again; I hope everybody's continuing to have a good week. Well, you know it's Monday, but it'll be the week of weeks. I'll jump into the agenda.
B
All right: backlog refinement office hours, to me, has run its course. I thought it was a great idea while everybody was ramping up, while we were learning how to work with each other, how to approach some of these issues, and everybody was getting used to the problem domain that we're in. But I think we've outgrown it, and the reason I say we've outgrown it is that very rarely do any of you bring items to discuss, and it's usually talking about priorities or planning.
B
So I've seen people already responding to this, but I'd like to open it up, so it's not focused internally on backlog refinement but open to the larger GitLab. And I'm not talking engineering; I'm talking sales, professional services, solution architects, TAMs, and the like, because the uptick in requests for our assistance is noticeable, as are some of the practices that I'm seeing done by the field.
B
I
would
like
the
opportunity
to
correct
and
show
them
how
we
want
them
to
use
things
like
our
rendered
templates
and
and
answer
questions
and
so
making
it
open
ended
up
so
that
we
can.
We
can
do
more
of
a
q
a
and
that
doesn't
say
that
we
can't
use
that
time
to
refine
sticky
issues,
but
it
would
compete
with
other
things
that
are
on
the
agenda.
B
Unless
he
just
wants
to
okay,
all
right
deprecations
14-0
is
coming.
We
have
this
90-day
announcement
period
that
we
need
to
do
before
we
remove
things.
So
if
we
want
to
remove
items
in
14-0,
we
got
to
start
thinking
about
deprecations
now,
because
that's
going
to
come
around
april
or
may
of
2021.,
so
issues
filed.
If
you
have
items
in
mind
that
you
would
like
to
see,
replaced
deprecated
or
removed
it,
please
please
contribute
to
this
issue.
D
The only thing I would call attention to is that we've mentioned the potential Semgrep transition, and there's a pretty good thread of thoughts there. This is something where we probably do want to figure out what our strategy is sooner as opposed to later, so take a look at that and feel free to put any comments there.
C
Thanks, and sorry for the late addition, but there's been some conversation around the Security & Compliance page, you know, for access for Core users, so they can start configuring things like SAST and Secret Detection that are available now. We've uncovered a lot of Enterprise-versus-Core codebase restrictions, things like that, so I think it leads back into that conversation about having an architectural discussion.
C
I see Taylor's been adding some commentary to that issue. So my main questions are: what is our plan, and what are the expectations around implementation? Taylor, I see you mentioned that's our next frontend priority, but I definitely anticipate a lot of backend work making sure the data is exposed.
C
We have those APIs, so I just wanted to raise this and make sure we were setting expectations around capacity in the next couple of milestones. I'd also really like to have a design session, kind of a project-kickoff type of session, where we talk about this and really learn what this initiative is all about, and then feed that into a technical discovery session as well, so we can start to uncover things. So: thoughts? And Taylor, feel free to represent your stuff.
D
Yeah, so there are a couple of different things here. This page is surprisingly much more complicated than it looks at face value; there are a lot of very bizarre edge cases that we have come to learn about. So from my perspective this probably should be promoted to an epic. There are a couple of pieces to it. One would be updating the screenshots on the new discovery page, which are currently old. That should hopefully be something easy; Becca, I think, has provided all of those screenshots, so that should be a simple update.
D
We then have to go through the page as it exists today and try to un-gate it from its feature check, and make sure that we're properly checking all of the cases where you should or shouldn't be able to see certain configuration options. For now I think that's maybe a little bit easier, given we only have one configuration UI.
D
We'll have to look at the permissions for that, which are the user roles, and then we'll also need to look at how we're doing licensing for it. I suspect we may hit some of the problems where we're not granular enough with our license, so we'll have to look into what the best direction is there. And then the last piece would be exposing our configuration MR button, not the UI page, for SAST: basically what we initially released, which just goes straight to an MR.
D
So the idea with this whole thing, basically, is that all GitLab users should be able to see this page. It's an entry point to learn about all of the different things that we've got, and it gives us a consistent place to link people throughout the product, which we surprisingly don't have today. We can't link somebody to a place to learn about all of our security functionality in the product.
C
So with this page, through the recent additions we've made, we've learned that it's heavily driven off of pipeline status. That inherently isn't a bad idea, because it's real, but it's hardwired: every single row in there is dependent on it. I recently added on-demand DAST scans, and those aren't dependent on a pipeline scan, so it's really hard to fake something; I had to essentially fake a data row and add all the tests around that. So I think that's one of my main considerations for this new page.
C
It's going to be very lightweight for Core users, right? We don't have all the data, and, like you're mentioning, the License Compliance granularity. So it sounds like we need to pick these off one by one to feel out what the data looks like, but I'm envisioning us revamping how we're exposing this data. Maybe the backend works similarly to how it does today, but in terms of how the frontend consumes it, it would be a lot more flexible than it is now.
F
Specifically, I know that at least with the configuration page, obviously that's a SAST thing right now, but I know that at least Nicole was looking at expanding that as well. So I'm curious how much this work is scoped to SAST versus something we'd be genericizing more.
D
It is something that the Dependency Scanning team is working on right now. As a first pass they're going to start with the MR-only option, so no configuration UI. I believe that's scheduled for work to start in 13.7, but I believe it'll be 13.8 or 13.9 before they actually release anything, so we'll be well ahead of them. But anything we do here should make it as easy as possible for other scan types to do the same thing that we're doing.
B
Okay. I'm just trying to understand the boundaries between what we control within the UI versus what the Container Security group, under its new charter, is responsible for, and I'm still internalizing the distinction. So that's why I'm asking. Okay.
F
I definitely think that's a prerequisite for establishing the permissions, because we really need to rethink how that permission model is laid out; it's currently categorical, not feature-based.
D
I will click the button and then can outline a few placeholder epics, issues, and sub-issues.
C
So as part of the initial issue creation, Taylor, would you mind explaining the structure you're envisioning for focus topics? Maybe we can centralize sync discussions around that, so we're focusing on the right priority. We can think about DRIs once we have those issues as well. Cool, thanks.
F
Cool, so I was just going to do a quick demo of the work I was doing for Q3 on augmenting severities. So let me share.
F
Yep, yes, okay, cool. All right, so for Q3 I was working through this epic, which was primarily around improving our data quality. In terms of normalizing our data, we have a fair number of scanners that do not report a severity, per this handy-dandy, super-readable chart. We are missing Sobelow and Security Code Scan; the ones that recently changed are Flawfinder, NodeJsScan, ESLint, and Brakeman. So a fair number. We've done a pretty decent job of improving that.
F
So
this
is
the
basic
breakdowns,
brakeman
eastland,
flawfinder,
etc.
So
for
some
of
these,
it
was
a
bit
easier
because
in
the
case
of
something
like
flaw
finder,
once
you
actually
read
through
the
docs,
they
say
this
attribute
that
we
return
as
risk
is
essentially
a
severity
score
instead,
and
so
in
this
case
we
just
went
back
through
and
remapped
confidence
to
severity.
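The remapping just described can be sketched roughly like this; the field names, the 0-5 risk range, and the thresholds are illustrative assumptions, not the actual analyzer code.

```python
# Hypothetical sketch: translate Flawfinder's numeric risk level
# (0-5), previously surfaced as "confidence", into a report severity.
# Thresholds here are illustrative, not the analyzer's real mapping.
def flawfinder_risk_to_severity(risk: int) -> str:
    if risk <= 0:
        return "Info"
    if risk == 1:
        return "Low"
    if risk <= 3:
        return "Medium"
    return "High"

def remap_finding(finding: dict) -> dict:
    """Move the old confidence-style value into the severity field."""
    remapped = dict(finding)
    remapped["severity"] = flawfinder_risk_to_severity(finding["risk"])
    remapped.pop("confidence", None)  # confidence is no longer exposed
    return remapped
```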
F
This
I
I
was
looking
into
the
original
decision
here.
I
didn't
really
figure
out
a
good
reason
why
we
are
using
confidence
initially,
but
it
also
matters
a
lot
less
than
it
used
to.
We
don't
even
expose
confidence
in
the
ui
anymore.
That's
just
something
that
customers
don't
really
care
about.
It
seems
so
that's
fine.
F
In
this
case,
we
actually
bring
that
confidence
to
severity,
because,
according
to
the
description
here,
it's
more
around
risk-
and
I
personally
would
say
risk
is
a
lot
closer
to
severity
in
the
case
of
break
man
that
one
is
also
similar,
but
according
to
break
man's
docks,
they
actually
conflate
the
two
and
someone
actually
asked
about
this.
So
in
that
case
the
author
considers
it
one.
In
the
same,
so
in
this
case,
we
actually
did
a
similar
remapping
effort,
and
this
was
instead
of
moving
confidence
to
severity.
F
...we now report it in both places, so severity is informed by the confidence level. There is a really good methodology for how we're doing severity here.
F
Now
I
started
with
those
two,
because
those
two
are
kind
of
the
simplest.
It's
a
pretty
quick,
easy
change,
but
some
of
these
it's
a
bit
more
complex
because
we
don't
always
have
a
provided
value
to
map
to
eslint
is
probably
one
of
our
simplest
analyzers.
It
doesn't
really
have
any
concept
of
what
a
severity
or
risk
score
would
be.
So
we
can't
really
do
much
with
this
one.
F
It
just
gives
us
either
it
fails
or
doesn't
fail
it's
about
as
simple
as
the
lyncher
can
get
so
for
this
one
we
essentially
have
to
add
a
mapping
layer
on
top
of
it
to
remap
things.
If
you
look
at
something
like
the
description
mapping,
we
actually
have
a
hard-coded
map
within
our
analyzer,
where
we
take
the
rule
name
and
we
give
it
a
human
readable
description.
So
we
need
a
very
similar
thing
here,
where,
once
we
have
a
score
that
we
actually
assess,
we
can
map
it
within
our
analyzer.
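Analogous to that description mapping, the severity mapping layer could look like the sketch below; the rule IDs and severities are made-up placeholders, since the linter itself provides no severity concept.

```python
# Hypothetical hard-coded mapping layer on top of a linter that has no
# native severity: rule name -> severity we assess. Entries are
# placeholders, not the analyzer's real table.
RULE_SEVERITY = {
    "detect-eval-with-expression": "High",
    "detect-non-literal-regexp": "Medium",
    "detect-unsafe-regex": "Medium",
}

def severity_for_rule(rule_id: str) -> str:
    # Fall back to a conservative placeholder for unmapped rules.
    return RULE_SEVERITY.get(rule_id, "Unknown")
```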
F
So the one I really want to talk about here is Sobelow. Sobelow, our Elixir analyzer, is pretty interesting because of the way it works: you take a module, in this case let's look at XSS, and then there are various detection rules. So we have a category like cross-site scripting, and then we have the actual... there we go. So we have this parent one, which is the actual rule name, XSS, and then we have a context by which it's detected: send_resp, raw HTML, content type.
F
So what I did when I started looking at Sobelow was first map all of the severity categories, and then for each one of these modules we have each of these detection rules. In some cases the detection rule is essentially the same: Raw and SendResp here we map to CWE-79, which is "improper neutralization of input during web page generation". But in some cases, like this one, it's a missing content type, so we're more vulnerable to a cross-site scripting attack because of the missing content type.
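The module-plus-detection-rule structure can be modeled as a lookup keyed on both parts; only the CWE-79 entries reflect the discussion above, and the rule names approximate Sobelow's.

```python
# Sketch of a (module, detection rule) -> CWE lookup for Sobelow
# findings. Rule names are approximations; only the CWE-79 mapping is
# taken from the discussion here.
SOBELOW_CWE = {
    ("XSS", "Raw"): 79,          # improper neutralization of input
    ("XSS", "SendResp"): 79,     # during web page generation
    ("XSS", "ContentType"): 79,  # missing content type, still XSS
}

def cwe_for(module: str, rule: str) -> str:
    cwe = SOBELOW_CWE.get((module, rule))
    return f"CWE-{cwe}" if cwe is not None else "CWE-unmapped"
```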
F
For each one of these it could probably be CWE-23, but it would require actually looking through each individual module to determine that. So we have a CWE, and CWEs are basically categories; they don't actually tell us too much about what the actual severity or risk rating would be. For that we need something like a CVSS score.
F
Now let's take the path traversal here as a good example of this. Say, send_file. All right, so directory traversal in send_file: it detects if there is an unprotected string getting used in the send_file call and raises that issue. So if we're actually calculating our CVSS score for it, the first attribute, under exploitability, is Attack Vector. This is essentially: from where can this be exploited?
F
Do I need to be on the physical machine, on the local network, on an adjacent network, or just on the network? This is pretty much Network 99% of the time for us, because, well, we don't really know. Next, Attack Complexity is how complex the attack would be: whether it can only occur on Mondays, whether it requires a very specific configuration, a series of things like that. In this case, path traversal, you're just passing in a string, so I'd say that's fairly low attack complexity.
F
Privileges Required is things like whether the user has to be logged in or has to be an admin, and again, we don't really know here. So in these cases we have to be fairly generic, because we're not looking at a specific user's case within their application; we're just trying to establish a base score to look at first.
F
So in this case I would go with None, because we should default to the highest risk here; we're not assessing for an individual customer. User Interaction, and this one's a bit confusing, is whether or not we need a user other than the attacker to participate, such as waiting for a user to send in their input first. In this case I'll say None, because a user can trigger the path traversal directly.
F
Scope is an attribute about whether the affected component is isolated or whether exploitation of the vulnerability affects a different component. For example, if you have a content security policy like a same-origin policy, you can trigger a script tag and an alert inside someone's browser, but it doesn't affect other users and it doesn't affect other systems.
F
For path traversal this could be fairly damaging: imagining the worst case for a send_file path traversal, you're overwriting a sudoers file or a password file. So we're going to say Scope: Changed there, because it could change the scope from the vulnerable component to other components. Confidentiality impact: potentially high, so let's go with High. Integrity: similar, High. Availability: I don't really know how you'd use a path traversal to take down a system, but I think it's possible.
F
Overwriting a sudoers file could be a good example of that, so also High. Cool. So this calculates a base score of 10, which is Critical. There's a lot of nuance in this, so this is the part where I would definitely pull in the AppSec team to evaluate the score, but as a baseline we have a score of 10 here, which is as high as it goes. The temporal score is then something that can affect it.
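The walkthrough above follows the CVSS v3.1 base-score formulas from the first.org specification; here is a sketch of that arithmetic, under which the vector just assembled (AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H) does come out to 10.0.

```python
# CVSS v3.1 base-score sketch, using the metric weights and formulas
# from the first.org specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
PR = {"U": {"N": 0.85, "L": 0.62, "H": 0.27},   # scope unchanged
      "C": {"N": 0.85, "L": 0.68, "H": 0.50}}   # scope changed
UI = {"N": 0.85, "R": 0.62}
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}

def roundup(x: float) -> float:
    """Round up to one decimal, per the spec's Roundup() pseudocode."""
    n = round(x * 100000)
    return n / 100000 if n % 10000 == 0 else (n // 10000 + 1) / 10

def base_score(av, ac, pr, ui, s, c, i, a) -> float:
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = (6.42 * iss if s == "U"
              else 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15)
    expl = 8.22 * AV[av] * AC[ac] * PR[s][pr] * UI[ui]
    if impact <= 0:
        return 0.0
    raw = impact + expl if s == "U" else 1.08 * (impact + expl)
    return roundup(min(raw, 10))
```

For comparison, the same vector with Scope unchanged lands at the familiar 9.8 Critical.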
F
Say there's a fix: if someone releases a patch for this path traversal vulnerability, that's the kind of thing the temporal score layers on top of the base score. If it's unproven that the exploit exists, that actually lowers the score from a 10 to a 9.1, because it's just theoretical; and other things, like an official fix, lower it even more.
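The 10-to-9.1 drop mentioned above falls out of the spec's temporal formula with the "unproven" exploit-maturity weight (E=0.91); the official-fix weight (RL:O = 0.95) is taken from the same CVSS v3.1 tables.

```python
# Temporal = Roundup(Base * ExploitMaturity * RemediationLevel
#                    * ReportConfidence), per CVSS v3.1.
def roundup(x: float) -> float:
    n = round(x * 100000)
    return n / 100000 if n % 10000 == 0 else (n // 10000 + 1) / 10

def temporal_score(base: float, e: float = 1.0, rl: float = 1.0,
                   rc: float = 1.0) -> float:
    return roundup(base * e * rl * rc)
```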
F
So I think one issue with this basic methodology is that we're establishing a base score that can work for all of the findings we discover. An individual finding within a user's application should also have a temporal and environmental score, but that's not something we can assume just based on what we're doing here.
F
If we find a leaked passcode in a Rails application, the temporal score would be affected by whether or not we've revoked that secret yet, and the environmental score by whether or not that commit has been pushed up where someone can see it, or something like that.
F
But if the goal here is establishing a generic base severity to use for our rules, we can't really capture that per-finding part. Okay, cool, thanks. Does that make sense? Yes, absolutely. Cool, so I was just going to copy over here.
F
Okay, so that's how we calculate that one. I haven't really looked through these, but as a baseline I would probably set all the path traversals the same; I don't think they vary too much across these modules, but that's worth a check before I do it. So... oops.
F
Cool,
so
that's
that's
essentially
how
we
came
up
with
these
scores
here.
One
addition:
I
guess
two
additional
things
I
would
say
the
there
isn't
always
a
easy
score
to
go
with
here.
I
don't
really
trust
myself
as
like,
not
a
security
professional
to
establish
these
so
having
the
security
team
review.
All
these
scoring
and
cw
mappings
is
one
thing
whether
or
not
we
even
need
to
map
to
cwe
is
another.
F
The second thing is this CVSS component mapping. What they did is pull all National Vulnerability Database submissions back to 2017, I think, and extract the CWE, the case count, and individual aggregations for each of the levels per attribute in the scoring vector. So if we look back at this here, we have the Attack Vector levels: Network, Adjacent Network, Local, Physical. And I guess I forgot to mention that the basic vector syntax is Attack Vector: Network, Attack Complexity: Low, and so on.
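That vector syntax is compact enough to parse mechanically; a minimal sketch (assuming well-formed v3.x vectors):

```python
# Parse "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H" into a dict of
# metric -> level, dropping the leading version prefix.
def parse_vector(vector: str) -> dict:
    parts = vector.split("/")
    if parts and parts[0].startswith("CVSS:"):
        parts = parts[1:]
    return dict(part.split(":", 1) for part in parts)
```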
F
There is no CWE-23 there, so that means no one has ever submitted CWE-23 to the National Vulnerability Database, and that probably means it is not the CWE we want to be using.
F
So if we go back here, it looks like there are ten thousand instances of CWE-22 submitted, so that's probably a better CWE to use here for the base scoring. In my mind I would imagine specificity is better, but from what I've seen people use the more generic ones, so CWE-22 it is.
F
This
is
something
that
the
vulnerability
research
team
is
actually
using
to
ensure
that
their
results
are
accurate
when
evaluating
risk
source
scores
when
people
submit
requests
for
c
cves,
but
in
our
case
it's
just
useful
for
fact
checking
that
our
severity
ratings
are
accurate.
So
I
can
go
through
here
determine
this.
Some
of
these
are
a
bit
less
clear
like
for
18
looks
like
exactly
50
of
people
said:
it's
lower
high
attack
complexity,
but
there
is
just
a
little
bit
of
nuance
there.
F
So that's where we end up here. In the case of something like CWE-644, I'm of the opinion that it is the right CWE, but because it's not in the NVD, we end up calculating this score from the parent of CWE-644, which is CWE-116.
F
And that's where we end up with the scores. There are still a few missing, but it takes a lot of manual effort to get there. Cool. And then there's another tab here for ESLint, which is less far along.
A
So, Lucas, are we keeping a record of the link that you used for the calculation somewhere in our project? I understand it's in the spreadsheet right now, but should we put in the link showing that this is how we calculated the score?
F
Yeah, there's a "clarify severity"...
F
I
don't
remember
the
the
name
of
the
issue.
Okay,
I
think
it's
linked
on
here,
but
there
is
an
issue
that
we
have,
which
is
essentially
documenting
how
we
normalize
severity,
which
I'm
not
sure
where
that
one
is.
Oh,
I
also
props
to
zach
for
this
one.
I
forgot
to
mention
that,
but
his
work
on
updating
node.js
scan
to
v4,
which
is
funnily
enough,
v015
this
one
added
severity
to
our
node.js
scan
as
well,
so
that
was
another
awesome
win
to
get
severity
in
another
analyzer.
F
The
idea
here
is
that
we
will
actually
no
longer
need
this
spreadsheet.
It's
just
something
that
using
to
figure
these
out
once
this
list
is
complete.
We
basically
just
take
this
score
and
similar
way
to
that
mapping
we
had
in
the
eslint
analyzer.
We
can
actually
put
this
mapping
between
the
rule
name
and
the
score
to
return.
F
Actually,
I
guess
it
would
be
this
one,
because
we
use
a
textual
severity,
but
we
can
put
that
directly
into
the
analyzer
to
return
with
our
data,
and
that
gets
us
the
data
immediately
and
then
look
to
contribute
this
data
back
upstream
to
the
tool
if
they
are
interested.
If
not
that's
fine,
we
do
it
ourselves,
but
ideally
sobolo
cares
about
severity
enough
where
we
can
include
this
and
their
data
directly.
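For the textual severity itself, the CVSS v3.1 qualitative rating scale gives the buckets; a sketch of turning a numeric score into the text the analyzer would report:

```python
# CVSS v3.1 qualitative scale: 0.0 None, 0.1-3.9 Low, 4.0-6.9 Medium,
# 7.0-8.9 High, 9.0-10.0 Critical.
def textual_severity(score: float) -> str:
    if score <= 0.0:
        return "None"
    if score < 4.0:
        return "Low"
    if score < 7.0:
        return "Medium"
    if score < 9.0:
        return "High"
    return "Critical"
```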
D
Okay, that was super helpful, Lucas. I saw your comment about doing a release post; yes, what analyzers does it need to cover?
F
I think just Brakeman; it's the only one that we didn't include in 13.5, but Brakeman now includes a severity. Okay, perfect, I will.
F
Yeah, that's one thing that's been very difficult, and in this case too, this group...
F
This group right here: we talked about this a little bit, but one thing that becomes apparent when you're looking through every rule from these is that we have a lot of, I wish I had a different word than impurity, in our analyzers. One good example is that Gosec includes a hard-coded credentials password rule.
F
I
I
feel
like
to
properly
do
this.
We
should
probably
exclude
hard
code
credentials
from
our
static
analysis,
rule
side,
that's
something
for
secrets,
but
in
a
similar
way,
here's
this
group
of
vulnerabilities
discovered
in
sobelow
that
are
basically
all
dependency
skin.
So
this
one
looks
within
the
it
checks.
Your
dependency
file
looks
for
affected
versions
and
complains
about
it
with
the
cde
one
of
the
nice
things
about
this
is
that
we
can
actually
take
the
cv
and
pull
the
exact
cvss
score
that
it
was
submitted
with.
F
So in that case we don't have to calculate it ourselves, which is why it's so much easier for dependency scanning to have a severity. But it does mean we have this group of rules that aren't really static analysis checks, yet we include them anyway. So it's definitely interesting going through these manually...
F
...and finding out what are, I wouldn't say data quality issues, but data considerations.
F
Yeah, and I think that's a good point too, because we are reporting it from multiple locations. I guess it's like asking your text editor to check your whitespace and it starts complaining about your spelling: it's not necessarily something you want from it, and it definitely would cause duplicate information too.
B
I
will
resist
the
temptation
to
jump
on
a
soapbox,
but
so
okay
alrighty,
as
noted
we're
over
time.
So
thank
you
everybody.
I
hope
you
have
a
good
rest
your
week
and
there's
since
there's
nothing
going
on
the
rest
of
the
week,
so
it
should
be
kind
of
quiet.