From YouTube: 2020.05.05 - SAST to Complete working session #1
Description
This is the second half of the first working session we had to define what it means for SAST to be declared complete.
A: If you look at things like ESLint, which provides its findings but results in unknowns as far as their severity, we have an uneven experience depending on which analyzer you're using: some report severity, some don't. That's what this is intended to be: to make it a more unified experience, so that we can augment the data that is coming out of our underlying tools.
C: So I think, maybe as a secondary, there are two approaches. That would be the right approach if we controlled more of the underlying tools. For something like the Go analyzer, it's compiled with the rules included, so we'd have to recompile our own version to separate out a data file, which is not unheard of, but remapping as a post-processing step might be an easier way to achieve the desired behavior.
A: The ones that I have seen most, I mean, for this: if you want severity, then you're talking about an override for severity based on rule ID. You could certainly do that, and these are other fields that have been discussed before as well. I mean, like for React with ESLint: we're not happy with the description, so we've got an issue in flight right now to change the descriptions, and I know we do that elsewhere. And if we go with the workflow through the Web IDE and we're wanting to provide recommendations for how to deal with this, this would be a place that I would suggest we do it: keep it tied to the analyzer itself.
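A rough sketch of what that post-processing override could look like, keyed on rule ID (the report shape, rule IDs, and severity values here are invented for illustration, not actual analyzer output):

```python
# A minimal sketch, assuming a JSON report with a list of findings that each
# carry a rule ID. The mapping below is a hypothetical override table.
import json

SEVERITY_OVERRIDES = {
    "gosec.G104": "Low",   # hypothetical: noisy unhandled-error rule
    "gosec.G501": "High",  # hypothetical: weak-crypto import rule
}

def remap_severities(report_path: str) -> None:
    with open(report_path) as f:
        report = json.load(f)
    for finding in report.get("vulnerabilities", []):
        rule_id = finding.get("rule_id")
        if rule_id in SEVERITY_OVERRIDES:
            finding["severity"] = SEVERITY_OVERRIDES[rule_id]
    with open(report_path, "w") as f:
        json.dump(report, f, indent=2)

remap_severities("gl-sast-report.json")
```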
C: That's the right way; it is true that we could do that. If the goal is to give a severity, it doesn't have to be the right severity initially, and honestly I think that could be an important first step if we want to go that way. I guess it's a question of perception: whether customers would prefer an unknown severity, or a medium in all cases.
C: And we actually do map unknown above critical currently; that's in our default sort within the UI. Unknown sorts above critical because it has the highest potential severity, so we'd actually be downgrading the criteria for what these unknowns would be. But I still think it's worth recording as the easiest way.
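For reference, a tiny sketch of that ordering, with unknown sorting above critical (the ranking logic is illustrative, not the actual UI code):

```python
# A minimal sketch: "unknown" sorts above "critical" because its true
# severity could be anything, including worse than critical.
SEVERITY_ORDER = ["info", "low", "medium", "high", "critical", "unknown"]

def severity_rank(finding: dict) -> int:
    """Return a sort rank; higher ranks are shown first."""
    return SEVERITY_ORDER.index(finding.get("severity", "unknown").lower())

findings = [
    {"id": "A1", "severity": "High"},
    {"id": "B2", "severity": "Unknown"},
    {"id": "C3", "severity": "Critical"},
]
# Prints unknown first, then critical, then high.
for f in sorted(findings, key=severity_rank, reverse=True):
    print(f["id"], f["severity"])
```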
B: The way that I think about this, largely, is that we can continue to add open source scanners all day long, and at some point it will not be worth doing anymore. Instead, this is more looking towards the future of: how would we go about doing a language-agnostic scanner that would be capable of looking for classes of vulnerabilities rather than specific implementations of them? This gets into some theoretical concepts like data flow and how programs execute, and looking for higher-level types of vulnerabilities; there's a whole established field of research here.
C: So, since this is mostly in vulnerability research, I guess I'd be curious where we're heading there, since I don't think that our team would have the purview to write those rules, and we didn't write the engine. But there was an alternate proposal for that: the really dumb way is to not use an AST (abstract syntax tree) and instead just use a giant regex engine.
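As a toy illustration of that regex-engine approach, matching source text directly instead of walking an AST (the rule ID and pattern are invented):

```python
# A toy sketch of regex-based scanning: pattern-match raw source text
# rather than analyzing program structure. Fast and simple, but blind to
# context an AST-based scanner would have.
import re

RULES = {
    "PY-EXEC-001": re.compile(r"\beval\s*\("),  # flags any eval( call site
}

def scan(source: str):
    for rule_id, pattern in RULES.items():
        for match in pattern.finditer(source):
            line = source.count("\n", 0, match.start()) + 1
            yield {"rule": rule_id, "line": line}

print(list(scan("x = eval(user_input)\n")))  # [{'rule': 'PY-EXEC-001', 'line': 1}]
```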
B: I realize I'm going to be cagey with this comment, but we are looking at a company right now that has a concept of this sort of generic data flow scanning. That could be an interesting potential acquisition target. So there are many ways for us to skin this cat. This is more one of: we need to have an opinion, as a team and as the owners of static analysis, on how we would support something like this.
C: Maybe I'm not following your question, and if this gets too nitty-gritty, that's fine, but I would say that there's a distinction that we need to make between a template, which replaces the contents of a file, and something like the CI recipe idea. We've talked a lot about injecting a subset, in the case of the CI configuration YAML, into the existing file. So I think that there's an important distinction.
C: Yes. If I wanted to add a SAST job to my CI configuration today, we don't provide a mechanism to do that without replacing the entirety of the file. The current way that templates work, templates are full-file replacements, rather than appending into an existing file. So that's the distinction: the CI recipes approach would be injecting a snippet into the file.
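A minimal sketch of what that snippet injection could look like, assuming a PyYAML round trip (the job definition and entrypoint are placeholders, not the actual SAST template):

```python
# A minimal sketch: append a SAST job into an existing .gitlab-ci.yml
# instead of replacing the whole file. Job shape is a placeholder.
import yaml  # pip install pyyaml

SAST_SNIPPET = {
    "sast": {
        "stage": "test",
        "script": ["/analyzer run"],  # hypothetical entrypoint
        "artifacts": {"reports": {"sast": "gl-sast-report.json"}},
    }
}

def inject_sast(ci_path: str) -> None:
    with open(ci_path) as f:
        config = yaml.safe_load(f) or {}
    if "sast" in config:
        return  # job already present; nothing to inject
    config.update(SAST_SNIPPET)
    with open(ci_path, "w") as f:
        yaml.dump(config, f, default_flow_style=False, sort_keys=False)
    # NB: yaml.dump drops comments and formatting; a real implementation
    # would use a round-tripping parser such as ruamel.yaml.

inject_sast(".gitlab-ci.yml")
```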
C: Actually, that would simplify work, because we have to do things like figure out how to build some of these. If we never had to figure out how to build a Visual Basic project, and we said "just give us your project and we'll scan it," then that could be easier. There's a fair bit of effort we put into figuring out how to properly build a project.
B: We definitely have competitors who don't even do the build at all. You have to build it, zip up the resulting project, and then pass it over and upload it to their scanners. So I'm interested in ways to reduce this complexity, both from a customer experience standpoint, but also so that we don't have to care about whatever scripts or bad practices customers are using to build their projects.
B: So this sort of goes to the point of: whatever way our customers build, we need to be able to support, and that extends to the way that they use project repositories. A common framework and pattern within the enterprise is these large monorepos. So there are a couple of problems to solve here, from my perspective. One: when we encounter a monorepo, making sure that we know when to scan what, and how to scan the various projects.
B: This is one where we've had customers who might have as many as twenty or thirty projects of the same language, or different languages, in a single repo: different types of projects at who-knows-what depth within that monorepo. There's then the problem of, depending on how customers contribute to that project, whatever their merging process is, making sure that we can support that. Some people do this whole branching or forking of the project, making your changes, and then merging that back in, which could result in large MRs.
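A rough sketch of the discovery half of that problem: walking a monorepo for project files at any depth, so each one gets scanned (the marker filenames are illustrative, not an exhaustive list):

```python
# A minimal sketch: find every project root in a monorepo instead of
# stopping at the first match.
from pathlib import Path

PROJECT_MARKERS = {"pom.xml", "go.mod", "package.json", "*.csproj"}

def find_projects(repo_root: str) -> list[Path]:
    roots = []
    for marker in PROJECT_MARKERS:
        roots.extend(p.parent for p in Path(repo_root).rglob(marker))
    return sorted(set(roots))

# Each discovered directory would get its own scan, rather than only
# scanning the single directory of the first project file found.
for project_dir in find_projects("."):
    print("would scan:", project_dir)
```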
B: Also, I think you may end up with issues of, and this is where I'm not quite sure how standalone vulnerabilities handles this, but do we end up with duplicate listings, and how do we handle the identification of those over time? But again, that's kind of tangential, in the interacting-with-vulnerabilities and vulnerability management scope.
C: Sorry, I think that, as Taylor dug into, there are some interesting considerations if you're trying to segment results in your dashboard; I don't think that falls in our area. If we have bad performance, or traverse too deeply into an application, that is definitely something we could work on.
C: I think that's more of a scanner-by-scanner basis, and in the case of the bullet you have here, that'd be something like inheritance on a pom.xml file for Java, or something that points more to what I would consider to be a bug in how SpotBugs builds a project. So I'm still not sure I quite follow that this is a monorepo problem, but that might just be me.
A: I think Security Code Scan was the one that I'm aware of that was most guilty of this, where it would look for one project file, and it would only scan that one directory, regardless of whether it had sibling directories that were other projects that needed to be scanned as well. That is an example of where we are failing with monorepo support.
A: It's a bug in our wrapping of Security Code Scan, because we pointed it at that one directory, as opposed to pointing it at everything. And then there are the performance implications, because if you imagine that this is a monorepo for any number of microservices, all of which are deployed by different Kubernetes and Helm charts, then there's your performance problem. But okay, this may be just fixing a bug and making sure that we better support it.
A: Alright, squishy: vulnerability research and improving rules. In a nutshell: we have a vulnerability research team; how do we enable them to improve the detection, the rule set, that we are making available to customers? This is something that would be internal to GitLab, but it is also something that would be looked at again, I would hope, and I would articulate it that way.
F: I mean, I think we've talked about this before, Thomas: you introduce it on GitLab for a while, and then eventually, as you know, more and more rules come in, and those rules that are for, you know, Gold customers or whatever eventually get pushed down to Core. And then, if it's in Core, that would then get pushed back to the community as, like, a PR for those individual scanners, at least where the scanner is in a language that allows that.
A: Yeah, that's the way that I've seen this practiced before, where it starts at your highest tier, and then you release it to the community as something that trails at some point later. It puts a stress on the organization to always be innovating, because you need to have something to continually get better and stay ahead of what's released. But in any case, that's a way to do this; that's a strategy.
B: Whether that happens in 2020 or 2022 is a discussion we can have. I think this is one where, today, it's all or nothing on Ultimate, and I think this is one to start providing value further down the tiers. And this is one where I expect that, once we fully support this type of customizing rulesets, we will have customers who will propose rules, the community will propose rules, and we need to think about how we incorporate that. This is where the idea of, like, a rule pack might come in, where there are community-contributed rule packs.
B: Absolutely. This is the longer vision of: we have visibility into what vulnerabilities are being dismissed, which ones are being remediated, and we should be able to use that telemetry data to power our severity, or to recommend fixes, or things of that nature. I think that's a little longer-term, but I 100% agree with you.
C: [inaudible]
B: Yes, so basically we had this realization probably a month ago; I think it was Sid, doing whatever data exploration he was doing, who realized that the more stages a customer uses, the more likely they are to upgrade or, if they're a prospect, to actually convert into a customer. So the goal, basically, is: how do we get more customers using more of our functionality? This introduces the fun problem of: well, you have to have Ultimate to be able to use Secure, so we kind of break that nice cycle.
B: We do have that idea; right now it's in the lovable category. I think this is one where, if we found an easy way to do it, I'd be more than happy to pull that forward. I like the idea of code coverage; I think that's a really interesting way to think about what we do, and I think it also could provide us ways to solve other challenges when we run into scan-length issues: all of your code is not created equal, and there are more vulnerable or risky sections, and if we could prioritize the scanning of those, we could cut through some of the noise. That also goes to the question of: do I care about a low vulnerability in a docs page? Probably not as much as in my actual application.
B: This is something that, now that those scanners are split out, we probably could do as a one-time query with the analytics team, for them to go and search through job logs. That's certainly something we could try to do today, but longer-term, having actual metrics and telemetry on that would certainly be beneficial, and could also very strongly help us answer the question of which languages we should care about the most.
A: It's raining in South Carolina, and it's raining hard; let's just put it that way. All right: vulnerability research, improving rules. Can we punt on this and just say we really need a discovery issue to figure out what this means? Or do we know anything; do we have any informed guesses about what it could mean?
A: I filed this under benchmark projects. To me, that is not something that is a SAST or static analysis concern; however, we do need to get these run automatically. As vulnerability research grows, the benchmark projects, to me, are owned by vulnerability research. It is in our best interest to integrate these into our build process, just like our integration tests are, or can be, to understand how well we are doing as we improve: are we improving the product with the updates that we're providing?
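A hypothetical sketch of wiring a benchmark corpus into the build as a regression gate, in the spirit of the integration-test comparison above (the CLI, report format, expected findings, and threshold are all invented for illustration):

```python
# A pytest-style sketch: run the scanner against a corpus with known
# vulnerabilities and fail the build if detection regresses.
import json
import subprocess

# Known-vulnerable findings in the benchmark corpus (rule, file, line).
EXPECTED_FINDINGS = {("rules/sql-injection", "src/app.py", 42)}

def run_scanner(target: str) -> set:
    subprocess.run(["/analyzer", "run", target], check=True)  # hypothetical CLI
    with open("gl-sast-report.json") as f:
        report = json.load(f)
    return {
        (v["rule_id"], v["file"], v["line"])
        for v in report.get("vulnerabilities", [])
    }

def test_benchmark_detection():
    found = run_scanner("benchmark-corpus/")
    recall = len(found & EXPECTED_FINDINGS) / len(EXPECTED_FINDINGS)
    assert recall >= 0.9, f"detection regressed: recall={recall:.0%}"
```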
B: If y'all have not been following that project, it's very interesting to watch. They've already integrated their benchmark tool with CI, as well as triggering their build process, or their run process, any time our scanners are updated. Right now they only have one benchmark project for SAST, which is Java Spring; I know we want to do more, and they're working on implementing a sample for DAST as well right now.
A: Regularly occurring jobs, as far as the types of work, the classes of work, that we are executing: to me, this needs to be tracked; it needs to be something that is articulated as well. So, one of the bits of analysis I haven't done yet: what's our discovery rate? How much time are we spending, on average, each milestone on remediating or resolving bugs?
A: Because that's going to speak to a percentage of time that just needs to be accounted for, budgeted for, with some variability involved. I know we're spending a lot of time, or planning to spend a lot of time, on updating analyzers, so this is a bit of an outlier, but: how much time are we actually spending keeping our analyzers up to date, our scanners up to date?
C: I think that, for these, the dependency update treadmill and community management and cross-stage collaboration, we often end up talking about things like reaching consensus and having long discussions about these things, and that probably won't change. But I guess, based on the points raised earlier, I feel like we need to have decisions documented on: what is our dependency update target? Do we track the minor and major versions of the underlying tool? Do we bump weekly or quarterly? Same thing with community management: what exactly?
A: What I'm doing Thursday is putting up an MR that is going to create a Static Analysis group handbook page that sits underneath Secure, where we can start articulating things that are static analysis concerns, and I'm going to start with these. I want to talk about workflow, because we're moving more to a pull-based method; I want to put that out there as something that static analysis is doing. I'll also put up a target, and, Taylor, I'm going to add this to our one-on-one.
A: What's our target as far as keeping up to date? Is it quarterly? That's a good callout; we'll put that down. And also these are the kinds of things, like how to engage with us on community management, which is the most squishy: what are we going to accept, what are we not? We can start there; those are artifacts we can at least start pointing to.
B: Alright, one thing that I'll add just to close out here is that this is always going to be a moving target. If you have an amazing shower idea, or otherwise have thoughts of "hey, we could try this other thing," I definitely want to hear those. This list, basically, is where my head has been for the past, probably, two or so months, and this is kind of what I think about all day, every day. So, as you all have ideas, I want to hear them.
A: Okay, all right, we'll close out. Here's what tomorrow is going to be: I am taking this list and putting it into another favorite managerial artifact, also known as the spreadsheet, and that is going to Anu. So, I was asking everybody in the static analysis weekly about deck scoring: is it defined, how much effort is it, how complex is it? These are t-shirt sizes, so small, medium, large for each one of those three. This translates into some rough points.
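A small sketch of that t-shirt-size rollup (the point values are invented for illustration):

```python
# A minimal sketch: three dimensions (defined, effort, complexity), each
# sized small/medium/large, rolled up into rough points.
POINTS = {"small": 1, "medium": 3, "large": 5}

def score(defined: str, effort: str, complexity: str) -> int:
    return sum(POINTS[s.lower()] for s in (defined, effort, complexity))

print(score("small", "medium", "large"))  # 9
```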
A: Point values that we can start talking about, as far as kind of effort and how squishy some of these ideas are, which will be some information we can get back to Taylor about how big these things are. That's what tomorrow is about; that's why the effort was on today, but you already knew that. So we will be walking the output of this conversation; it's just going to be in a different format. We'll be doing that tomorrow.