From YouTube: Secure::Static Analysis office hours for 2021.01.21
A
All right, happy Thursday. Hope everybody's having a good week. You've found yourself on the January 21st edition of Static Analysis office hours. The very first item on the agenda is from Taylor, so the floor is yours, sir.
B
So
one
of
the
first
things
we
wanted
to
do
was
start
more
openly
communicating
about
what
we're
focusing
on
and
we
have
some
really
exciting
things
kicking
off
the
year.
So
I
want
to
review
our
13.9
plan.
The
issue:
is
there
check
it
out?
We
do
these
planning
issues
generally
a
few
weeks
before
a
milestone
starts
we're
trying
to
open
them
earlier,
so
they'll
be
issues
open
very
soon
through
14.0,
but
basically
this
is
an
overview
of
what
we're
looking
through.
B
However,
it
does
not
include
vulnerability
management.
So,
when
you're
looking
at
a
merge
request,
you
don't
get
all
of
the
vulnerabilities
listed
out
and
interactable,
you
have
to
interact
with
the
json
object,
artifact,
so
we're
making
that
more
improved.
You
can
now
download
that
artifact
from
the
mr
page
as
part
of
13.9
we're
working
to
enable
more
core
customers
to
both
understand
that
sas
and
secret
detection
is
available
to
them
and
then
get
it
turned
on.
B
So
that
is
our
core
or
I'm
sorry,
our
configuration
page
for
security
functionality
that
will
expose
the
ability
for
people
to
create
a
merge
request
to
turn
on
sas.
B
It'll
also
include
an
overview
of
all
of
the
individual
scanners,
with
a
call
to
action
to
upgrade
so
there's
a
little
bit
more
awareness
and
ability
to
get
things
turned
on.
It
includes
links
to
documentation
to
know
that
things
are
available,
so
that's
one
improvement
that
we're
making
for
all
users.
B
Next
we're
to
the
point
where
we're
we
have
a
lot
of
analyzers.
We've
got
about
15
different,
open
source
tools
that
power
our
sas
engines
today
about
10
of
those
are
what
we
call
linters,
which
are
more
simplified
security
tools
that
just
lint
code.
So
it's
looking
at
static
analysis
and
finding
security
vulnerabilities.
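As a rough illustration of what "linting code for security" means (a sketch of the general technique, not GitLab's or Bandit's actual analyzer code; the rule IDs here are made up), such a linter boils down to matching known-dangerous patterns against source lines and reporting the file and line:

```python
import re

# Hypothetical rules, loosely modeled on the kinds of checks tools
# like Bandit ship: a rule id, a pattern, and a message.
RULES = [
    ("EXAMPLE001", re.compile(r"\beval\("), "use of eval()"),
    ("EXAMPLE002", re.compile(r"shell\s*=\s*True"), "subprocess with shell=True"),
]

def lint(path, source):
    """Return a (rule_id, path, line_number, message) tuple per match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule_id, pattern, message in RULES:
            if pattern.search(line):
                findings.append((rule_id, path, lineno, message))
    return findings
```

Running `lint("app.py", "x = eval(user_input)")` would report one `EXAMPLE001` finding on line 1 — the same file-plus-line shape the vulnerability reports discussed later are built on.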
B
That
simple
approach
is
something
that
an
open
source
project
called
semgrep
does
and
has
a
more
structured
and
streamlined
manner.
To
run
those
queries,
write
your
own
queries,
there's
a
whole
community
around
simgrep
already,
so
we're
looking
to
replace
our
linter
analyzers,
potentially
with
semgrep.
So
in
13.9,
we're
going
to
focus
on
making
sure
that's
the
right
decision
in
evaluating
what
that
will
actually
look
like
for
us
firsthand,
so
I
think
we'll
either
be
doing
a
python
or
our
javascript
analyzer,
which
is
breakman
or
eslint.
B
I
think
we'll
probably
go
with
python,
which
is
breakman
or
no
I'm
sorry
bandit
it's
bandit,
yes,
bandit
see.
I
can't
keep
them
all
straight.
So
that's
something
that
we're
starting
to
look
into.
What's
really
interesting
is
that
semgrep
is
something
that
comes
up
with
customers.
Quite
a
lot.
We've
had
many
customers
directly
ask
us
for
support
for
semgrep,
so
this
opens
up
a
new
potential
for
customers
to
write
more
flexible
rules
so
very
exciting
there.
B
Next, we've got some improvements in our vulnerability tracking. If you think about the way that our security scanners work, they identify vulnerabilities, and vulnerabilities today are just a specific line in a specific file. That's how we track vulnerabilities as they move around, and that is not flawless: vulnerable code moves around, it moves between files, it shifts up, it shifts down.
B
We
can
occasionally
lose
track
of
vulnerabilities,
so
we're
working
on
a
new
fingerprinting
engine
which
will
basically
make
it
easier
for
us
to
track
vulnerabilities
around
we're
code,
naming
it
tagger,
it's
an
internal
name,
but
basically
it
will
improve
the
accuracy
of
our
tracking
engine.
So
that's
something
that
we're
exploring
in
this
next
release
as
well,
we're
not
fully
shifting
over
to
that
engine.
B
Yet
we're
going
to
run
it
side
by
side
with
what
we're
doing
today
make
sure
that
its
results
are
better
than
what
we're
doing
and
then
in
a
future
release
we'll
shift
to
that
so
really
exciting
things
that
I
think
really
start
to
illustrate
our
focus
on
data
quality
and
improvements
for
our
security
scanners.
So
that
is
really
what
the
the
entire
focus
of
this
year
is
going
to
be.
Is
improving
our
data
quality
accuracy,
reducing
false
positives,
so
yeah
we're
well
on
our
way.
So
that
is
a
quick
overview
of
our
13.9
plan.
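To make the tracking problem concrete, here is a sketch of the general idea (an assumption about how such fingerprinting engines typically work, not GitLab's actual Tagger implementation): a fingerprint tied to an exact line number breaks as soon as code shifts, while one derived from the line's content survives the move.

```python
import hashlib

def location_fingerprint(path, line_number, rule_id):
    """Naive fingerprint: tied to an exact file and line."""
    key = f"{path}:{line_number}:{rule_id}"
    return hashlib.sha1(key.encode()).hexdigest()

def content_fingerprint(path, line_text, rule_id):
    """Content-based fingerprint: survives the line shifting up or down."""
    key = f"{path}:{line_text.strip()}:{rule_id}"
    return hashlib.sha1(key.encode()).hexdigest()

# The same vulnerable line before and after an unrelated line is
# inserted above it, so it shifts from line 10 to line 11.
assert location_fingerprint("app.py", 10, "EXAMPLE001") != \
       location_fingerprint("app.py", 11, "EXAMPLE001")   # loses track

assert content_fingerprint("app.py", "x = eval(user_input)", "EXAMPLE001") == \
       content_fingerprint("app.py", "  x = eval(user_input)", "EXAMPLE001")
```

Content-based fingerprints have their own failure modes (two identical lines in one file, edits to the line itself), which is presumably why running the new engine side by side with the old one, as described above, is the cautious path.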
A
Actually,
I
do
have
one
question
sure
I
was
wondering
if
this
is
something
we're
going
to
be
actively
dog
fooding.
B
Absolutely
we
have
already
done
a
lot
of
work
to
better
dog
food.
Our
security
features
within
get
lab.
I
very
much
want
us
to
do
this
same
thing
as
we
start
trying
out
these
new
tools
and
testing
the
data
quality.
If
we
have
projects-
and
this
is
kind
of
how
we're
evaluating
which
of
the
analyzers,
we
want
to
go
and
test
some
of
these
ideas
with
what
makes
it
easier
for
us
to
get
data
on.
B
C
Yes,
so,
first
of
all,
hi
everybody,
my
first
time
meeting
most
of
you
here
so
I'm
part
of
a
group
of
essays
we're
currently
working
on
this
idea
of
performing
sas
analysis
for
a
portion
of
a
monorepo
and
the
idea
you
know
the
idea
is
that
you
may
have
a
team
working.
You
know.
C
That
was
the
idea
that
we
set
out
with
at
the
start,
and
so
we
kind
of
focused
on
spot
bugs,
because
I
think
daniel
shared
a
demo
of
monorepo
support
for
spot
bugs,
but
I
think
it
became
obvious
that
what
daniel
was
kind
of
demonstrating
there
was
really
the
ability
for
spock
bugs
to
be
able
to
run
analysis
across
every
single
project
or
directory
within
a
monorepo
which
was
kind
of
now.
C
We
have
a
customized
spot,
bugs
analyzer
that
only
scans
that
portion
of
the
repo
that
has
changed
and
so
happy
days,
but
as
we
were
going
through
this,
it
kind
of
raised
some
questions.
First
of
all,
you
know
we
were
kind
of
thinking.
Well,
actually
is
what
we're
trying
to
do
here
wrong,
because
if
you
think
about
how
the
vulnerability
findings
are,
you
know
surfaced
within
the
mr
and,
if
you're,
a
team,
that's
working
on
more
than
one
project
within
that
model.
C
Within
that
repo,
and
let's
say
you
make
a
change
to
project
project
a
and
you
get
the
findings
in
the
mr,
which
is
great.
That's
fine,
because
you
know
you're
you're,
just
really
focusing
on
on
those
changes
you've
made.
However,
if
you
work
on
say,
project
b,
and
we
generate
that
report
again,
but
now
we're
discarding
all
of
the
findings
from
that
were
previously
shown
for
project
today.
C
So
we're
wondering
if
we're
going
down
the
wrong
track
here
or
one
way
that
we
could
probably
work
around,
that
is
merge
all
of
these
reports
together,
iteratively-
and
so
you
know
these
are
questions
that
kind
of
come
to
mind
for
us,
and
I
just
want
to
kind
of
share
this
with
the
group
here
to
see.
Does
anybody
have
any
thoughts
on
this?
Are
we
going
down
the
wrong
rabbit
hole
altogether,
or
does
anybody
see
value
in
what
we're
trying
to
do
here?
C
I,
I
suppose
two
two
examples
or
use
cases
where
I
think
it
might
be
useful
to
do.
This
kind
of
thing
is
if
you've
got
a
very,
very
large
amount
of
repo,
that's
housing,
all
of
your
projects
and
microservices,
and
what
have
you
it
might
make
sense
to
do
this,
because
it
will
reduce
the
overall
time
it'll
take
to
analyze
the
the
changes
that
you've
made
right
so
you're.
C
You
know
this
from
a
performance
overhead
there's,
I
think,
there's
value
there
and
the
second
benefit
might
be
that
it
makes
you
more
efficient
because
you're
only
really
zeroing
in
on
or
focusing
on
those
changes
that
you've
made.
Rather
than
analyzing
the
entire
repo,
so
that's
kind
of
an
overview
of
what
what
we're
trying
to
do.
Where
we're
at
some
of
the
questions
that
have
kind
of
bubbled
up
as
part
of
this
work
and
yeah.
D
B
I can sort of kick us off here. Monorepo support has been a struggle, to say the least. It's complicated, it's different with every single customer; the goals of what they're trying to do and how they organize are all different. So I think this is one where it kind of just depends on what a customer is doing. But I think your focus there on speed of scanning and accuracy of results is something that's very much top of mind for us, and I think it comes down to, fundamentally, the way that we do scanning.
B
Today
we
scan
twice
in
a
merge
request.
We
try
to
do
the
comparative
piece
so
that
it's
it's
very
cause
and
effect
driven
we're
not
super
great
at
that
today.
I
will
admit
actually
some
of
the
tracking
vulnerability
stuff
that
I
mentioned
before
should
help
with
some
of
that
and
then,
when
you
merge
your
merge
requests.
We
of
course
scan
the
whole
project
again
and
that's
what
feeds
into
our
security
dashboards.
So
I
see
these
as
sort
of
two
concerns.
B
B
D
Yeah,
I
just
I
just
had
a
couple
things
to
mention:
yeah
you're,
correct
with
how
you
interpreted
what
I,
what
my
demo
showed.
So
it
just
does
a
blanket
just
scans
everything
I
see
quite
a
bit
of
value.
In
what
you're
proposing
I
mean,
I
could
think
back
to
being
a
consultant
in
mono
repos
and
would
would
want
this
similar
thing,
the
biggest
gotcha
that
I
see
that
you're
going
to
run
through
without
doing
some
work
on
our
behalf
or
likewise
is
it
has
to
do
with
how
we
see
if
something's
been
fixed.
D
If
we
get
a
report
that
is
missing
other
vulnerabilities,
then
those
won't
those
will
be
marked
as
fixed
right
and
that
might
be
what
you're
running
into,
and
in
that
case
we
would
need
better
support
to
be
able
to
say
just
scan
these
two
and
then
in
that
diff.
It
would
have
to
make
sure
that
it's
looking
at
the
locations
of
these
files
and
it
gets
a
little
bit
more
complicated,
but
those
comparison
of
those
reports
to
knowing
what's
fixed,
I
think,
is
going
to
probably
be
the
biggest
roadblock
to
this.
I
think.
C
Yeah,
I
think,
if,
if
you
took
just
a
very
very
simple
use
case
and
a
happy
path
where
a
team
is
only
ever
going
to
be
working
on
a
specific
project
as
part
of
an
mr,
then
that's
fine.
It's
you
know.
This
is
a
very
straightforward
thing,
but
it
the
where
the
complication
comes
in
is
if
they
are
touching
different
parts
of
the
the
repo
as
part
of
their
work.
If
they're
a
multi-project
team,
you
know.
E
So
so,
like
one
in
the
case
of
spot
bugs,
you
could
have
a
multi-module
say
so
so
yeah,
let's,
let's
say
you
have
like
a
multi-module
project,
so
your
palm
references,
a
number
of
sub
modules
at
that
point,
a
bit
difficult
to
determine
scope
there
in
terms
of
a
change
within
module,
could
have
a
cross
dependency
between
the
different
modules
so
running
a
single
one.
It
gets
difficult
to
say
this
has
no
greater
side
effects
there.
E
If
we
could
be
certain
that
it
didn't,
then
it
would
be
possible
to
run
like
a
subset
of
an
analysis
per
branch,
so
maybe
because
the
vulnerability
dashboard
at
least
currently
is
based
entirely
off
the
default
branch.
We
could,
in
theory,
do
something
like
a
sub
analysis
of
the
change
files
for
a
feature
branch
and
then
the
pipeline
for
the
default
branch
does
an
entire
analysis.
E
But
again
it's
it's,
just
not
the
it's
fundamentally,
not
the
way
it
works.
Currently
I
I
think
there's
definitely
discussion
there
as
daniel
said
about
improving
that
or
changing
the
way
that
currently
works,
but
with
the
basic
consumption.
Now
it's
all
or
nothing
simply
because
it
gives
us
the
best
understanding
of
whether
there's
side
effects
in
other
parts
of
the
code
base.
A
All
right
I'll
add
on
from
a
few
different
points
of
view,
and
it's
going
to
be
complementary
to
what
you've
heard.
So
I
heard
the
I
heard
a
few
things
that
you
mentioned
the
big
one:
scope
of
control
for
an
individual
team
being
a
subset
of
what
was
there
that
I
agree
with
taylor.
That
is
a
filtering
problem
on
what
we
currently
have
today
for
vulnerability
dashboards
and
those
results.
A
The
the
as
far
as
whether
or
not
and
getting
outside,
of
spot,
bugs,
specifically
and
into
a
sas
tooling
generically
type
of
question
is
like
is
a
per
directory
approach.
Correct
that
depends
on
how
the
tool
works
if
the
tool
is
trying
to
is
working
off
of
an
abstract,
syntax
tree
and
is
working
its
way
through
how
code
and
how
data
perpetuates
itself
through
an
application
than
having
partial
compilation.
Partial
understanding
of
that
is
going
to
be
problematic
without
understanding
what
the
code
paths
look
like
throughout
the
entire
application
itself.
A
The
the
other
aspect
of
this
is
when
we
start
talking
about
monte,
rocos
or
mono
projects
or
multi-project
support
within
a
repository,
the
there's
a
number
of
reasons
why
they
are
set
up
the
only
technical
reason
or
that
would
drive
someone
to
this
particular
setup
that
I
am
aware
of,
and
that
I
have
seen
is
in
a
is
in
a
microservice
architecture,
where
there
is
a
common
or
central,
artifact
or
project
that
is
required
by
every
single
microservice
that
is
deployed.
A
That
must
be
kept
in
sync,
every
single
time
that
that
one
directory
is
updated,
which
would
then
spawn
off
multiple
cds
to
happen
as
a
result
of
this.
So
the
the
per
directory
approach
in
that
particular
case
would
have
would
spawn
everywhere.
If
there
is
a
if
there
is
a
weakness
within
the
that
common
directory,
it
is
going
to
be
a
weakness
that
has
manifested
itself
everywhere
to
which
it
was
deployed.
A
The
the
the
the
other
parts
of
this
that
are
that
are
worth
noting
and
is
an
underlying
assumption
that
I
think,
we've
too
frequently
forget
as
an
organization,
is
that
there
are
two
things
that
will
move
over
the
course
of
a
project's
life
cycle.
One
is
the
code
itself
and
what
vulnerabilities
are
there
and
what
capabilities
are
there
and
how
it
is
refactored.
A
The
other
is
in
the
capabilities
of
the
analyzer,
in
that
it
will
be
able
to
detect
new
things
that
we
were
blind
to
previously,
which
will
uncover
weaknesses
that
that
have
been
there,
but
we
could
not
see
before
so,
even
though
it
is
unrelated
to
the
changes
that
are
there,
it
is
a
weakness
and
a
risk
that
we
now
can
see
and
is
worth
something
that
is
calling
out.
So
it's.
A
C
Yeah
it's
an
idea
that
is
certainly
not
going
to
be
applicable
to
all
teams
or
organizations,
but
but
it's
an
idea
nonetheless,
that
we
thought
might
have
merit,
but,
as
I
say
once
we
got
into
it
and
we
started
thinking
about
how
the
reports
were
generated,
how
the
findings
were
were
you
know,
you
know
distilled
and
brought
to
the
surface?
Then
it
was
like.
Okay.
Actually,
you
know
in
practice,
this
you
know
might
be
a
bit
misleading.
You
know.
A
It's
it's
interesting
to
approach
it
from
an
efficiency
standpoint
where
I
think
you're
trying
to
get
at
is
developer
efficiency,
yeah
you're
talking
about
cycle
time,
but
you're
also
trying
to
hint
at
what
are
the
vulnerabilities
that
were
in
the
scope
of
the
changes
that
I
just
put
forward,
which
is
an
area
of
improvement.
Certainly
it's
there's
some
ideas
that
are
how
we
have
around
that.
But
it's
it's
not
something
that
we
have
had
the
opportunity
to
mature
on
to
this
point,
yeah
a
little
bit
yeah.
B
C
I
was
taylor.
I
was
just
going
to
say
that
I
I
was
just
I
was
just
and-
and
it's
it's
already
been
said-
that's
a
great
point
I
think
filtering
would,
I
think,
there's
a
lot
of
value
in
being
able
to
filter
if
you're,
just
to
be
able
to
kind
of
filter
out
all
the
noise
that
you're
not
necessarily
interested
in
right.
Definitely,
I
think,
there's
value
in
that
how
that
would
look
like
and
what
that
would
look
like
in
an
mr
I'm
not.
C
B
How
do
you
filter
that
on
the
dashboard?
Now
with
that
said,
I
think
we
have
an
interesting
example
of
where
this
kind
of
works
too.
If
you
think
about
our
review
apps
like
when
I
make
a
change
to
our
handbook
page
most
of
the
time
it
can
determine
what
page
I
changed
and
when
I
click
the
view
app
button.
It
takes
me
exactly
to
the
page
that
it
changed,
except
when
it
doesn't
and
then
you're
kind
of
screwed,
and
you
have
to
figure
out
where,
where
was
the
change?
B
How
do
I
get
to
the
file
but
at
the
same
time
it
also
solves
this
problem
where,
but
also
there
are
outstanding
changes
that
me
changing
something
with
a
css
class
will
break
the
front
page
and
so
being
able
to
go
and
see.
Those
things
is
also
something
that,
like
we
have
to
be
careful
on
how
we
change
the
way
the
analyzers
work,
because
if
there
is
an
unintentional
in
impact
that
you
make
in
your
project,
that
now
creates
a
vulnerability
in
the
other
project.
B
B
C
Yeah
yeah,
it's
you
know
we're
at
this,
we're
at
the
at
this
idea
stage.
So
it's
you
know
and
we're
raising
this
with
you
early
these
ideas.
There
might
be
merit
to
these
ideas.
Some
there
may
not
be,
but
I
think
filtering
definitely
sounds
like
it
would
help
a
lot
for
this
particular
scenario.
C
There
may
be
a
benefit
in
in
doing
this
so
as
to
reduce
cycle
time
right,
not
scanning
every
single
project
and
that's
a
harder
one
to
to
to
to
address.
I
think,
but
that's.
B
Another
benefit
there.
It
is
something
we've
got
to
care
about,
because
when
we
hear
customers
complain
about
our
scan
length
time,
it's
almost
always
a
mono
repository,
that's
gigantic.
So
this
is
the
case
where
that
happens,
and
we
have
to
worry
about
scanning
time
again.
It
brings
up
that
same
problem,
though
of
do
you
want
something
fast
or
do
you
want
something
complete
and
you
can't
have
both
of
those
things.
C
So
what
I'll
do
is
we'll
we'll
share
the
project
with
you
and
and
some
of
the
work
that
we've
done
and
you
know
take
it.
Take
it
from
there.
A
Hearing nothing, thanks everybody. This was a great discussion; we very much appreciate you being here. We'll see everybody here, same bat time, same bat channel, next week. See you, bye everybody, thanks.