From YouTube: Secure::Static Analysis office hours for 2020.12.03
A: All righty, so happy Thursday; it is December 3rd. This is our inaugural Static Analysis office hours. I see we've got a few things listed here, and this structure that we have for an agenda is something I was hoping to keep. So we'll start off with demos. We're intending these to be non-technical demos, to drive some questions. We've got one that we want to show real quick, and that's monorepo support.
B: Yep, thanks Thomas. So I will share my screen. Folks can hear me, right? Just give me a moment.
B: Sure, yeah. So with the NodeJsScan analyzer, before, it required a package.json file to be at the root of the project. So if customers or users had nested Node.js apps, the scan would not kick off, right? So if they had a project, you know, one level down, the scan would also not kick off. So the change was... it's a real simple change.
B: Now we support NodeJsScan scans for multiple repos, or sorry, multiple projects within a repo, right? This might be better illustrated if I share my screen. Okay.
B: Firefox, okay. So I set up a demo project, and this is an example of a Node.js app.
B: So, looking at the GitLab CI, we're just including SAST. And if we look at the latest pipeline that ran for this branch, you'll see that the only thing that ran was eslint, and that's because eslint matches if there's any JavaScript code, so it will run. But there is no NodeJsScan...
B
Jobs
here,
so
what
we
did
was
changed,
so
I'm
going
to
the
master
branch
now
with
the
change
that
is
currently
in
a
maintainer
review.
Right
now
for
the
sas
lender
templates
and
essentially,
we
just
changed
the
rule
to
include
a
double
wild
card
for
the
package.json
club,
and
so
this
will
tell
the
the
analyzer
okay,
if
you
encounter
package.json
file
anywhere
in
in
the
repo
kickoff
and
no
jscan
scan.
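The rule change described here amounts to adding a double-star glob to the job's `exists` rule in the SAST template. A sketch of the shape (the job name and surrounding rule details here are illustrative, not the exact template contents):

```yaml
nodejs-scan-sast:
  extends: .sast-analyzer
  rules:
    - if: $SAST_DISABLED
      when: never
    - if: $CI_COMMIT_BRANCH
      # Previously this was 'package.json', which only matched a file at
      # the repository root. The double wildcard matches a package.json
      # anywhere in the repo, so nested Node.js projects in a monorepo
      # trigger the scan too.
      exists:
        - '**/package.json'
```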
B
And
so,
if
we
look
at
the
most
recent
pipeline
for
this
branch,
we'll
see
that
okay,
that
job
was
kicked
off
oops.
And
then,
if
we
look
at
the
security
dashboard,
we
can
see
there's
app
one
vulnerabilities
and
have
two
vulnerabilities.
So
this
is
how
we
support
node.js
scan
for
mono,
repos
yeah
and
that's
the
end
of
the
demo.
F
There
any
questions
on
that.
Would
you
mind
dt
santa
cruz,
california
solution
architect?
Would
you
mind
going
back
to
the
your
config
file
for
that
which
is
kind
of
the
get.
B: Yep. So, for reference, what we have right now: this is the job and the rule for NodeJsScan right now. And so you'll see...
F
I'm
know
a
lot
of
questions
from
customers
around
like
defining
pipelines
and
and
the
rules
engine
is,
is
coming
out.
It's
it's
working
out
great
for
conversations,
but
this
is
a
great
example.
So
thank
you.
E
Yeah,
do
we
have
david
tosher?
Do
we
have
a
list
of
those
those
rules
that,
like
you,
have
there
for
the
node.js
for
all
the
languages?
Is
that
well
documented
somewhere.
E: Is that documented anywhere where, like, for Maven you should have these variables set, or have this stage? Or is it automatic after just adding the include?
B
So
I
mean
the
as
soon
as
the
there
there's
an
mrn
right
now,
specifically
for
the
node.js
scan
one.
As
far
as
the
the
other
ones
go.
I
mean.
A
Let
me
try
to
translate
real
quick,
so
there's
there's
two
questions
that
I'm
seeing
number
one.
What
do
we
have
to
do
for
mono
repo
support
with
a
node.js
scan?
Specifically,
what
you
saw
here,
which
you
saw
zach
demo
was
the
was,
is
the
crux
of
a
change?
That's
going
to
be
merged
into
the
sas
vendor
template.
It
is
not
a
recipe
that
needs
to
be
applied
for
all
customers
to
get
support.
A
Zach
correct
me.
If
I
made
a
mistake.
A
Okay,
so
in
13
7,
including
the
vendor,
template
you
get
it.
So
that's
so
that's!
So
that's
that's
the
one
thing
I
want
to
call
out.
The
other
question
that
was
asked
was:
is
there
documentation
for
the
rules,
syntax
that
is
used
globally
within
gitlab
ci
configuration?
Is
that
a
correct,
restatement
of
the
question.
E
Well,
specifically,
to
make
the
scans
work
like
is:
is
it
because
you
read
the
documentation
that
says:
hey
just
add
the
include
sas
yemel,
you
know
and
and
it
works
and
for
the
most
part
it
does.
But
I
was
wondering
if
maybe
because
I
am
currently
getting
a
failure
in
a
maven
that
if
I
need
to
put
some
buildups
into
like
a
sas
job
or
how
that
correlates,
because
or
if
it's
just
supposed
to
work,.
E
That
does
that
does
make
sense.
Okay,
I
I
think
I
will
need
to
know
the
format
of
inserting
those
like,
for
example,
in
an
earlier
stage,
we're
already
building
it.
So
is
there
a
way
to
just
pull
that
object
off
of
cache
or
do
we
need
to
go
through
the
whole,
build
stage
inside
the
analyzer
image
as
it's
analyzing
and
then
and
then
scan.
G: Yeah, and this is where I'll step in again and say that the pattern that we want customers to be using here is to include our SAST CI template and then to override it with a custom CI configuration where you can modify those variables.
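The include-and-override pattern described here looks roughly like this in a project's `.gitlab-ci.yml` (the variable values below are illustrative; check the SAST documentation for the variables a given analyzer actually honors):

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml

# Override the template's SpotBugs job to pass build settings,
# e.g. for a Maven project that needs a specific Java version.
spotbugs-sast:
  variables:
    SAST_JAVA_VERSION: "11"
    MAVEN_CLI_OPTS: "--batch-mode -DskipTests"
```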
H: Taylor's point might be more interesting, yeah.
H: Yeah, hi. I wanted to stop by and talk about Docker-in-Docker, because I had heard through, you know, the rumor mill that this team had some issues with privileged mode and using Docker-in-Docker. We're running into the same thing, and Lucas has responded a little bit already. We've thought about trying to not use Docker-in-Docker either; there's a little bit of a sort of abiding sense of, like...
H
Could
it
be
simpler
if
we
don't
use
docker
and
docker
anyway,
but
code
climate
publishes
analysis
tools
that
we
use
only
as
a
docker
image.
So
we've
talked
about
trying
to
use
this
like
the
shell
executor
that
can
solve
some
other
caching
problems,
but
we're
not
sure
if
we
can
get
away
from
needing
to
run
privileged
mode
for
code
climate
analysis,
because
code
climate
will
always
be
available
as
a
docker
image.
I
Yes,
so
just
to
vocalize
what
I
had
here,
we
eventually
dropped
docker
and
docker
13-0,
which
was
much
easier
for
us,
and
the
primary
reason
we
had
in
the
first
place
is
because
we
we
also
wrapped
a
bunch
of
our
own
containers.
So
in
a
similar
way,
we
broke
each
one
of
our
individual
containers
at
separate
jobs,
and
so
it
would
be
the
equivalent
having
a
code,
quality,
ruby
job
and
a
code
quality
java
job.
And
then
that
case
we
could
actually
use
the
specific
image
as
the
baseline
container.
I
So
we've
talked
about
this
in
a
lot
of
different
ways,
and
one
of
the
big
issues
that
we've
had
with
a
lot
of
build
environments
is
for
complex,
builds
like
monorepo
builds.
We
try
to
kind
of
invert
this
paradigm
where
we're
we're
looking
at
publishing
our
analyzers
as
packages,
and
so
individuals
can
set
the
base
image
for
their
sas
job
and
then
have
like
a
curl
down
for
our
specific
package
into
it.
I
So
if
there's
kind
of
like
a
inversion
of
control
that
you
can
do
as
well,
I
I
don't
know
too
much
about
the
internals
of
code
climate.
To
do
that.
The
the
other
thing
that
we
talked
about
is
setting
up
like
separate
services
as
well.
So
you
could
have
a
if
it
would
be
possible
to
set
up
these
containers
as
services
and
then
rely
on
them
instead
of
docker
and
docker,
and
I
mean
service
in
terms
of
the
ci
construct
of
service.
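For reference, the CI `services` construct mentioned here attaches extra containers alongside the job's container, reachable over the network by alias. A minimal sketch (the engine image and script are placeholders; whether Code Climate's engines can actually be run this way is the open question in this discussion):

```yaml
code_quality:
  image: alpine:3.12
  services:
    # A service container runs next to the job container; the job can
    # reach it by its alias without docker-in-docker or privileged mode.
    - name: registry.example.com/analysis-engine:latest  # placeholder
      alias: engine
  script:
    - ./run-analysis --engine-host engine  # placeholder command
```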
I
Yeah
so
so
I
guess
I
don't
know
too
much
about
how
code
climate
does
does
this
step.
But
if
it's
fetching,
what
is
the
interaction
between
the
actual
say
the?
Is
it
the
ruby
code,
climate
image
and.
H
Code
so
code
climate
publishes
an
image
that
does
they
publish
an
image
that
does
some
quick
static
analysis
to
determine
which
engines
which
are
also
published
as
docker
images.
It
should
download,
and
then
it
goes
to
docker
hub
and
it
says:
okay,
you
have
a
rails
project,
we're
gonna,
take
code,
climate,
ruby
and
code
climate
rubocop,
we're
gonna,
install
those
engines
and
run
our
analysis.
H
So
they
have
a
sort
of
like
a
base
image
that
then
just
does
analysis
and
decides
how
to
fetch
other
analysis
engines
to
run
that
this
would
be
at
least
difficult
and
probably
unwise
for
us
to
try
to
like
unschool
and
we're
we've.
We've
talked
about
things
like
just
pre-installing
everything
and
publishing
a
30,
gigabyte
image
or
something,
but
like
people
can
cache
it
locally.
Then
it's
kind
of
whatever.
I
Yeah
we
used
to
do
this.
We
used
this
as
well
with
the
we
would.
We
have
like
a
our
previous
orchestrator
would
use
a
file
detection
to
say
if
there's
package,
json
file
pull
our
sas
node.js
analyzer
and
we
were
able
to
replace
that
with
rules.
So
that's
how
we
managed
to
break
out
into
separate
jobs
there,
so
the
actual
job
that
relies
on
whatever
the
base
image
is
there
for
doing.
I
The
analysis
would
rely
on
the
rules
that
matches
that
it's
a
bit
hacky
to
have
to
rely
on
rules,
because
it's
not
that
sophisticated
for
doing
those
things
yeah,
but
I
think
the
biggest
issue
there
being
you'd
have
to
convert
it
to
the
specific
report.
Format
too,.
H
Yeah,
but
I
so
to
the
original
security
problem.
I
don't
know
if
that
frees
us
from
needing
docker
and
privilege
mode
on
the
runner.
Do
you
think
it
would?
I
think.
I
It
would
if
so,
if,
if
I'm
understanding
this
correctly,
if
you
had
a
code
quality,
ruby
job
whose
base
image
is
the
fetched
image,
that
code
climate
would
normally
fetch,
or
rather
maybe
it's
maybe
there
is
a
docker
file
that
uses
that
as
the
base
image
and
then
that
allows
you
to
do
whatever
custom
behavior.
On
top
of
that,
then
the
actual
code,
climate
logic
that
fetches
that
image
can
be
used
done
using
rules
at
that
point,
there's
no
docker
in
docker,
it's
just
using
the
base
image
and
executing.
A
One
thing
that
we'll
call
out
when
we
were
trying
to
decide
how
we're
going
to
do
this
and
the
transitions
whether
it
was
the
deprecation
and
archiving
of
the
orchestrator
in
favor
of
the
vendor,
template
and
doing
file
and
doing
detection
there
is
that
it
was
not
a
one-to-one
replacement.
There
was
a
lot
of
conversation
on.
Is
it
good
enough
that
it's
80
90
95
of
a
replacement,
and
can
we
go
with
that?
And
that
was
a
decision
that
we
were
able?
A
That
was
a
conversation
that
we
had
to
work
through
and
make
that
evaluation
and
that's
what
we
ultimately
did,
because
getting
rid
getting
away
from
docker
and
docker
was
becoming
imperative
for
us
and
so
just
to
call
it
out
right
now.
What
you
may
get
may
not
be
exact,
but
the
question
is
for
us
was:
is
it
good
enough
and
maybe
that's
a
maybe
that's
a
reframing
of
the
conversation
within
quote
quality.
A
All
right,
david
you've
got
one,
that's
fun,
so
I
know
where
this
is
coming
from
all
right.
E
Yeah
so
basically
thomas
you
and
I
have
had
conversations
before
that-
you
were
gonna-
create
a
process
in
which
customers
who
find
vulnerabilities
in
our
images
all
of
our
images.
But
in
this
particular
case
it's
actually
a
secure
image.
How
we
could
intake
that
and
if
that
process
has
been
created
and
documented
as
of
yet,
and
if
not
can
I
do
today.
A: The containers themselves, okay. The fun part about security tools, and I'm preaching a little bit here, so apologies, is that all of them are going to find different things, and they're provided without context. Not every vulnerability is applicable to every context, and that's the challenge that we're running into here, or at least that's the challenge that I'm running into: there may be a CVE for it, but it is only applicable in a durable web application context. Well, that's not what we run in, and so what you're...
E: That is part of it. Or if they actually find a legitimate vulnerability. Because we are not the owners of a lot of these projects; there are other communities as well, and so it's very possible that we are...
E
There
are
other
scanners
that
pick
up
true
vulnerabilities
as
well,
so
I
mean
until
we
know
that
this
cv
cd
is
not
pertainable
to
this
use
case.
That's
definitely.
One
thing
is
a
report
to
be
able
to
say.
Yes,
this
scanner
comes
up
with
these
these
vulnerabilities,
but
you
know
this
is
why
they
don't
work
in
context,
but
the
other
part
is
an
intake
if
there's
an
actual
vulnerability,
so
that
we
can
handle
our
security
quickly.
You
know
much
more
fast,
rapidly
and
agile.
A
Okay,
so
I'm
going
to
give
you
a
partial
answer
right
now,
both
of
which
are
related
to
my
q4
okrs
number.
One
is
an
increase
in
dog
fooding
that
we
are
doing
within
secure
itself,
and
the
reason
that
I'm
calling
that
out
is
that
this
is
involving
a
mirroring
of
those
open
source
projects
that
we
rely
upon,
so
that
we
can
run
our
own
scans
upon
them.
A
So
this
is
a
step
that
we
are
not
currently
taking
it's
new,
it's
something
that
we're
standing
up
today.
There
was
a
specific
question
around
one
analyzer
in
slack
about
and
and
I'll
I'll
write.
What
that
is
here
in
just
a
minute
or
link
over
to
that
thread
in
just
in
just
a
minute
rather
than
vocalizing
it.
A
So
so
that's
the
partial
answer
that
I
have
in
that
we're
increasing
our
owns,
we're
using
our
own
tools
against
the
very
things
that
we
rely
upon,
so
that
we
can
make
them
better,
and
so
that
should
make
us
more
proactive
and
also
provide
us
with
a
mechanism
of
choosing
when
and
what
we're
going
to
include
and
then
therefore
ship.
A
So
that's
step.
One
step
two
is
a
continuation
of
an
of
an
ongoing
effort.
That
is
really
something
that
we've
that
we
have
deservedly
needed
to
have
our
hand
slapped
for
and
that
we
were
not
managing
the
security
dashboards
on
the
very
analyzer
projects
themselves.
A
We're
doing
that
and
it
needed
and
that's
lagged
over
the
past
few
weeks.
But
it's
we're
going
to
we're
going
to
continue
that
effort
so
that
when
we
do
dismiss
a
specific
cve,
we
can
hand
you
the
dismiss
vulnerability
with
rationale
as
to
why
it
has
been
dismissed,
and
that
should
be
the
documentation
that
we
would
start
with
and
if
there's
more
needed
than
that,
then
that
at
least
provides
us
a
jumping
off
point
for
what
we're
discussing.
G: Awesome. Well, thanks everyone for attending. I'm surprised we had such a good showing for this first attempt. We're going to be holding this weekly, so this is the place to bring your questions: if you want to see how a feature works, if you want to talk through how a feature works. We'll be giving demos of new things that we're working on, so tell your friends, tell your colleagues, tell your parents. Come join us and chat about static analysis and all of our other Secure features. Thanks for joining, and we'll see you next week.