From YouTube: Sec Section Strategy Review - September 2022
A
Hello everyone, and welcome to the fiscal year '23 Q3 Sec Section strategy review. Today we're going to walk you through some recent updates to our org structure and a summary of increased investments across the section, and then we'll also walk you through what we've delivered over the last couple of quarters and what's coming up over the next couple of quarters in terms of product functionality.
A
It's a broad topic with many moving pieces across the entire SDLC for a lot of folks right now. Supply chain security is very much about managing upstream dependencies, and that's the first thing that many people are really grappling with, so we knew that we needed to invest more there as well. As a result, we've established the Govern stage to focus on exactly that.
Establishing this stage comes with a handful of organizational changes, which you can see summarized here. That's the previous slide. Actually, Sam, yeah, there you go. So, to summarize the changes that we made: we deprecated the Protect stage and created the Govern stage in its place, and we made a handful of team and scope changes that will organize us in a way that lends itself better to providing this holistic solution.
A
Those changes include moving the Manage Compliance group to the Govern stage, so that our security and compliance visions can be better aligned and executed with one another. We also moved the Secure Threat Insights group to the Govern stage. We created a new category called Dependency Management within Govern Threat Insights; the scope of that category includes portions of what was in Dependency Scanning and Composition Analysis. You can think of the new Dependency Management category as the piece of functionality that ingests dependency scanning data and findings, as well as the user-facing workflows. Both of those categories still exist, but they now mirror the same model that we have with the other Secure analyzer groups.
A
We also moved the Container Scanning category from Protect to Composition Analysis, renamed the Container Security group to Security Policies, and renamed Security Orchestration to Security Policy Management. Next slide. That's a lot to keep track of, so some visuals might help. This is the org structure that you all knew and loved before we made these changes: you'll see that Threat Insights falls under Secure here, and Protect had just the one group, called Container Security. Next slide.
A
This summarizes how all of the categories fall out at the end of the day. I won't go through all of these in detail again, but the slide is here for your reference, just as a visual for later. Next slide. Some important things to point out in terms of impact, you know, what does this do for all of us: the first thing to highlight is that these changes did not result in any deprecation of capabilities from the product.
A
I know there was a lot of content on organizational changes, but hopefully it was helpful. We'll be happy to take any questions that you have in our live Q&A on that. And with that, I will hand it off to Sam to get us started on recent product highlights and what's upcoming. Sam, over to you.
B
Generally, we've been trying to minimize many of our investments, particularly in our License Compliance solution, because we're actually planning to replace it with something considerably more advanced moving forward. So I want to really focus in on some of the big work that the team has been doing behind the scenes, even though we haven't had a ton of huge new features in the last three to six months.
B
These underlying architectural changes are going to make some pretty big differences in our team's velocity moving forward, and in the kinds of features and capabilities that we can deliver. Coming up, we have a couple of features that I'll highlight briefly before getting into the architectural rework. First of all, we've been working on operational container scanning for quite a long time at this point, and we're getting very close to delivering that as a general availability release.
B
The underlying architectural work that I was referring to involves changing our container scanning and dependency scanning analyzers to output a CycloneDX-formatted artifact file. Some of that work has already been done, so our dependency scanner can already output a file that's an SBOM.
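For a concrete sense of what that artifact enables, here's a minimal sketch (not GitLab's actual ingestion code) that pulls the component list out of a CycloneDX JSON SBOM; the file layout follows the public CycloneDX spec:

```python
import json

def list_components(sbom_json: str):
    """Extract (name, version) pairs from a CycloneDX JSON SBOM."""
    sbom = json.loads(sbom_json)
    # CycloneDX stores discovered packages under the top-level "components" key.
    return [(c.get("name"), c.get("version")) for c in sbom.get("components", [])]

example = json.dumps({
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {"type": "library", "name": "rails", "version": "6.1.4"},
        {"type": "library", "name": "nokogiri", "version": "1.13.6"},
    ],
})
print(list_components(example))  # -> [('rails', '6.1.4'), ('nokogiri', '1.13.6')]
```

Because the SBOM is a standard format, anything downstream (license matching, advisory matching) can key off the same component list.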
B
This means that they'll no longer need to do that going forward, because we'll be able to receive new advisories and then automatically rescan everything that uses one of those components or versions, and update the vulnerabilities automatically. It really streamlines our workflow, and it opens the path to being able to do things like alerting when a new vulnerability is introduced on the default branch.
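To illustrate the idea (the data shapes here are hypothetical, not GitLab's schema), matching a new advisory against stored SBOMs to decide what to rescan might look like:

```python
def projects_to_rescan(advisory, sbom_index):
    """Find projects whose SBOM contains a component version named in a new advisory.

    advisory: {"package": str, "affected_versions": set of version strings}
    sbom_index: mapping of project path -> list of (name, version) components
    """
    hits = []
    for project, components in sbom_index.items():
        for name, version in components:
            if name == advisory["package"] and version in advisory["affected_versions"]:
                hits.append(project)
                break  # one match is enough to queue a rescan for this project
    return hits

sbom_index = {
    "group/app-a": [("log4j-core", "2.14.1"), ("guava", "31.0")],
    "group/app-b": [("guava", "31.0")],
}
advisory = {"package": "log4j-core", "affected_versions": {"2.14.0", "2.14.1"}}
print(projects_to_rescan(advisory, sbom_index))  # -> ['group/app-a']
```

The key design point is that the pipeline no longer has to re-run a scanner to learn about a new advisory; the stored SBOM is enough.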
Just to wrap it up with License Finder: I mentioned we're planning to replace that current solution.
B
Once again, we plan to do that matching job in the Rails backend, synchronously on change, so that those licenses are always kept up to date with the latest information. That will allow us to deprecate and remove our current License Finder implementation, which will be one less job that customers have to run in their pipeline.
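Conceptually, that matching job is a lookup from SBOM components to known license data. A toy sketch, with a made-up in-memory table standing in for the real license metadata:

```python
# Made-up license database keyed by (package, version); in the design described
# above, this data would live server-side and stay synced with upstream sources.
LICENSE_DB = {
    ("rails", "6.1.4"): "MIT",
    ("nokogiri", "1.13.6"): "MIT",
}

def match_licenses(components):
    """Resolve each SBOM component to a license, defaulting to 'unknown'."""
    return {name: LICENSE_DB.get((name, version), "unknown")
            for name, version in components}

result = match_licenses([("rails", "6.1.4"), ("left-pad", "1.3.0")])
print(result)  # -> {'rails': 'MIT', 'left-pad': 'unknown'}
```

Running the lookup server-side on change is what removes the extra pipeline job: the license view refreshes whenever either the SBOM or the license data changes.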
C
Great, thanks Tim. My name is Connie Gilbert, and I am the product manager for Static Analysis, which covers three categories: SAST, secret detection, and code quality. I call out IaC scanning separately as well, because it is a little bit different from the other ones. So I'll talk to each of those categories today, covering the main highlights that we have from the previous quarters and where we're going.
C
We did pause a little bit during the last few months to focus on bugs and customer issues. We hope to have a better handle on those going forward, because we maintain so many analyzers and so much surface area across all the programming languages and all the types of things people could scan.
C
We're investing both in proactive things to fix that, such as working on Semgrep-based scanning, as well as in reactive bug fixing. So, some key highlights from the past six months or so: we're really excited about the new Semgrep-based scanning for Java. As a reminder, when we convert from an existing open source analyzer to Semgrep, that means we have taken a look at all the rules, evaluated them, and translated them over, so we have nearly equivalent coverage; we've tested them, and we maintain them.
C
This is in collaboration with our vulnerability research team. The result is that Java scanning no longer requires a build, which was the main source of customer issues for SAST, and jobs run about seven to eight times faster, going from 40 minutes down to five, for instance.
C
That's the recently introduced offering, so we've had really great uptake, both internally and externally, and that means there have been some customer issues. We've been looking at resolving those directly, and also at improving what I call the DIY debugging experience: giving more information in the log when something fails, and making sure we're giving customers the ability to self-remediate if they can, before they have to raise a support ticket or open an issue.
C
Finally, for code quality: code quality is a recent addition to the section, and as part of re-evaluating the technology that is used for it, we have completed UX research to validate what people are looking for in their code quality solutions. That is helping us define a new scanning system, which we will be focusing on in the coming months. We also updated the MR widget to match the new framework and refined the inline diffs.
C
This is a really cool feature where you can see the actual code quality findings while you're reviewing a merge request, so you can see the problems right there as a reviewer or as an author. We can now move on to what's coming, and for this I'll cover all the categories again.
So, more on that: we're working on more Semgrep-based scanning conversions. Again, that's taking an existing analyzer, looking at the rules, and converting them over, making sure we have defensible coverage that we stand behind. The first-priority languages are C# and Scala; PHP doesn't have a date yet but is the likely next candidate, and we welcome feedback on that, especially from customers who are actively using PHP today. We will also be looking at expanding our proprietary technology into real product features: we've been building a new language front end and working on the platform there, and we want to integrate that more throughout the product. And we'll be applying the code quality inline diff UI to SAST findings.
C
It may take a little while, but that is our main focus on the UI side. Now, for secret detection: we're nearly there with a feature, including some community contributions, to automatically revoke GitLab tokens when they're exposed. On GitLab.com, if we detect a GitLab token, we can automatically send it in for remediation so that it's revoked and is no longer a threat to the instance.
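As a rough illustration of the detection half (the `glpat-` prefix on GitLab personal access tokens is what makes reliable matching possible; the pattern here is simplified, and the actual revocation call is omitted):

```python
import re

# Simplified: GitLab personal access tokens carry the "glpat-" prefix.
GITLAB_PAT = re.compile(r"glpat-[0-9A-Za-z_\-]{20}")

def find_exposed_tokens(text: str):
    """Return candidate GitLab tokens found in committed text."""
    return GITLAB_PAT.findall(text)

commit = 'TOKEN = "glpat-ABCDEFGHIJKLMNOPQRST"  # accidentally committed'
exposed = find_exposed_tokens(commit)
for token in exposed:
    # In the real flow, each finding would be queued for revocation.
    print("would revoke:", token)
```

Revoking automatically rather than just reporting is what turns the finding from a ticket into a closed risk.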
C
We are also directionally looking to cover more of GitLab beyond just code bases, and to protect users by default, without requiring opt-in in a pipeline. For IaC scanning, the main visible change will be around rules. We understand that new rules are appearing in different versions, and we want to make sure we have a handle on that experience for customers; we've made some recent improvements there, for instance removing some duplicative secret detection rules. And for code quality, the main focus is going to be delivering that new scan ingestion system.
C
The details will be fleshed out as we work through the technical architecture and define what's possible, matching that up with our UX research findings. There's a lot of really exciting stuff across all of the categories and product offerings in static analysis, but that's the main direction we're heading toward. I'll now pass off to Derek for dynamic analysis.
D
Awesome, thank you Connie. I'm Derek Ferguson, and I'll be going over the dynamic analysis area. So, what we did in the last six months: the first thing to point out is that we reached Complete maturity for DAST.
D
We think that where we are right now, with the addition of aggregated DAST findings and the reduction of false positives, we're in a place where we can call it Complete, and we're working on attaining the next level of maturity, which would be Lovable. So looking forward, we'll be working on things to improve that.
D
Looking at what we've done with the DAST configuration UI: we redesigned it to make it easier to create profiles for both on-demand scans and CI/CD scans, so you can create those profiles inline rather than having to jump over to multiple different pages to create them and then add them to your scans.
D
All of those are available now for the opt-in beta that we're running. We added the crawl graph artifact, so we can show you, in an SVG artifact, what we have crawled on your application, so that you can see what coverage your test has on your website. We also added some improvements around authentication, adding in some different types of authentication (basic and digest), as well as fixing a bunch of bugs and making performance improvements.
D
Moving on to API security: we've really been focused on speeding up the scan time. We're now in a place where we can take advantage of runners that have multiple CPUs assigned to them, spreading the tests across those CPUs so that we can run a whole lot faster. We've seen speeds of four to five times what we had before, depending on the number of CPUs added to the runner. We also created a FIPS-compliant API security image.
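The speedup comes from partitioning test cases across the runner's CPUs. A minimal sketch of that partitioning step (the real engine's scheduling is more sophisticated):

```python
import os

def split_across_cpus(cases, cpus=None):
    """Partition test cases into one chunk per CPU, round-robin."""
    cpus = cpus or os.cpu_count() or 1
    chunks = [[] for _ in range(cpus)]
    for i, case in enumerate(cases):
        chunks[i % cpus].append(case)
    return chunks

# 10 cases spread over a 4-CPU runner: chunk sizes balance to within one case.
chunks = split_across_cpus([f"case-{i}" for i in range(10)], cpus=4)
print([len(c) for c in chunks])  # -> [3, 3, 2, 2]
```

With roughly equal chunks running in parallel, wall-clock time scales close to the CPU count, which lines up with the four-to-five-times speedups mentioned above.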
D
For coverage-guided fuzzing, we added corpus management to help with future testing. When you run a coverage-guided fuzz test, it will automatically upload the corpus to this management UI, so that you can easily update those and keep track of what's going on with your coverage-guided fuzz tests. All right, looking at what we have coming up in the roadmap for DAST: we are going to be focused completely on browser-based DAST.
D
We have been working on the active vulnerability checks; once all of those are ready for use, we'll move on to cleaning up a few things so that we can GA browser-based DAST.
D
For API security, right now we're working on delivering GraphQL schema support, so that we can automatically look at your GraphQL APIs rather than requiring a Postman collection or HAR file to define those tests. Then we'll be making our first foray into auto-discovery for APIs, by looking at Java Spring Boot REST APIs and auto-generating the OpenAPI spec for them, so that you don't actually have to do anything to configure those tests.
D
And then, for on-demand testing, we will be redesigning the profile library so that we can support additional types of tests. The first thing we'll be looking at with that will be splitting off the DAST API tests into their own type, which will also use the new API security engine, and then also adding API fuzz testing into on-demand.
E
Great, thanks Derek. For the newly renamed Govern Threat Insights group, over the last six months we had two large features that are worth calling attention to. The first is manually creating a vulnerability record. This work was actually split into two parts: the first was a new GraphQL mutation that allows programmatically creating a vulnerability record on any project.
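A sketch of what calling that mutation could look like; the mutation name follows the feature described here, but the argument list is abbreviated and illustrative rather than the full schema:

```python
import json

def build_vulnerability_create(project_gid: str, name: str, severity: str) -> dict:
    """Build a GraphQL request body for manually creating a vulnerability record.

    Illustrative only: real clients would POST this body to the GraphQL
    endpoint with an auth token, and the full input type has more fields.
    """
    query = """
    mutation($project: ID!, $name: String!, $severity: String!) {
      vulnerabilityCreate(input: {project: $project, name: $name, severity: $severity}) {
        errors
      }
    }
    """
    return {
        "query": query,
        "variables": {"project": project_gid, "name": name, "severity": severity},
    }

body = build_vulnerability_create("gid://gitlab/Project/1", "Leaked credential", "HIGH")
print(json.dumps(body["variables"]))
```

Because it's a plain GraphQL mutation, any external tool (a pentest report importer, a bug bounty integration) can feed findings into the same record store the scanners use.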
E
The other thing it does for us is that it continues to leverage what we've already got in vulnerability management today as the single source of truth for all risk that we've located in the application. So it's now scanners plus any other sources that you can pull in; the workflows are the same, and the only change is the source of that information.
E
So this is a great way to not just keep the shift-left momentum inside of your organization, but to actually empower that shift left. Beyond that, the majority of our focus has actually been on stability and what we're calling enterprise readiness. Some of these are bugs, but a lot more of it is about maintaining the scale at which we're seeing our security tools utilized.
E
Over the past year, we've seen an explosion of uptake on both our SaaS offering and in the size of some of the self-managed instances. We're re-architecting to make sure that we can not just maintain what we've got today but scale into the future, and this is also going to unlock a lot of opportunities for us to expand our feature set more rapidly as well. Ready to talk about what's coming next?
E
Today that requires manual intervention by a person: they would have to go in and change the status to Resolved, either individually or through a bulk action. This is going to leverage our existing scan result policy builder and create a new policy type, which will allow control over where and when you want to automatically mark vulnerabilities as resolved once our analyzers say they're no longer detected in the code base. We think this is going to be a huge operational time savings for teams triaging vulnerabilities.
E
On the enhanced vulnerability filtering side, you can now pick, specifically, the Semgrep analyzer, and this is also going to lay the groundwork for what we eventually hope to be a search-plus-filter experience very similar to what you get in issue and MR lists today. Now I'm going to pass it to myself for dependency management. All right, so this is a new category, and there's not a lot to be shown here yet, but if you are familiar with our existing vulnerability management capabilities, a lot of what we're talking about here is going to be analogous.
E
You can imagine that as you start to aggregate dependencies up, the higher you go, the more you're going to have to look through, so that's going to be really key to the usability. But it also unlocks things like discovery goals. For instance, in that last example, if I want to answer the question "show me all the dependencies across my entire GitLab instance where I have log4j", potentially we'll be able to answer that.
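That kind of instance-wide query becomes a simple lookup once dependencies are aggregated into one index. A toy sketch with hypothetical project data:

```python
# Hypothetical aggregated dependency index: project path -> component names.
instance_deps = {
    "group-a/payments": ["log4j-core", "spring-web"],
    "group-a/frontend": ["react", "lodash"],
    "group-b/etl": ["log4j-core", "pandas"],
}

def projects_using(package: str, index) -> list:
    """Answer: which projects anywhere on the instance use this package?"""
    return sorted(p for p, deps in index.items() if package in deps)

print(projects_using("log4j-core", instance_deps))
# -> ['group-a/payments', 'group-b/etl']
```

This is exactly the log4j-style incident question: without aggregation you'd have to check every project's pipeline output one by one.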
B
All right, so the Security Policies group has been hard at work. We added scan result policies in 14.8, which replaced the previous vulnerability-check capability, and in 14.9 we added a rule mode to make it easier for users to edit those policies in the UI, without having to learn the specific YAML syntax of the policy editor. We've followed that up with a number of UX updates, and we've also streamlined it with the ability to view those policies in the merge request settings.
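To make the rule-mode idea concrete, here's a toy structural check over a policy shaped like a scan result policy; the field names echo the documented schema but are simplified, and this is not GitLab's validator:

```python
# Simplified scan result policy, mirroring the YAML shape that rule mode edits.
policy = {
    "name": "Block critical findings",
    "rules": [{
        "type": "scan_finding",
        "scanners": ["dependency_scanning"],
        "severity_levels": ["critical"],
        "vulnerabilities_allowed": 0,
    }],
    "actions": [{"type": "require_approval", "approvals_required": 1}],
}

def validate(policy: dict) -> list:
    """Return a list of human-readable problems; empty if the policy looks sane."""
    problems = []
    for key in ("name", "rules", "actions"):
        if key not in policy:
            problems.append(f"missing required key: {key}")
    for rule in policy.get("rules", []):
        if rule.get("type") != "scan_finding":
            problems.append("rules must be of type scan_finding")
    return problems

print(validate(policy))  # -> []
```

Rule mode's appeal is exactly this: the UI edits the structured fields and surfaces problems, so users never have to hand-write the YAML.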
B
That way, you can still see all of your merge request approvals in one place, without having to go out to the security policy editor. On the scan execution side of things, we added group-level scan execution policies in 15.2, and we added the ability to have interactive inline policy validation as you're editing the YAML for these security policies. Going forward, we plan to continue expanding on what we have by introducing scan result policies at the group level.
B
Also going forward, we plan to create a new kind of policy called a license approval policy. This will basically be another version of what is currently a scan result policy, but the intention is that it will be able to replace what we have today with License Check, just like when we replaced vulnerability check with scan result policies.
B
We anticipate that this is going to add a lot of additional features for users, such as the ability to gate their policies with a two-step approval process, to have a full change log of all changes made to the policies, and also to have separation of duties, so that only the security and compliance teams can manage those policies, instead of letting developers also change those rules. Also going forward, we plan to add support for dependency scanning; right now, that's not an option in your scan execution policies.
B
We also plan to support role-based approvers, so that you can allow all maintainers or all developers to approve a merge request when certain criteria are met. And then we plan to expand our scan result policies to add a number of additional filters specific to user needs, such as the age of a vulnerability.
B
How
long
ago
it
was
initially
reported
or
other
criteria
like
that,
just
to
allow
for
more
fine-grained
criteria
that
triggers
those
approvals
and
then,
lastly,
we
plan
to
give
users
the
ability
to
limit
group
policies
to
only
apply
to
projects
that
have
a
specified
compliance
framework
label,
whereas
those
apply
to
all
projects
by
default.
So
again,
with
the
increased
investment
here,
we
plan
to
expand
out
our
security
policy
editor
rather
significantly
going
forward
with
that
I'll
turn
it
back
over
to
hillary.