From YouTube: Secure Stage Strategy Review - April 2023
A: You know, security is quite a broad market, and in terms of analyst coverage and other ways of thinking about the market, we're actually covering lots of different areas: application security testing, governance, risk, and compliance (GRC), and ASOC. You can see that they kind of overlap; there are a lot of different tasks that apply to different roles.
A: We can't solve all of these things, but we do target application security testing directly, we do target parts of GRC, and we partially play in the ASOC market as well, by consolidating vulnerability data into our platform. So there are some places where we're working today, places where we'd consider moving in the future, and places where we're not actually intending to displace entire players. It's a complicated world.
A: But these different personas, teams, and industry segments are worth knowing, so you can contextualize what we do and what we don't do. In terms of where we're going with this, the ways that we can create value are both short-term and longer-term, as they often are. So, in the short term, we're really focused on improving SAST and secret detection, the static analysis work, and also the software composition analysis scanning capabilities.
A: That includes being in the IDE, the integrated development environment, where developers are writing code. In the longer term, we do have to mature our security offering overall. We've gotten by on a lot of open source, kind of bringing things together and providing a sort of platform; that's an easier purchasing and operating decision, but we do need to get past that. There are risks that are not just vulnerabilities, and we need to really help people prioritize and resolve the most important things first. We also need to innovate.
A: So the Secure stage has certain roadmap themes, including simplifying security scanning. As we just talked about, often the scanners are coming in early in an Ultimate evaluation, and we need to make sure that it's easy to get them up and running, making it so that you don't have to think about security.
A: We also have to shift left, and even more left than that. Getting testing earlier into the lifecycle makes it cheaper to resolve problems and makes it easier for people to self-remediate. The more developers are able to look at a finding that is high-confidence and correct, resolve it themselves, and not even involve the security team, we've just won the game. The more people can spread the security work out across the organization and make it everyone's responsibility, the better their organizations will operate and the more value they'll see from Ultimate.
A: And the stickier that workflow becomes. That also relates to our theme of security as a team effort: when security testing comes in last, at the end of a release, that's just a nightmare for everyone. The more we can get it shifted left in the process, and the more approachable the results are (the right results, the right number of results, results that people can act on), the better.
A: And the better we are in our ability to deliver a really stunningly good experience with that. We'll shift now into a recent highlights area, where we talk about some of the things that have happened, a sort of roundup of the features across the stage in the last quarter or six months, and then talk about where our heads are for the next three, six, even twelve months: the key features we'll be pushing on and the key problems we're trying to solve in that time.
A: So, recent highlights. I'll start with some static analysis items before passing it over to Sarah and Derek, who cover composition and dynamic analysis.
A: Some really exciting things in static analysis. A lot of times, when we find a rule is problematic, we want to remove it so customers don't see it anymore; or if they disable a rule, it means they don't want to see it anymore.
A: So now, when that happens, we'll actually automatically resolve findings. The findings that existed before are no longer valuable (we've judged they're not valuable), so they can be automatically resolved, a comment is left there, and they no longer require manual triage. We actually used this to resolve the single most false-positive-prone rule in SAST, which we had been waiting a while to remove, waiting for this feature.

A: Now, moving more toward secret detection: we're making it so you don't have to triage a finding to even get a response; you'll be more secure automatically. If you publicly leak a GitLab personal access token, or a project or group access token, that's now automatically revoked. So if you do that in a public repository, we will just catch it and let you know that we got rid of it. That protects you within seconds, instead of waiting for you to see it and respond. That response also now covers all branches, not just the default branch.
A: Then, we also need better results. I talked about high-confidence results and smoother integration into the workflow. One thing across the whole stage is that all the Secure scanners now support MR (merge request) pipelines; previously it was just branch pipelines. This was released a number of releases ago in the latest templates, which are allowed to have breaking changes, and it will be on by default for all stable templates in 16.0 and above.
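As a sketch, opting into the "latest" template editions ahead of 16.0 can be done from a project's `.gitlab-ci.yml`. The exact template paths here are my assumption and may differ by GitLab version; treat this as illustrative, not the definitive setup.

```yaml
# Sketch: include the "latest" security template editions, which run
# the Secure scanners in merge request pipelines rather than only
# branch pipelines. Verify template paths against your version's docs.
include:
  - template: Security/SAST.latest.gitlab-ci.yml
  - template: Security/Secret-Detection.latest.gitlab-ci.yml
```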
within
static
analysis.
A
We
also
added
new
SAS
rules
for
better
coverage
of
the
languages
that
we
do
support
and
we
integrated
by
proprietary
technology
to
reduce
false
positivities
for
go
you'll,
see
in
the
future
direction.
That
will
also
be
integrating
more
proprietary
technology
to
cover
up
where
some
of
the
other
open
source
tools
are
lacking.
B: Thanks, Connor. So we recently introduced a new license scanner that's integrated with dependency scanning, so they support the same languages and versions, and you're no longer required to run a separate job for license compliance, which results in faster security results and fewer CI pipeline minutes being consumed.
B: The results get updated automatically without running another job, and it's capable of identifying over 500 different license types. Our own testing has demonstrated that the results are more complete, and if you look at the two screenshots here, you can see the previous version on the left.
B: There were a lot of unknown license types being found, and with the new license scanner we just have many more results appearing for the exact same dependencies in both of these lists. There's also better support for packages that are dual-licensed or have multiple different licenses.
B: We've also released a number of other exciting dependency scanning features. We've released a new variable called DS_MAX_DEPTH that allows you to configure the dependency scanning depth, so you can scan your entire repository for lock files. And then we've also added support for npm lockfile v3, pnpm, yarn v2, and yarn v3. And I'll pass it over to Derek.
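As a rough sketch of the depth variable just described, it can be set alongside the dependency scanning template. The value semantics (defaults, and whether a sentinel means "unlimited") are assumptions to verify against the docs.

```yaml
# Sketch: raise the dependency scanning search depth so lock files
# nested deeper in the repository are found and scanned.
include:
  - template: Security/Dependency-Scanning.gitlab-ci.yml

variables:
  DS_MAX_DEPTH: "10"   # depth in directory levels; check docs for default
```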
C: All right, thanks, Sarah. The thing that we're focusing on here with dynamic analysis is that we released our new proprietary DAST scanner, the browser-based scanner, which runs dynamic analysis from a browser instead of using a proxy to analyze the results. This actually helps scan single-page applications and modern JavaScript frameworks, something that DAST tools historically, across the board within the market, have had issues with. So this will significantly improve our application coverage, as well as reduce the number of false positives that are reported.
C: It also provides a much easier way to debug and configure your scans, by providing artifacts like an authentication report, which gives you screenshots of the authentication process so you can see where something might have gone wrong, as well as a crawl graph showing you exactly how we crawled your application and what pages were scanned. It also provides a URL output that will tell you exactly what we covered within the scan.
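For reference, a minimal configuration for the browser-based scanner might look like the following. The variable names are my assumptions based on the talk; treat this as a sketch under those assumptions, not the definitive setup.

```yaml
# Sketch: run the browser-based DAST analyzer against a deployed site.
# Variable names (DAST_WEBSITE, DAST_BROWSER_SCAN) assumed; verify in docs.
include:
  - template: DAST.gitlab-ci.yml

dast:
  variables:
    DAST_WEBSITE: "https://staging.example.com"  # hypothetical target URL
    DAST_BROWSER_SCAN: "true"                    # enable the browser-based engine
```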
C: All right, moving on to our roadmap plans. First, I need to say that this is looking at upcoming products, functionality, and features, so it's very important that this information is used for informational purposes only; don't rely on it for purchasing or planning purposes. As with everything that we do, things may change, so everything here is subject to change or delay.
C: So we'll be working through multiple vulnerabilities utilizing this callback server. For authentication, we're planning on improving quite a bit, including things like the stay-logged-in feature for multiple cloud providers; improving logout detection, so that we know if the application has logged out during the scan and we need to re-authenticate; doing more with the auth report and the crawl graph; and making it easier for users to support more customized authentication workflows and MFA, to make sure that we support the things that are being used in enterprises now. And then we'll look at API security.
C: Next: with this, we're really looking at API discovery as one of our first focuses. We want to be able to find REST APIs, since we've found that one of the big blockers to entry for API security is that a lot of companies don't have a specification or definition that they can use to guide the scan. So we want to be able to do that automatically, and we'll be looking at Java Spring Boot first.
C: You can include that in your repository, not just for the API scans but for generic use outside of any scanning jobs. Then we want to be able to analyze APIs better, to discover relationships between the API calls and parameter dependencies, so that we can call things like an insert before we delete, or make sure that the data is in the database before we try to call it in a subsequent API call.
C: We also want to be able to know the type of data that the API calls are expecting, so that we can more intelligently provide input and make sure that we can let you know when things are not responding.
B: Thanks, Derek. Continuing with that theme of simplifying security scans that we talked about previously, the composition analysis team is focused heavily on continuous container and dependency scanning, so we've really re-architected both of these features.
B: These are in progress, but once complete, you'll be able to identify vulnerabilities as soon as the advisory database is updated, without needing to run additional scans, which is a huge benefit for all of our customers. Rescanning can be done automatically as soon as new threat data becomes available, and identical findings will be deduplicated automatically. This is also really going to pave the way for us to add alerting in the future, when new critical vulnerabilities are detected. And then we're also working on scanning all images within the GitLab container registry.
B: We know that there are a lot of pain points our customers currently have, where they can't easily scan multiple container images that are built in one pipeline, or can't easily scan container images that are built by other vendors, and this will help solve for that. We're moving to do this by default, so that you don't have to manually run or schedule container scans. The findings will appear in a new tab on the vulnerability report page, and a summary will also be shown on the registry page. It's really going to better empower security teams to get visibility into potential vulnerabilities in all of your container images. And with that, I'll hand it back to Connor.
A: Sure, really exciting stuff. So for static analysis, I just wanted to cover some of the main themes that we're investing in, and the two sort of high-level summaries here are more efficient triage, and smoother day-one, day-two, and onward operation.
A: For triage, it's sort of about having better results in the first place, and we have a variety of things we're doing to make sure that the results you're getting are higher-confidence and better quality. That includes some proprietary detection technology for SAST, in detection mode.
A: Previously, we've used that more for catching false positives from other analyzers. It also includes rule improvements, making the things we're looking for more specific and more likely to be correct, and better explanations: when we have a result, explaining it better, so that people can understand the finding and resolve it, without maybe suspecting it's a false positive just because they didn't understand what the finding was.
A: Also, people often have opinions about which rules they want to scan for, or which ones don't apply in their context. Today you can customize rules at the project level; now we're adding group-level customization, starting with an MVC that's based on CI/CD variables, but we're looking at a longer-term direction there as well.
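For context, the existing project-level customization mentioned here is typically expressed in a ruleset file in the repository. The schema below is sketched from memory and should be verified against the documentation; the rule identifier is hypothetical, used only for illustration.

```toml
# Sketch of a project-level override in .gitlab/sast-ruleset.toml:
# disable one analyzer rule that doesn't apply in this project's context.
[semgrep]
  [[semgrep.ruleset]]
    disable = true
    [semgrep.ruleset.identifier]
      type  = "semgrep_id"
      value = "gosec.G404-1"   # hypothetical rule ID for illustration
```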
A: But if we peek past that, one level deeper, it's likely that there are findings like examples, or documentation, or tokens that are not active anymore. So really getting past that everything-is-critical phase will help security teams direct their attention to the things that need to be handled right away, versus the things that we've maybe even automatically responded to, or that are examples. That also improves the ability to use scan result policies, so that you can set approvals for what you really need to know about. And then there's showing static analysis findings in the MR view, right in the changes, in the diffs. That's available for Code Quality today, and we're adding SAST there, so that when you see a SAST result, you can see it with the code, instead of having to see it in a widget or a pipeline report and then try to bounce back to the code and piece the details together out of context. That really makes it easier to understand the results. And then there's smoother operation.
A: Smoother operation is really about being able to configure and operate the scanners more easily. A really key thing is improving vulnerability tracking: once a finding is found, how do we track it so that we're not raising new findings, or saying that something's no longer detected, when it's really just moved? Especially for secret detection: the current algorithm uses only the line number.
A: Really, we can use the contents of what's found, not just the line number, to make this better. A sort of strategic-level push we're making is toward a service architecture for secret detection, to provide better coverage. Right now you can only scan code bases, because it runs in pipelines, but tokens leak in other places. A service will help us do this, and also help us be on by default, without requiring CI/CD pipeline modifications.
A: We have on the docket about three times more integration partners for automatic secret leak response. When we can send public leaks to partners, to have them fix or mitigate the severity of the leak automatically, it means they'll be able to contact the person whose credentials were leaked, and sometimes apply technical safeguards to make sure the account is protected in the meantime; sometimes they'll also automatically revoke the credential. And in SAST, you've seen our work to reduce the number of SAST analyzers over time.
A: We'll continue to do that, centralizing toward a Semgrep-based analyzer for languages where this is appropriate, and using proprietary technology for others. We're especially targeting those analyzers that require compilation, because that's a hard operational experience: trying to make sure that the build has completed and that we can run the build, instead of just being able to scan the source code directly. And then for Code Quality, the focus is really on reducing the CodeClimate dependency, removing that dependency, because it provides a poor operational experience.
A: We're likely to be able to invest in this type of thing for Code Quality, but we aren't immediately planning proprietary scanning technology for it; it's more about the ability to mix and match the tools that you use, to make your workflow more efficient.
A: Now, just to summarize some of the main things we've talked about in this presentation, across maybe the next 12 months: you'll see that there's a mix of short-, medium-, and long-term efforts. I won't necessarily read them all out, but you'll see that we're investing across the categories to improve the experience: continuous scanning for dependency scanning, making it easier to understand SAST findings by showing them in more places, and prioritizing secret detection results better.

A: Then you start to see additional, perhaps more strategic-level investments: better, more advanced finding tracking; coverage for SAST; cleanups of the ruleset; scanning of all the container registry images; SAST profiles as another way to manage rule customization at scale. And then, toward the end of this 12-month period, you'll really see a lot more of the proprietary things coming in: tracking SAST findings as they move, with a new method, or a proprietary analyzer for Go or other languages. It's a really exciting set of things that we're looking forward to.
A: I think I can speak for all of the product managers and product groups presenting: we're really focused on delivering a quality security experience that helps security teams bring the right results to the right people at the right time. That's better results, in more places, easier to act upon, and easier to achieve the governance that you need. So we're really looking forward to the live session, where we can discuss any questions that folks have. Please also check out the governance strategy review.
A: They pick up where we leave off, in terms of some of the features we're discussing, and we look forward to both learning more and delivering these features. Thanks for coming.