From YouTube: Secure Stage Strategy Quarterly Review: 2020-12-09
Description
Secure Stage Direction - https://bit.ly/2F7WBxd
* SAST Direction - https://bit.ly/31TUyWI
* Secret Detection Direction - https://bit.ly/3lPMRZq
* DAST Direction - https://bit.ly/31UrjTy
* Dependency Scanning Direction - https://bit.ly/2QTc2wc
* Fuzz Testing Direction - https://bit.ly/3lHAazQ
* Vulnerability Database Direction - https://bit.ly/353jrBi
* Vulnerability Management Direction - https://bit.ly/32U9GTk
* Security Orchestration Direction - https://bit.ly/37SaiLx
A: Okay, and welcome, everyone, to our quarterly Secure stage strategy review. I'm David DeSanto, Senior Director of Product for our Sec section, so we're focused on the Secure and Protect stages. Today we'll be giving a Secure strategy update. I'm joined by all the PMs, who are here to give you brief updates, but we're also here to answer your questions. I see some people have already added questions to the document; we'll get to those at the end.
A: Our goal is to leave a good chunk of time for Q&A today, so feel free to ask questions along the way or add them to the document, and we can answer them during the Q&A part.
A: Samuel (Sam White) is here; even though he's part of Protect, we'd like him to give an update on the Security Orchestration category, which goes across stages. That'll be followed by me playing Taylor to give an update on SAST, Derek for DAST, Nicole for Dependency Scanning, and then Sam Kerr will take us home with fuzz testing. Before we get started,
A: I wanted to highlight a couple of accomplishments since the last time we talked. First, we continue to see a rapid adoption rate for Secure, and that's due to all the hard work you're all putting in. By extension, it's also leading to accelerated growth in our Gold and Ultimate ARR.
A: I can't stress enough how awesome we're doing as a team and as a stage. We're definitely making an impact, both for the company and for our customers as well.
A: We also had Vulnerability Management move to Viable maturity this week; that's big and exciting as well. We also have fuzz testing nearing Viable. I believe Sam is kicking off the CMS (category maturity scorecard) process, and SAST is very close as well. If people have questions about that, I'll do my best to answer; if not, maybe Thomas could also play Taylor, and maybe together we can have the energy Taylor has. Next, I'll share my screen.
A: You can see some of these, but we are getting very good coverage within the industry for what we're doing today. First, this is a new report that just came out: GigaOm listed us basically on their leader line here, with an arrow getting very close to the center. You'll notice that's a third party giving us their external review, talking to our customers, and they're seeing us as providing a lot of value.
A: Forrester also included us in their Tech Tide for application security, with a call-out to our new API security focus between the fuzz testing team and the DAST team. So that's also very exciting.
A: There's also the great work the research team is doing, whether it's the recent data that was provided for vulnerability fingerprinting and tracing, the great demo video that James gave related to VET, or the numbers that Seth and the VR team are sharing with regards to what Browserker is showing. So, a lot of great work with initial results that I expect we'll have more to talk about next quarter. Next, the updates to our one-year direction.
A: If you all got a chance to read it, you may have noticed the first thing: we moved protocol fuzzing up to What's Next from what we're not doing in the next 12 months. Sam will probably touch on this during his direction, but at a high level we're getting a lot of great response from what we're doing in fuzz testing, and people are asking for the protocol fuzzer to be available as soon as possible, so I know Sam is working with the team to see how we can begin to pivot toward that.
A: The other change is that we pushed vulnerability (sorry, machine learning) out of the next 12 months. That's primarily due to the fact that we now have a single-person group: Stan Hu, if you're not familiar with him, is a distinguished engineer who's working on our initial goals for MLOps and insider threat. Because of that, we've pulled that out of the current plans for Secure, and we'll let that work through. I'll include a link to this, Taylor.
A: Congratulations to the SAST team for making that one of their focuses; we're definitely seeing the improvement in usability on the SAST side, and I know that's a question Seth has in the agenda, and I know the rest of the teams are working on that as well. So just keep in the back of your minds, as you're working on things, that as we continue to grow, more users are using the product; we need to make sure it's easy to use and easy to approach.
B: Thanks, David. So I'll start off with a cheat: I didn't list two to three recent items making a difference, because I think it's harder to pick out individual things; instead I want to focus on the totality of what we've delivered on the vulnerability management team over the past several months.
B: It includes a lot of things on the back end as well that are not necessarily visible, specifically around consistency when it comes to the MR: what you're seeing in the security widget and the diff. All this is to say, David, you kind of stole my thunder; I had that as my highlight here. But as of yesterday, our category is officially at Viable maturity, which is extremely exciting and impressive, because we just moved to Minimal eight months ago. That's actually two maturity revs within the calendar year.
B: It's pretty fantastic. The category maturity scorecard, if you're not familiar, is what we're using internally now to measure when we have achieved the next maturity level. It basically involves putting internal or external users (depending on where you're at) in front of your product, walking them through their primary jobs, and seeing whether they can do them or not. So it's a user rating system. Pretty exciting. And to kind of show off a little bit more specifically, I'm going to share a slide here.
B: Just to give a sense of where vulnerability management was: we only had a security dashboard. Another little thing: included right here is just the vulnerability list. It was a very limited "here's what we found in your default branch," and at the group level we had a widget on the same page, so there was a lot of information on screen, and you'll notice there's not a lot you can do here.
B: That was just about six to eight months ago. This is what things look like today. On the project security dashboard we now have information about the latest pipeline, so you can see at a glance whether there's a problem, which may indicate I need to go re-run security jobs to get the most up-to-date results. We've added things like multi-select; right now you can bulk dismiss, and coming soon we'll have the ability to actually change status to other things as well. Detected date: it's little things like this.
B: We weren't listing the detection dates, so it was hard to know whether vulnerabilities were stale or not. Sortability: little things again that add up to a big impact on the way appsec teams actually try to do their triage work, allowing them to focus in on exactly what they need, whether by status, detection date, severity, etc.
B: Other things like showing if we've got related issues, or if a vulnerability has actually been remediated and just needs somebody to review and accept. So this is a snapshot of what we've got today, the culmination of the last six to eight months' worth of work. Super excited about that.
B: This is work that he started to basically allow us to have a more flexible information structure. It's a way we think we can onboard not just new information from our internal scanners but from partners as well, and it opens the door to writing some really cool custom-built tools that will integrate into the vulnerability management workflow. Jira integration is another exciting one we've got in flight right now; we're working on the MVC. This is probably the highest-requested feature we have in vulnerability management from customers: creating issues in Jira from a vulnerability. What's really exciting about this is that we will be the first area in all of GitLab where you can actually create something in Jira directly from the GitLab interface. So that's pretty cool; it's a single click to get there. And then finally, I've linked this next item.
B: This is a little bit further out, but I've been working with Andy (I should say, Andy's been doing the majority of the work here), and he has outlined our broad future vision for the vulnerability management UI over the next six to 12 months. It's still at the very beginning of solution validation, so not quite ready for consumption, but I put it there.
B: I want to show off very quickly what we've got in mind for this; you'll see this fairly soon. Adding a new finding is going to be as simple as going to your vulnerability dashboard; you can fill in your basic information, say "possible SQL injection," and you can fill in your description.
A: I will use the Zoom chat to keep you all aware of your five minutes, and in Matt's case, of being over by three. So, Sam White, you want to go next?
D: Awesome, well, thanks. Those look like some great features, Matt. For security orchestration: this is a new category in GitLab, and it's one that spans both Secure and Protect. I just wanted to run through some of the high-level problems to solve. This is not a comprehensive list by any means, but among the challenges we're seeing, we've got organizations with a lot of projects and no way to manage those centrally.
D: The way that works is you get somewhat of an English-language if-this-then-that statement that you can then go through and add clauses to. So you can say: if a pipeline is run on the default or master branch, then I want to require SAST to run, and I also want to require DAST to run, for example; and you can then specify the settings you want to use to run that scan.
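The if-this-then-that structure described here can be sketched as data plus a tiny evaluator. This is a hypothetical illustration of the concept only; the field names are invented and are not the shipped GitLab policy schema.

```python
# Hedged sketch of an "if this, then that" scan execution policy:
# a rule matches the pipeline context, and the actions name the scans
# to require, along with their settings.
POLICY = {
    "rules": [{"type": "pipeline", "branches": ["master", "main"]}],
    "actions": [{"scan": "sast"}, {"scan": "dast", "site_profile": "prod"}],
}

def required_scans(policy, branch):
    """Return the scans to enforce for a pipeline running on `branch`."""
    for rule in policy["rules"]:
        if rule["type"] == "pipeline" and branch in rule["branches"]:
            return [action["scan"] for action in policy["actions"]]
    return []

print(required_scans(POLICY, "master"))   # ['sast', 'dast']
print(required_scans(POLICY, "feature"))  # []
```

The same shape extends naturally to the other rule types mentioned later (schedules, commit-time triggers) by adding rule kinds rather than new code paths.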
D: You could also use this to do things like schedule a scan to run on a regular interval, or, in the future, potentially run some of these scans even at commit time, when a commit is pushed up to a branch. The most useful one for that would be something like secret detection, if you're wanting to prevent secrets from ever making it into the code base to begin with. On the scan results policy side, these types of policies govern what happens when a scan completes, rather than when a scan runs.
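As a concrete illustration of why commit-time secret detection is attractive: even a single well-known pattern, such as the widely documented AWS access key ID shape (AKIA followed by 16 characters), can block a leak before it lands in history. A real ruleset contains many more patterns; this sketch only shows the idea.

```python
# Minimal secret-detection check over a pushed diff: one regex for the
# documented AWS access key ID format. Real rulesets cover many providers.
import re

AWS_ACCESS_KEY = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_diff(diff_text: str) -> bool:
    """Return True if the pushed change appears to contain a secret."""
    return bool(AWS_ACCESS_KEY.search(diff_text))

print(scan_diff("aws_key = 'AKIAIOSFODNN7EXAMPLE'"))  # True: block the push
print(scan_diff("print('hello world')"))              # False
```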
D: So with this type of policy you can say: if any scan finds one or more critical vulnerabilities, I want to fail the pipeline, and I want to send a Slack notification to let me know about it, for example. And again, you can customize this depending on your needs; if it's license compliance, maybe you're looking for certain licenses. But right now this is all a prototype. The prototype is way ahead of where we actually are today, because the category is so new.
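A scan results policy like the one described ("fail the pipeline and notify Slack on any critical finding") could be evaluated along these lines. Since the feature was a prototype at the time of this talk, this is a hypothetical sketch with invented field names, not the actual implementation.

```python
# Hedged sketch of a scan result policy evaluator: when a scan completes,
# compare its findings against the policy threshold and decide which
# actions to take.
def evaluate(policy, findings):
    matched = [f for f in findings if f["severity"] in policy["severities"]]
    if len(matched) > policy["vulnerabilities_allowed"]:
        return policy["actions"]
    return []

policy = {
    "severities": ["critical"],
    "vulnerabilities_allowed": 0,  # zero tolerance for criticals
    "actions": ["fail_pipeline", "notify_slack"],
}
findings = [{"severity": "critical"}, {"severity": "low"}]
print(evaluate(policy, findings))  # ['fail_pipeline', 'notify_slack']
```

Swapping `severities` for a list of denied licenses gives the license-compliance variant mentioned above with no structural change.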
D: Just to give you some context into that, here's sort of a really big matrix of all the scanners we would like to support, along with the different types of policies; and we also want to do this at the project level, the group level, and the instance level. So for now, our initial MVC is starting just with DAST, and we're starting just with scheduled scans, so the ability to schedule a scan to run on a regular daily or weekly interval; and again, for the MVC we're scoping that just to the project level.
D: That way we can get our feet under us and get our initial architecture in place. After that, we plan to expand to SAST, as well as to move that up to the group and instance levels. Some guiding principles that were not covered in that brief demo: we do want to make sure that all policy changes are audited; we also want to provide for a two-step approval process, so no one can edit the policies by themselves; and there are a few others in there as well, but I just wanted to provide that at a high level.
A: All right, thank you, Sam. To provide some highlights for SAST: I do recommend, if you've not had the chance, reading through the updates Taylor made to the direction page; there's a lot of really good detail there. From the recent items Taylor called out in the agenda document, the first thing he highlighted is custom rulesets, which have been made available for SAST and secret detection.
A: We're getting customers actively engaging with us on this, and he's noted that some customers are beginning to write their own rulesets, and they're seeing that as an improvement. I actually had a quarterly executive business review with one of our customers' leadership teams, and this was called out by our sponsor there, the person who uses (or at least champions) the product the most: it's something they're seeing as a great improvement to the results they're getting with SAST and secret detection. The next one I'm very excited about as well.
A: They did the integration work for MobSF and then contributed that back into our code base, so everyone can use it. That's very exciting, both to have that support and to have that community forming around our product. And then, finally, the post-processing of leaked secrets.
A: This is now a step where we can actually call revocation of leaked secrets in public projects. As we have open source customers who are running public projects (some companies, like us, also do that), we're able to call AWS to revoke the key before it's abused by someone after being leaked publicly. So those are some of the things that are planned.
C: On that, the post-processing of leaked secrets: many companies have such a problem with that. When a secret is leaked, there's tracking where it's in use, revoking it, updating things to the new secret. Some companies have thousands of secrets that they manage, and it becomes an unwieldy mess. I think this will be really popular, especially with enterprise customers.
A: That's just a really bad security joke. So anyway, what I was going to say as well is that this is already having a big impact, even though it just recently came out. Taylor and Thomas and the static analysis team also partnered with Global Search, and the changes in Global Search to make it so our searches don't actually return sensitive data have also made a big impact as well.
A: For some things coming up that he's excited about, I'll hop to a combination of the comment he has there about reducing false positives and VET. This is a video that's linked off the issue (Taylor has it in the agenda document), but if you're not familiar with what's going on within the project, this is a great walkthrough that James did. I'll hop near the end. We're all familiar with what the pipeline view looks like on a scan prior to using VET.
A: There were three returned vulnerabilities, and actually two of those are false positives. What the vulnerability research team is working on is the ability to identify that and mark them as such. If I hop ahead a little bit, here's an example of what James demoed: this was automatically marked as a false positive by us, because we're able to look at the tree of the application and see that it was actually not remotely exploitable. To walk you through it:
A: This line here is what was flagged by the scanner, and here you can see it works its way back up to where that is actually called, and you can see the return value is actually just static text. It's not a place where a user or an attacker could inject something to be executed, and because of that it was able to be marked as a false positive. Now, that's still visible to the customers.
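The reasoning described here (tracing a flagged sink back to its source and noticing the value is static text) can be illustrated with a toy static check. This is a deliberately simplified sketch of the idea, not the VET implementation, and it only inspects single functions rather than a whole call graph.

```python
# Toy illustration of constant-source analysis: if every return value of
# the function feeding a flagged sink is a compile-time literal, no
# attacker-controlled data can reach the sink, so the finding is likely
# a false positive.
import ast

def is_constant_source(source: str, func_name: str) -> bool:
    """Return True if every `return` in `func_name` yields a literal."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name == func_name:
            returns = [n for n in ast.walk(node) if isinstance(n, ast.Return)]
            return bool(returns) and all(
                isinstance(r.value, ast.Constant) for r in returns
            )
    return False

code = '''
def greeting():
    return "hello"      # static text: not attacker-controlled

def echo(user_input):
    return user_input   # flows from a parameter: potentially tainted
'''

print(is_constant_source(code, "greeting"))  # True: likely false positive
print(is_constant_source(code, "echo"))      # False: keep the finding
```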
A: The final thing was our push to get adoption in earlier tiers.
A: It's going to allow them to click "enable with a merge request," and that's just going to enable the scanner in the pipeline. But, as was highlighted by a couple of people before me and will probably be highlighted again, this is where customers are getting tripped up: as we've expanded our customer base, not everybody is as technical as some of our original customers.
E: The main thing we've been focusing on recently is on-demand scans. As many of you know, this is really focused on developers and security team members who want to run DAST scans outside of their pipelines, to scan code that's already been deployed and when there are no changes to the code. One of the main things we've been doing after the MVC release of on-demand scans is adding more options for users here.
E: You can see, for example, there's now the active-versus-passive mode, and then hopefully in this release the plan is to also add things like authentication and header support. We're hoping to have this fully GA'd this release; we'll see how things work out with that, but adding in all those additional options is really what will get us to GA and really increase usage of on-demand scans, as more people find it useful for their sites.
E: So we've got that going on. Then, moving forward, we're focusing on removing the noise that DAST can cause within the dashboards and the vulnerability reports by aggregating DAST vulnerabilities. Anybody that's run a DAST scan will know that
E: sometimes, depending on how many pages you have in your application, we can report a single vulnerability hundreds or thousands of times, depending on the number of pages. So we're working on aggregating all of those into a single vulnerability with the URLs listed out, so that you only have to deal with one of them, especially when the fix is typically one thing that fixes it for all the pages, such as a server header or a line of code that's included on all the pages. Moving past that, we're also actively working on integrating Peach API scanning for DAST.
E
We
already
have
api
scanning
right
now,
but
integrating
peach
allows
us
to
get
a
lot
of
new
features
out
of
the
gate,
including
things
like
soap
scanning,
which
is
something
that
we
don't
have
right
now
and
multiple
different
entry
points
for
the
api
other
than
just
an
open
api
spec,
which
is
what
we
support
currently
and
then,
of
course,
a
lot
of
you
know
that
we're
working
on
browser,
which
is
a
browser-based
scanner
for
dast.
E: This will allow for much better coverage, with users being sure that their entire application is being scanned rather than a small subset. Past that, we're going to be focusing on UI configuration, working with Sam White and his team to do scheduled scans within the policies UI, and trying to build out better UI options for the customers that are using DAST, so that configuration is not as big of a sticking point as it is right now. So that's mainly what we've got going on in DAST.
E: You can see on the direction page here that we've got a lot of things coming up, and some of these things are going to take longer than others, such as Peach and Browserker being implemented. So there may be a few releases here where we don't have major feature updates, but hopefully, when we do release these things, you can see the massive benefit that they'll get us. And with that, I will hand it off to Nicole.
F: I guess we'll find out. So, recently we added NuGet package manager support, which gave us C#; looking at the language statistics for projects of Ultimate users, that's now three percent of composition analysis users. Those are users that, up to that point, couldn't use dependency scanning at all, so I think it's pretty exciting that we enabled all of those people to start using our scans.
F: In addition, we've also been steadily working on merge request approval improvements for a while. I don't know if anybody has tried to set those up before, but we added these nice little popovers to help you easily set it up and know if you've got, say, a license check active, or over here,
F: I do not yet have statistics (I'm working with the data team to find out how many of these are currently in use today), but I know this was a frequently asked question previously, and I haven't been getting it for the past few months, so fingers crossed that means this has been working: how do we prevent critical vulnerabilities from going out to production, or how do I prevent licenses that we want to deny from going out to production?
F: But what if you just want to quickly grab this and go try it out, or what if you're a little bit less technical? We're going to go ahead and just make that MR for you, and that edits your pipeline config YAML, so you can see what it's doing in the merge request. Then, if this is as effective as it was for SAST, we'll work on adding a configuration page next. And then the other thing that's coming up is dependency pathing.
F: Today you can see on the dependency page that you've got a particular component with issues, but you may not know exactly where it's coming from, and this little popover is going to help you with that. We're going to implement this popover in multiple places; we're going to work through certain languages first (or at least certain lock files first) and then spread it out to everyone.
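Dependency pathing amounts to answering "how did this transitive component get into my project?" by walking the graph that a lock file encodes. A minimal sketch with an invented graph; real lock files and package names will differ:

```python
# Depth-first search from the project root down to a vulnerable transitive
# dependency, returning the chain of packages that pulls it in.
def find_path(graph, start, target, path=None):
    path = (path or []) + [start]
    if start == target:
        return path
    for dep in graph.get(start, []):
        found = find_path(graph, dep, target, path)
        if found:
            return found
    return None

# Toy graph as a parsed lock file might yield it.
lockfile_graph = {
    "my-app": ["express", "lodash"],
    "express": ["qs", "cookie"],
    "qs": ["side-channel"],
}
print(find_path(lockfile_graph, "my-app", "side-channel"))
# ['my-app', 'express', 'qs', 'side-channel']
```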
G: All right, thanks, Nicole. So, I was having a really fun time preparing for this meeting; I thought it was a good opportunity to take a step back and think about where we were a year ago. Last December, the fuzz testing group did not exist. I think it's been an incredible journey: our group has formed, we've finalized the two acquisitions of both Fuzzit and Peach Tech, and, as of this quarter, we've completed our first integrations of both products.
G: In a similar vein, I realized that every iteration of this meeting I've gone over time while reviewing this direction page, so please take a read of it asynchronously. There is an easter egg in there; if you find it, let me know. But I just want to focus on a couple of parts of this page that I think are really worth highlighting.
G: We've made a large amount of progress this past year, but one of our key areas of focus is still going to be the usability issues with fuzz testing as a category. In this space, we're really up against the misconception that fuzz testing is very hard and difficult to use, and that's why one of our guiding principles is usability for fuzz testing: making customers and end users able to use it successfully without much (or any) of our help is really going to be paramount.
G: So I'm really excited that we were able to bring this into GitLab. In a similar way, with that usability focus, after we had released our Java support we started working with end users, and we found that the way we had approached it, using the JQF engine for fuzzing, didn't fit all of the use cases around Java, specifically where customers and end users were using Spring.
G: I was really proud of the team and how quickly we were able to iterate, showing those GitLab values, to put something in the product that would enable those end users to successfully use fuzzing on their projects. We also completed our first delivery of API fuzz testing, and we've done a number of enhancements on it as well.
G: They can easily add API fuzz testing directly into those projects with GitLab, and so all three of these things are really impactful. We're seeing a lot of really good commentary when we work with sales and when we work with end users and customers directly, and if you want to go read more about them, there's a link to each of their respective release posts.
G: So that's where we've been; let's talk a little bit about where we're going. David talked about a couple of the points, but let's drill down a bit. Really, one of the key areas we're going to be focusing on in the near future is API fuzz testing: making the results we show, which today live inside the Tests tab on a pipeline, really feel like every other security scanner. We're going to be making those results show up in the security dashboard, MR widgets, pipeline security tabs, everywhere else.
G: What it solves, why it's important, and how GitLab is unique in how we approach it is going to be an area of focus for fuzz testing this coming year. Like I mentioned before, one of the obstacles we face is misconceptions about fuzz testing that come from many, many years ago, when fuzz testing was very different. To help address those, we're going to be hosting live sessions to talk to the community about this, talking to GitLab internal teams, and writing articles and blogs to really help with that educational aspect.
G: I see David's one-minute-left mark in chat, so hopefully we're not too far over. But the last thing I wanted to highlight was protocol fuzz testing. Protocol fuzz testing is another piece that came with the Peach Tech acquisition, and we're going to be working on our plans in 2021 for how we bring that to market.
G: Our current direction is that we want to open source portions of the Peach engine. I'm really excited about this, because it's going to allow for community contributions, people adding features and fixing bugs, and then we'll figure out how certain parts are going to live inside of Gold or Ultimate. We know the appetite is there.
G: I've talked to, I think, probably three different sales teams every single week for the past month asking what we're doing for protocol fuzzing and when it's available, and I'm really excited about where these items are going to take us in 2021. And so, with that, that is the overview of fuzz testing, where we've been and where we're going, and I'll pass it back to David.
A: Hey, thanks, everyone, for the great updates. We can go into the questions now. Todd, would you like to voice over Seth's questions, or Seth, are you on?
H: Okay, sure, sure. So I'll voice over question one, and that is: what is the future of YAML versus database configuration? Should our tools be zero-config, like SaaS tools? Are we still pipeline-oriented? We're taking both approaches today, so it seems like an unanswered question. And it goes on: SAST has a merge request generator, DAST has on-demand scans.
A: Yeah, and actually I missed the last part of that, so I'll speak about the first. What's the best way to word this? As a stage, Seth points to the user adoption journey there; if you've not seen it, it says you must actually adopt Create and then Verify to get to Secure. We're actually seeing that change within our customer base.
A: One of our large government customers in the APAC region was actually adopting Secure while first using Jenkins; they were using us for SCM, but not for CI, and that's possible due to the work that all of you have done to have standard APIs that can be called from anywhere. Now, all of us know that on the back end that means we actually are spinning up a pipeline.
A: I will state, I tried to update that in the handbook, and if someone can explain how to make Mermaid do what you want it to do, I would probably have submitted my merge request; instead, somehow I made almost a half circle by trying to modify the existing rendered-out Mermaid diagram. To go through the rest of the questions, like "should our tools be zero-config": we're actually seeing customers leverage Auto DevOps to kind of get close to that.
A: I don't remember where the team was on making a demo of this, but literally with a couple of clicks you can be up and running, getting Secure results leveraging Auto DevOps. But, as I touched on at the beginning, and as you're seeing in the examples that Seth is calling out, we need to be able to handle customization in an easy way; so SAST is trying out the merge request approach, and DAST on-demand is creating profiles to try to do that.
A: As that plays out, there's probably going to be a point where YAML becomes less of a thing; that kind of goes to my last comment, which I'll make here in a minute. Are we still pipeline-oriented? Yes, we are developer-persona first, and developers will be leveraging pipelines, but that does not prevent us from trying to support other things, like DAST on-demand. And then, on the comment about the future of YAML: I think this one's actually a really tricky question.
J: Yeah, because, indeed, dropping the YAML is a drastic change in our vision. It also comes with a lot of changes in the way we support the execution of our tools, to achieve the main goal, which is major adoption and a non-YAML configuration.
J: There are intermediary steps, and I think James provided a nice breakdown of those: having a dynamically generated CI config from a UI. So I encourage you to have a look at this issue, because I think it clearly highlights the same benefits you're looking for without getting drastically away from the pipeline.
A: Nicole... well, I can read Taylor's, but Nicole, and I guess Olivia, you've got a couple of other comments; did you want to vocalize those as well?
F: Basically, in the far future I want to burn it with fire, but in the interim I think doing a database-driven dynamic generation of a child pipeline is probably going to be where we're at for the foreseeable future, just because of life, the universe, and everything. So I think that'll solve our problems in the short term.
K: If I can voice a potentially contrarian point of view: none of our Secure jobs are configured by YAML. The build pipelines are, and that passes environment variables, which configure our jobs. YAML is simply an interface by which our configurations are expressed, and we should not consider ourselves wed to that. There is a level of indirection that we are forgetting, or not articulating, here.
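To make that indirection concrete: the YAML only sets environment variables, and the scanner reads its configuration from the environment, so any front end (YAML, a UI, or a database-driven policy) can populate the same variables. The variable names below are illustrative, not a definitive list of what the analyzers read.

```python
# A scanner reading its configuration from environment variables rather
# than from YAML directly; the YAML (or any other front end) merely sets
# the variables.
import os

def load_scanner_config(env=os.environ):
    return {
        "log_level": env.get("SECURE_LOG_LEVEL", "info"),
        "excluded_paths": [
            p for p in env.get("SCANNER_EXCLUDED_PATHS", "").split(",") if p
        ],
    }

# Any front end could have populated these:
config = load_scanner_config({"SECURE_LOG_LEVEL": "debug",
                              "SCANNER_EXCLUDED_PATHS": "spec,vendor"})
print(config)  # {'log_level': 'debug', 'excluded_paths': ['spec', 'vendor']}
```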
A: Good point. I mean, it does kind of fit in with Taylor's comment, Thomas; he says he's a fan of it because it aligns with our pipeline configurations. But when I had this conversation with, I think it was, Derek and Nicole,
J: Yeah, one other aspect that could be helpful in this vision of detaching our features from the pipeline is this idea of having a hidden or ghost pipeline. I just found the issue that Philippe suggested again: from the user perspective, this is not running the pipeline, but it still provides us with the same execution environment. So this could be one more way to achieve that.
A: Yeah, Sam K, you did a little bit of work on that back with me and Felipe.
G: Yeah, that's the short summary of it. We could create a pipeline that's not necessarily attached to an individual commit or merge request, so it could start, stop, and be restarted independently of a developer working on a piece of code. That might be a way we could start approaching something like this.
I: Yeah, so the next question is: we've got a number of tools that are proprietary to GitLab: Gemnasium, fuzzing, Browserker. We talked about that today. So I'm just curious if this is kind of a strategy of trying to get more and more proprietary tools built at GitLab, or whether it just happens to be opportunistic, to meet the needs of specific product functionality.
F: Yeah, for right now there's a possibility. We're going to do an analysis of the license items that are out in the open source environment today, because License Finder just is not doing it, and we are definitely going to evaluate: can we just have Gemnasium do a little bit extra and cover that use case, and not have to use an open source tool? We're going to evaluate how much work that is going to be:
F
How long is it going to take us, versus how long it would take to just bring something in, plus the ongoing maintenance? So I am hoping that each time there's some kind of new item out there, and obviously there are tons of open source projects out there, at least for my group, that's a consideration we take into play. But I'm not actively hunting down things that are external to us.
A
Yeah, to add to that: I would state that there's not an objective or metric we're looking to measure here. I think what's being seen is that we're choosing, in some cases, to acquire technology, like in the case of Peach Tech, where there wasn't a good open source alternative to make that work, as well as continuing to invest in the open source community around fuzzing.
A
It really comes down to, and maybe Seth has seen this the most of all the EMs, the reviews of build versus buy versus partner, and I expect that we'll continue to do all three as we continue to mature our portfolio. To me, it really comes down to how we meet the customer's needs: what's the best way to do that? You can see that in what Nicole just said about License Finder: it's not meeting our customers' needs, so they're exploring other open source scanners, and if that doesn't work they'll extend Gemnasium to support it. That's really why we ended up with the two acquisitions we had, and I see us continuing to do that as need be. Taylor's comment relates to something he's seeing in the market: needing to have a proprietary component related to data intelligence. And this comes down to a question that customers used to pose to us back at the beginning of the year.
A
Why
pay
you
for
ultimate
when
I
can
just
grab
the
same
open
source
scanner
and
start
in
the
pipeline
and
deal
with
the
results
wherever
they
they
land,
and
so
what
taylor
is
seeing
is
the
need
to
have
custom
role,
sets
decisions
being
driven
off
of
data
as
a
way
to
say,
yeah
sure
you
can
use
breakman
today
yourself,
but
you're,
not
getting
that
intelligence
of
what
I
just
showed
that
from
james's
video,
which
james
on
the
call
I
could
just
had
him
walk
through
his
video
in
hindsight,
but
you
know
you
got
to
hear
my
voice
instead
of
his
talk
about
what
was
on
the
screen.
E
Sure, yeah. I agree with what Taylor said, and we're seeing that with DAST as well, but really for us the choice to build Browserker comes down to this: we didn't specifically try to invent anything. We were looking for open source tools, but the opportunity provided by having Isaac on the VR team, with Browserker being a project he had already started and was working on, was just way too good to pass up. We have that expertise here, so why not leverage it and continue to build what he started?
E
Technically, it is still an open source tool, so that gets a little fuzzy. It's similar to if we grabbed an open source tool from the market but happened to have its main developer working for us. So it seems like an internal thing more than it is leveraging open source, but at the same time it is still leveraging open source. It's a little bit of an odd duck.
H
And I'll just throw in mine. I've actually been encouraging this on the vulnerability research team for a few reasons. Taylor and Derek basically just voiced one of them: we believe we have the expertise, we've done the searches, and it just made sense to build it in house. The other one is that we're getting ready to go public.
H
We're actually trailblazing the patent process for GitLab, because right now GitLab doesn't have many, or any, patents that I'm aware of. So having patents under our belt as we're marching towards being a public company makes for a very good story.
G
To vocalize mine: I think a lot of what we're seeing is that in each of those scenarios we did a build versus buy versus partner sort of trade-off. I don't think we have an official x-over-y stance. We do have an explicit principle about avoiding "not invented here," though, just because something wasn't built at GitLab. So where there are opportunities to either buy something or use an open source project, we'll consider those and balance them against whether we should build it ourselves.
A
Thanks, everyone. We've got five minutes left and two questions, so I'd like to see if we can get through both of them. Do you want to vocalize question three?
J
Sure. I've seen a lot of praise for having a customer contribute and add a brand new analyzer, which is MobSF, and I think this is a great argument to promote the usage of the common library as a framework to help build analyzers.
J
I
don't
know
if
this
is
something
want
to
build
on
and
promote
that
to
to
generate
more
contribution
like
this
one,
because
I
believe
this
is
saving
us
quite
some
time,
but
at
the
same
time
we
we
have
several
discussion
and
and
decision
that
head
toward
having
a
less
constrained
around
a
way,
a
unique
way
to
build
an
advisor
which
is
an
opposite
duration
of
having
a
framework
to
tell
how
to
build
it.
And
there
is
no
proper
owner
on
that
on
that
framework,
and
so
I
see
two
diverging
goals
here.
A
So I guess I'll vocalize what I'm writing, and then we can go into James's and Matt's comments. From my point of view, the primary reason that I, at least, am promoting the fact that a customer did that is not necessarily that we got the scanner, though I'm sure Thomas's team likes having other people help with the volume of work they have planned.
L
Yeah, let's see. I wanted to mention that I created a standalone command line tool to generate security reports for the generic security reports brown bag. In that brown bag, or demo, I made a bash-based analyzer to detect time travelers. A silly concept, but it was all from the command line, and it was super fun to do.
L
It
took
me
maybe
half
an
hour
and
that,
with
the
details
field,
I
had
custom
information
from
bash
as
an
analyzer
yeah
anyway,
so
having
an
easy
way
to
produce
security
reports
and
different
scenarios,
so
from
code
or
from
the
command
line
to
me,
that
makes
writing
analyzers
fun.
So
I'm
all
for
it.
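To illustrate the idea of a tiny standalone analyzer, here is a hedged sketch of a report in the general shape of the secure report JSON described above, including a free-form `details` field. The exact schema versions and field names vary by GitLab release; the names below are illustrative, not authoritative.

```python
import json

def make_report(findings):
    """Assemble a minimal security-report-shaped dict from simple findings.

    Field names follow the general shape discussed in the session; the
    schema version string is an assumption for illustration only.
    """
    return {
        "version": "14.0.0",  # assumed schema version, illustrative
        "vulnerabilities": [
            {
                "id": f["id"],
                "name": f["name"],
                "severity": f.get("severity", "Unknown"),
                "location": {"file": f["file"], "start_line": f.get("line", 1)},
                # free-form extras that would surface on the details page
                "details": f.get("details", {}),
            }
            for f in findings
        ],
    }

# A "time traveler" finding like the brown-bag demo's, emitted as JSON:
report = make_report([{
    "id": "tt-001",
    "name": "Possible time traveler detected",
    "severity": "Low",
    "file": "README.md",
    "line": 42,
    "details": {"commit_date": {"type": "text", "value": "2088-01-01"}},
}])
print(json.dumps(report, indent=2))
```

Any script that can print JSON in this shape, bash or otherwise, could act as a custom analyzer, which is what makes the command-line workflow so quick.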
B
So, from my perspective, I would love, and this may be wildly unrealistic, to have the generic security report as a single point of entry. That's what I would love to see long term for us, for both third-party integrations and custom tools.
B
My perspective from the vulnerability management side is that while the tools have a lot of very detailed information, which may have very specific meanings in the context of a given tool, when it comes to the actual output and the consumption of it, there's very little information that's actually needed to do logic inside of the GitLab application. The rest of it is presented in the vulnerability details page itself, so that a user can understand what the tool output.
B
Kind
of
looking
what
what
sam
is
writing
so
my
early
experience
with
some
of
the
partner
integrations
is
that
they
were
constrained
by
the
limitations
of
our
scanner
specific
formats.
B
So I think, even just in the shorter term, I linked there an MR working to add this generic details field, which is taken from kind of a single generic scanner report concept. That's what I'm seeing as more important: a way to include sort of unstructured detail that has definitive types we can render consistently in the UI, so that we're not having to do this custom work every single time.
A
Yeah,
so
we're
at
time,
I'm
okay.
If
people
want
to
stay
and
go
over
the
next
question,
if
not,
we
can
handle
it
synchronously
because
we
have
the
time
I
do
want
to
end
by
saying
you
know.
Thank
you
very
much
team
for
putting
together
all
the
stuff
to
go
over
today.
Everybody's
really
really
excited
about
it.
So
thank
you
for
putting
in
that
effort.
A
So
that's
it
I'll
stay
on
and
let
the
recording
keep
on
going.
If
we
want
to
finish
up
the
last
question.
L
So I guess that's me. I just wanted to ask if we have any goals around visualizations or analytics around vulnerabilities. I guess I should also say, not just vulnerabilities.
L
Currently in the GitLab UI you can see code coverage from testing, but as a user who knows that information isn't that far off, I want to be able to see it in the UI somehow and know what is actually being fuzzed. That's just one example. Another example is the false positive stuff that David showed: it showed data flows, indicating where the data flowed into a vulnerability finding.
L
That
could
be
a
generic
thing
if
the
analyzer
uses
flows
of
data
to
determine
if
something
is
a
vulnerability
or
not
so
having
so.
My
question
is
about
having
goals
or
something
long-term
around
making
those
types
of
extra
data,
accessible
and
kind
of
in
a
single
place
right
without
having
to
go
into
each
individual
vulnerability.
Details.
B
So
I'm
not
a
hundred
percent
sure
it's
it's
quite
where
you're
headed,
but
I
think
the
general
answer
is.
We
haven't
at
least
in
the
vulnerability
management
side
spent
a
lot
of
time
in
the
visualizations
that
to
me
is
sort
of
a
secondary
or
tertiary
thing
to
can
you
just
see
the
vulnerabilities
triage
them
and
manage
them,
but
to
that
end,
splitting
out
the
the
old
dashboards
and
the
vulnerability
information,
the
lists
having
that
all
on
one
page
wasn't
really
conducive
to
expanding
the
visualizations,
so
that
was
kind
of
the
primary
step.
B
So
now
you've
got
separate
vulnerability,
reports
and
security
dashboards
at
all
levels,
so
we
at
least
have
a
landing
place
for
them.
We
do
have
there's
a
vulnerabilities
by
age.
Chart
that's
kind
of
another
common
ask.
I
think
the
challenge
is
there
are
so
many
different
possibilities
and
requests
for
custom
visualizations.
It
could
easily
consume
a
team,
probably
two
or
three
times
the
size
of
the
current
third
insights
team
just
doing
visualizations.
B
Personally,
I
don't
want
to
be
in
that
game.
So
what
I
sort
of
see
the
longer
term
is,
I
would
love
for
any
of
the
the
scanner
teams
that
have
a
visualization
to
be
able
to
put
that
on
the
dashboard,
that's
kind
of
an
option.
You
know
now
and
then
long
term
we'll
need
to
provide
some
sort
of
a
way
to
build
visualizations
custom
visualizations
off
of
the
data
that
we
have
available,
but
that's
more
of
a
grafting,
a
bi
tool
on
top
of
it.
B
So
that's
probably
much
further
out
the
one
missing
piece
is
probably
necessary
to
enable
some
of
this
is
customization
for
the
dashboard,
by
which
I
mean
being
able
to
turn
things
on
and
off
at
a
group
in
project
level,
because
I
may
not
necessarily
want
to
see
you
know
the
coverage
guided
fuzzing
widget.
If
there's
nothing
on
that
project,
for
instance,
so
it's
kind
of
it
in
a
nutshell.
So
I
think
that
the
long
and
short
of
it
is
yes,
but
not
in
the
near
future.
G
So if it's just a number, we can find a way to present that with our results, kind of like what Matt was talking about longer term. In terms of actual visuals, I think that's probably tricky. I'm personally really interested in exploring what reverse engineering tools generally do, like if you've ever used a Hilbert curve to visualize coverage, or seen tools like Cantor Dust. There are probably some open source tools out there like that which we could potentially build off of, but we are a ways out from those.
K
Yeah, sure. A related concept that's been talked about for SAST and Secret Detection is heat maps, as an idea of where there are clusters of vulnerabilities in the parts of the applications being scanned. Other ideas have been talked about related to code coverage: what percent of your project has been covered by the various scan types? These are things that have been discussed but have not progressed beyond the idea phase, because we've got some fundamentals that we see need to be in place first, before we're ready to start thinking about those.
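The heat-map idea above reduces, at its simplest, to bucketing findings by location so the UI can shade the files where vulnerabilities cluster. A minimal sketch, with an illustrative finding shape rather than any real report schema:

```python
from collections import Counter

def heat_by_file(findings):
    """Count findings per file path; higher counts = hotter spots."""
    return Counter(f["location"]["file"] for f in findings)

# Toy findings: two issues cluster in the same file.
findings = [
    {"location": {"file": "app/auth.py"}},
    {"location": {"file": "app/auth.py"}},
    {"location": {"file": "lib/util.py"}},
]
heat = heat_by_file(findings)  # app/auth.py is the hottest spot
```

A real implementation would layer severity weighting and rendering on top, but the aggregation step is this simple, which is why the gating factor is the fundamentals, not the idea itself.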
L
Sweet,
thank
you.
I
it's
something
I
was
curious
about
and
I
was
excited
to
have
everyone
together,
so
I
could
ask
I
I've
been
stewing
on
an
idea.
That's
very
closely
related
to
this.
I
just
haven't
written
it
up
yet
too
many
things
on
the
burner
right
now,.
A
Yeah
so
now
I
know
I
I
can't
stress
enough.
The
questions
are
great
they're,
huge
questions,
so
I'm
glad
we
have
it
we'll
continue
to
shift
to
more
q
and
a
time
as
we
move
forward
since
every
time
we
add
more,
we
still
go
over
so
we'll
we'll
shorten
the
updates
and
make
sure
there's
more
time
for
q
a.
A
But
thank
you,
everyone
for
attending
I'll
post
the
recording
later
today
and
have
a
good
rest
of
your
day,
the
rest
of
the
week,
and
if
I
don't
talk
to
you
all-
have
a
good
holiday.