From YouTube: Discussion about fuzz testing & DAST split
A
So Derek raised some concerns about the Peach documentation, around how we want to position fuzzing alongside some of the DAST capabilities, and I think it makes a lot of sense. If we put all the DAST stuff and the fuzzing stuff together, it might confuse users about which one's which. So what we want to do moving forward, for the initial fuzzing releases, is have the docs only mention the fuzzing capabilities and the fuzzing-related profile information over the next couple of iterations.
C
So, Sam, let me recap what you're saying: we would have both DAST and fuzzing eventually, but the API fuzzing would come out of Peach, and then DAST, what DAST is today, would focus on just the web application, not necessarily the API.
A
Sorry, sorry, I didn't mean to interrupt.
D
Yeah, I was just gonna say, I think it's more of a user-facing split than anything else. We wouldn't be showing any DAST-related information in the fuzzing area, at least not initially, and then we'd integrate Peach into DAST. I think the big thing there is just making sure that the results show up as DAST results. So when you configure a DAST API scan that uses Peach, it doesn't come back as "a fuzzing test found this."

I don't think that we would have the DAST capabilities within fuzzing. Maybe eventually we could figure out a way to have those combined, where you could run both of the tests at the same time, but the results would still need to be somehow split up so that users aren't confused as to "why am I getting DAST results in my fuzz test?"
C
Yeah, so that makes sense. We'd always want to have fuzzing results, which would be strictly fuzzing results, and then anything that's a DAST result, regardless of the fact that it came from Peach, would come up as a DAST result. So I think that totally makes sense. It's obviously easier said than done, because it's one scan. For example, when Peach runs, it doesn't care whether it's fuzzing or whether it's a DAST check; it's just gonna return those results. So we'd have to figure out somewhere in Peach to say what kind of result it is: okay, it's this kind of result, we're going to output this report; it's that kind of result, we'll output the other type of report. So there'll be some work to do within Peach in order to output that. It sounds like we're not quite there yet, so it may be worth having some more discussion as we get there.
B
I don't know if it's necessarily extra work, so much as: if you're configuring API fuzzing, you're only configuring fuzzing settings, not traditional DAST settings. And then, when Mike is ready to start working with Isaac and Derek on integrating it in for DAST API-specific functionality, at that point we would not be setting fuzzing configuration; we'd only be setting the DAST API configuration.
E
So as we do our fuzzing results, how does the back end distinguish between the two types of findings, since we're now saying that we want the fuzzing findings to show up differently than the DAST findings? There's gonna have to be some indicator for how the back end should handle those.
D
Right, but I think that if you don't mix the two, like having a fuzz test and a DAST test in the same configuration, in the same test, then you could use that configuration input that says "hey, I want to run the DAST checks versus the fuzz test," and then output an object based on that. I don't know. Do we deal with the metadata around the JSON report in the engine, or is that done in the Ruby code, outside of the actual scanner?
C
Yeah, so I think we want to handle that outside the scanner, because what we're proposing is that you run two different scans, right? You run a DAST scan or you run a fuzzing scan based on the configuration. The engine's written so that it could run both at the same time, which is ideal from an efficiency perspective, and from a customer perspective you just run one scan. So, if you configure both of those, we shouldn't fire up Peach, run one type of scan, and then fire up Peach again and run another type of scan.

So what we should really do is probably focus on not doing much different in Peach and focus on the result. Peach would output all that data and then just tag it with some particular metadata, whatever we decide, that says "hey, it's a fuzzing result" or "it's a DAST result." And, frankly, I think for the most part that's there already, because it's just using different checks. Then the front end, the Rails code, would say: okay, I'm going to display it this way or that way.
E
In fact, all the fuzzing checks currently have the word "fuzzing" in them, so we wouldn't necessarily have to add any new metadata to the schema. It could be a purely backend sorting mechanism.
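The sorting mechanism described here could be sketched roughly as follows; this is a minimal illustration only, and the finding shape and the `:check_name` field are assumptions, not the actual report schema:

```ruby
# Rough sketch: split raw engine findings into fuzzing vs. DAST buckets
# purely by check name, with no new schema metadata.
def partition_findings(findings)
  # partition returns [matching, non-matching] arrays
  findings.partition { |f| f[:check_name].downcase.include?("fuzzing") }
end

findings = [
  { check_name: "JSON Fuzzing Check", severity: "Medium" },
  { check_name: "SQL Injection",      severity: "High" },
]

fuzzing_results, dast_results = partition_findings(findings)
```

In this sketch the display layer would then route each bucket to its own results view, which matches the "purely backend sorting" idea: no engine or schema change, only a filter on names the checks already carry.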
A
What we're asking for is not new development, essentially, but just to remove the DAST references from the docs, as well as from those template files that users are going to copy. That's a first step we can do orthogonally from the thing we're talking about now, where we're actually splitting those results out. So I just want to make sure that's clear: we're talking about two different sets of deliverables.
E
Yeah, I think that's clear, and actually, after Derek's comment yesterday, I went back and did exactly that. The current rev of those docs, if you want to look at them, has no mention of DAST, and I trimmed down a configuration file; I was able to remove all mentions of any of the DAST checks from it entirely. That's also up on the API fuzzing repo, so you can take a look at that.

Seth had mentioned the possibility of also including, up on that site, a DAST-enabled configuration file that could be used either by us for testing or by someone who wants to explore that functionality. I'm not sure how you guys think of that.
D
I'm not quite sure I understand that statement.
C
Yes, so the documentation refers to a configuration file, and right now, based on the update Mike just did, it's just a fuzzing configuration file. But where that file lives, there's a folder with a bunch of other configurations, including some that have a DAST configuration in there.

So my thought is: we can leave all those configurations, which are all valid Peach configurations, but not document any of them. That way, if someone wanted to switch over to a different configuration file, they could get DAST results, but those wouldn't necessarily be documented. The benefit is that, Derek, if you want to run it, you can just change your configuration name and get DAST results to see what it would look like, but again, that wouldn't be documented in any setup.

So that's kind of the unknown. It's just a quick place to leave those files, to start playing around with stuff.
A
So, I mean, the engine still has the functionality; we're not asking to remove any engine functionality for this, essentially just not publishing it externally for this version. If we have those template files internally for us to look at, because I'm sure Derek has experiments he wants to run or whatever, that's fine. We just don't want users seeing them and getting confused about them.
D
That makes sense to me. I think that's probably the best way forward right now, especially since we're going to be using Peach in DAST. Having that available, even just as a "hey, it's here," could get very confusing, especially once we start to roll it out, because then it's like: do I use what's already there in fuzzing? Do I go to DAST? How do I configure this scan?
B
I guess, in a way, Mike, I hope this makes hitting your milestones easier, because you're only going to be worried about the API security part. And then, Seth, Todd's going to talk to you about the proposal of switching to Peach and Browserker for DAST; we met this morning. Whenever he does that, I look at this API security work for Derek's needs as kind of outside Mike's milestones for fuzzing, so I think they can just be matched up with that.
A
Okay, well, those were the main points I wanted to cover with this. We still have 15 minutes left; anything else we want to talk about while we're together?
C
So one of the ideas is that right there could potentially be where we take a different path. If you plug in a URL, we go with ZAP. If you plug in an OpenAPI specification today, that's going through ZAP and we do the scan through ZAP, but it could be a very easy thing for us to say: hey, if you have an OpenAPI specification, boom, don't load up ZAP; we're gonna load up Peach and run the scan through Peach.

You know, what's the delta in terms of the benchmark, right? What does ZAP show? What does Peach show? If Peach is showing more, boom, we're ready to go. Then we've got a couple of technical things to figure out to do that, but it's a very clean line to implement it right there.
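The dispatch rule just described could look something like this sketch; the detection heuristic (a file-extension check) and the engine names are illustrative assumptions, not the actual implementation:

```ruby
# Rough sketch: pick the scan engine from the kind of target supplied.
# An OpenAPI specification routes to Peach; a plain URL keeps the
# existing ZAP path.
def pick_engine(target)
  if target =~ /\.(json|ya?ml)\z/i # looks like an OpenAPI spec file
    :peach
  else                             # plain site URL: existing ZAP path
    :zap
  end
end
```

The point of the "clean line" comment is that this branch sits at a single entry point, so switching engines later (or adding more target types) touches one decision rather than the whole pipeline.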
D
Yeah, it makes sense, and just as an MVC initial implementation, that makes total sense to me. I think if Peach is showing more in terms of vulnerabilities, we'd quickly want to add on the other entry points, I guess the HAR files and things like that, being able to supply those, as well as enable SOAP and GraphQL scanning, since those are already supported in Peach. But I think it makes total sense to just switch that over once we get a good idea of what we're missing, whether there are any gaps in terms of features between Peach and ZAP that we would lose.

I kind of doubt that there are, because I know that ZAP's API functionality is not incredibly robust, so it seems like there probably aren't, but we probably need to do that analysis to make sure, and then do that benchmarking, which Mike and I actually talked about yesterday.

I think that as soon as we do that, and we figure out the whole splitting-off of the DAST results and making sure they show up in the report as being run from DAST, I see this as being a pretty quick integration once we have time to work on it, both between the DAST team and Mike.
E
So I did dig up those benchmarks we talked about yesterday.

The scan came back with, like, a TLS configuration error and new application security bugs. Acunetix came back with blind SQL, so it found at least one SQL issue in there; it didn't have any false positives, and had four findings. I couldn't see the numbers on Qualys, but I remember the cost being pretty similar. In terms of getting Acunetix running, you couldn't actually do the normal configuration to get an API to work; they had to download a special proxy tool.
A
Well, and for the benchmarking too, we can start looping in the vulnerability research team. I know that they've done a lot of work on making a test harness for evaluating tools on a bunch of different web apps, and they did it with ZAP initially, so we already have a good baseline to compare against.
D
Right, but I don't think that we have a baseline for ZAP for API scans. At least when I looked, I couldn't find anything. I couldn't even find a test project or a test application for the API. So, Seth, I don't know if you know what we tested it against when we did our initial implementation.
C
I'm
not
sure
I'd
have
to
check.
Are
you
talking
about
the
original
benchmark
that
was
done
a
while
back.
C
Yeah, so for the API application, at least on the DAST team, we have not done a benchmark. We've got a built-in test with a kind of fake vulnerability, but that's not really a benchmark. I think Isaac did have some benchmarks that he set up; we could check in with him.
D
Yeah, do any of those... I thought that all of those were actually the web app benchmarks, not the API benchmarks. I can't recall.
C
So the other thing that we'll need to think about, too, is that once we start moving over to our own scanners, any new vulnerabilities that come out, right, we're not really subscribing to any database, and it's not like we're getting the latest version of ZAP.

So that's something that our vulnerability research team, I think, would need to lead us on: saying, hey, there's a new vulnerability, whatever it is, tabnabbing or SQL injection, and providing that vulnerability definition. Then we would need to be writing the code to implement that.

It's not something that I think we're set up to do right now, at least in DAST or fuzzing, but something that we'll need to gear up for.
B
I sure hope that, if they're suggesting we move to Browserker and Peach API security, they realize they're signing up for that.
A
Yeah, so we can confirm, to make sure that's explicit. We've worked with some of the other groups, like SCA, though, to bring in new data feeds to populate for them to check against. So they've gone through that exercise. We'll just have to point out where we want to pull those feeds from, and then they can put a process together to import them like they do with the other ones.
A
For the engine, I think our current SLA is five business days, and that's one of the metrics that team actually tracks; there's a graph somewhere in the engineering handbook. What I'm trying to say is, I don't think this is going to be something completely out of left field or brand new for this team. I think we should be able to solve this in a pretty straightforward manner.
D
All
right,
cool
yeah.
I
don't
think
that
I've
got
anything
else.
I
think
that
it's
all
pretty
clear
what
what
we
need
to
do,
what
the
next
steps
are-
and
I
mean
the
timeline,
maybe
isn't
quite
as
clear
just
because
we're
finishing
up
other
stuff
in
das,
in
the
fuzzing
area
as
well.
So
once
once
we
get
a
better
view
of
the
timeline
and
when
everybody
has
time
for
this,
I
think
that
we'll
we'll
know
better,
but
I
think
it
seems
like
to
me
at
least
next
steps
are
pretty
clear.
A
All right, I think it sounds like we're all good then. I'll post a link to this video recording, once it converts or whatever, into the issue, so we have it, along with a summary of these notes. But thanks, everyone, for hopping on; appreciate the time.