From YouTube: Types of DAST scans
Description
A video where we discuss the differences between:
Passive scans
Active scans
Full scans
API scans
Authenticated scans
A: All right, so what I wanted to do is just talk about the active scan, passive scan, full scan, API scans, and authenticated scans, all of which are mentioned in our documentation in one form or another. So let me share my screen here and we can kind of work through what these are. Are you able to see my screen here? All right, so we've got active and passive scans, which we refer to in our documentation, but we don't actually have them as environment variables, and I think that's one of the reasons there's a little bit of confusion about how these work. So, if you look, we've got a baseline scan; it says "execute a baseline scan", but it won't actually attack your application, but you can also configure it for an active scan. And then down here we have a full scan, which runs an active scan, and I think that's a little bit confusing. So what I wanted to do is quickly define these. So, an active scan...
B: And specifically, the thing that ZAP depends on is a proxy server. So that means when you set up a proxy server in your browser and you navigate to a website, the request goes via ZAP to the website, and then the response comes back via ZAP to your browser and is shown on the page. A passive scan just means that as ZAP handles that request and response on the way through, it reads the headers, the URL, and the bodies, and runs vulnerability rules against them to work things out.
A: It basically sits in the middle, and it just looks at the requests that are going through. It doesn't do anything with the requests; it just analyzes them. It's very hands-off. It's saying: okay, I'm seeing the traffic go through, I see what it's doing, here are the problems I see with it. I'm just going to pass that traffic through; I'm not going to do anything with it. Yeah.
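A passive rule in this sense only inspects traffic and never replays or modifies it. A minimal sketch of the idea, using two hypothetical header checks for illustration (these are not ZAP's actual rules or API):

```python
# Toy passive "rule": inspect a response's headers without ever
# modifying or replaying the request. The specific checks below are
# illustrative, not ZAP's real rule set.
def passive_header_checks(url, response_headers):
    findings = []
    if "X-Content-Type-Options" not in response_headers:
        findings.append((url, "X-Content-Type-Options header missing"))
    if "Content-Security-Policy" not in response_headers:
        findings.append((url, "Content-Security-Policy header missing"))
    return findings

# A response with neither header produces two findings.
findings = passive_header_checks(
    "https://example.test/",
    {"Content-Type": "text/html"},
)
```

The key property is that the function only reads data it was handed; no new request is ever constructed.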
B: So it's passively scanning. Now, ZAP offers a ZAP baseline script which helps users passively scan, and it actually does more than passively scan. It actually spiders your website too. So it basically goes and loads a webpage, finds all the links, follows all the links, loads those webpages, follows those links, and so on. And while it's following those links, it's obviously doing the passive scanning, and that's called a ZAP baseline scan. Okay.
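The spidering step described above — load a page, extract its links, follow them, repeat — can be sketched as a simple breadth-first crawl. This is a toy over an in-memory "site", not ZAP's spider:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags on one page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def spider(site, start):
    """Breadth-first crawl: visit a page, queue every link found on it.
    `site` maps a path to its HTML body (stand-in for real fetches)."""
    seen, queue = set(), [start]
    while queue:
        page = queue.pop(0)
        if page in seen or page not in site:
            continue
        seen.add(page)  # a real baseline scan would passively scan here
        parser = LinkExtractor()
        parser.feed(site[page])
        queue.extend(parser.links)
    return seen

site = {
    "/": '<a href="/about">about</a> <a href="/login">login</a>',
    "/about": '<a href="/">home</a>',
    "/login": "",
}
```

In the baseline scan, each page visited during this walk is also run through the passive rules.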
A: So maybe a better definition is that passive is actually just sitting in the middle and looking at the results, and the baseline is what actually includes the spidering, correct? Okay, so passive is just sitting in the middle, looking at the results and doing some analysis. It's not necessarily responsible for generating the new URLs or figuring out what URLs need to be scanned. Yeah.
B: Actually, I disagree with that, because typically when we talk about HTTP requests being safe, as an example, we mean requests that won't change state on the server side, and that typically limits them to operations like GET methods over HTTP. The thing about ZAP is that finding the attack surface also makes POST requests to forms, and that makes it unsafe. So I think in our documentation...
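The "safe request" distinction mentioned here comes from the HTTP specification: safe methods are not expected to change server-side state, which is why a spider that submits forms via POST is no longer safe. A one-line sketch:

```python
# Methods the HTTP spec (RFC 7231) defines as "safe": they are not
# expected to change state on the server. Anything else (POST, PUT,
# DELETE, PATCH, ...) may change state, so a crawl that submits forms
# via POST can no longer be considered safe.
SAFE_METHODS = {"GET", "HEAD", "OPTIONS", "TRACE"}

def is_safe(method: str) -> bool:
    return method.upper() in SAFE_METHODS
```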
B: So once the attack surface has been discovered, an active scan introduces more vulnerability definitions: for each URL and method discovered in the attack surface, they will target it and maliciously try to break it or discover vulnerabilities against that URL. So, for example, the path traversal rule might take the URL, deliberately chunk parts of the URL off, and try to make requests to find access to things it shouldn't otherwise have been able to access. So...
A: It's going to actually manipulate the request in a way that the spider did not intend, right? So a spider is going to pass in URL parameter strings, or whatever it found on a page, and the active scan is going to say: hey, I see what those variables are; I'm going to actually change those, and I'm going to try to be bad with those. Yeah.
B: Which means it actually makes new requests, okay? So whereas a passive scan will just, you know, pass on the requests you have made, an active scan will say: oh, I see you've made this request; I'm going to actually try and break that even further by making a hundred more requests, each changing the parameters and seeing what I can break. Okay, and you can configure the active scan to work out...
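That "one observed request, many new malicious variants" behavior can be sketched as parameter mutation. The payload list and URL below are purely illustrative (this is the general technique, not ZAP's actual path-traversal rule):

```python
from urllib.parse import urlsplit, urlunsplit

# Illustrative path-traversal style payloads; a real scanner ships a
# much larger, curated list per vulnerability class.
PAYLOADS = ["../etc/passwd", "..%2F..%2Fetc%2Fpasswd", "....//etc/passwd"]

def mutate(url, param):
    """From one request the spider observed, generate new requests the
    spider never intended, by overwriting one query parameter."""
    parts = urlsplit(url)
    for payload in PAYLOADS:
        query = "%s=%s" % (param, payload)
        yield urlunsplit((parts.scheme, parts.netloc, parts.path, query, ""))

variants = list(mutate("https://example.test/view?file=report.txt", "file"))
```

An active scan would then send each variant and inspect the responses for signs of a successful traversal.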
A: It's configurable to determine the level of requests and manipulation, that's right. All right, that's my understanding anyway. Cool. And then these built-in active and passive scan concepts are built into ZAP, and they have a set of rules associated with each of those, so we could take a look at the code shortly: it loads in a set of rules for active, or it loads in a set of rules for passive. Now, ZAP by itself does not have a concept of a full scan.
A: Right, but, for example, if you just set up the scanner out of the box, you know, with the include and then the URL, it's going to run a baseline, and that's defaulted to 60 seconds. So it's going to run for a minute and then it stops. That's my understanding. Okay, I thought otherwise, but we should figure that out. Okay, so my understanding is it runs for 60 seconds, and that's part of my question, and we can get into this.
B: I mean, we've essentially hidden away the difference, but under the covers it calls a different script. The big difference is in determining the attack surface: it doesn't spider. Well, I think it tries to spider as well, but predominantly it determines the attack surface by reading an API definition using the OpenAPI specification. That gives you all the URLs of your RESTful endpoints, which it then, I think, spiders.
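Deriving the attack surface from an API definition, rather than by spidering, amounts to enumerating every (method, path) pair the definition declares. A minimal sketch over a hand-written OpenAPI-style fragment (the spec content here is invented for illustration):

```python
# Hypothetical OpenAPI-style fragment: under "paths", each path maps
# HTTP methods to operation objects (details omitted here).
spec = {
    "paths": {
        "/users": {"get": {}, "post": {}},
        "/users/{id}": {"get": {}, "delete": {}},
    }
}

def attack_surface(spec):
    """Every (METHOD, path) pair declared in the definition becomes a
    target for the active scan, no crawling required."""
    targets = []
    for path, operations in spec["paths"].items():
        for method in operations:
            targets.append((method.upper(), path))
    return sorted(targets)
```

This is why an API scan can cover endpoints a spider would never find: nothing links to them from a page, but the definition lists them all.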
B: So traditionally, a lot of websites have a login gateway on their website. So obviously, if you run DAST on a site like that, you would only be testing the public part of the website, because DAST wouldn't be able to log in. You wouldn't be scanning any of the pages beyond the login gateway, so we're trying to allow people to log in. Now, on a typical website, most of them use JavaScript.
B: Okay, now for an API scan it's different, because you don't have a page where the user logs in. In an API scan, typically authentication is performed using some kind of authorization token, you know, a request header. So there's a slightly different process for authenticating API scans, but it's essentially the same thing: you're trying to get access to the endpoints.
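The two authentication styles contrasted above — submitting a login form on a website versus attaching a token header to every API request — can be sketched side by side. The field and header names here are common conventions, not anything prescribed by ZAP or the documentation being discussed:

```python
def form_login_payload(username, password):
    """Website scan: the scanner fills in and submits a login form,
    then carries the resulting session into the rest of the scan.
    Field names are illustrative; real forms vary."""
    return {"username": username, "password": password}

def api_request_headers(token):
    """API scan: there is no login page; instead an authorization
    token is attached to every request via a header."""
    return {"Authorization": "Bearer %s" % token}
```

Either way the goal is the same: every subsequent request the scanner makes is authenticated, so the protected part of the attack surface gets scanned too.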