From YouTube: Walkthrough of DAST
Description
This is a walkthrough of how the DAST repository and project are put together.
What I also did is I reimplemented GitLab's DAST when I started at GitLab about a year ago. We already had a tool, but it was a fork of some other project, and, yeah, that was not ideal: because it was a fork, we couldn't easily upgrade to new upstream versions and so on. So the reimplementation that I did uses a different mechanism to extend ZAP with the features that we want to add for GitLab. That's the reimplementation.
We are going to have a look at it today. So, my background is that I did a PhD in automated security testing, so I spent a lot of time looking at, you know, how to devise clever testing techniques, and at sort of academic problems, like detecting whether a test case worked; that's called the oracle problem. I also designed a lot of empirical studies that compare different dynamic testing tools. I'm not going too deep into this, but I'm linking the publications that I did here on Google Scholar; feel free to check them out, and if you find anything interesting there, we can chat about it.
Okay, cool, yeah, welcome everyone. It's very cool that we have so many people now working on DAST, and I think with that much engineering power we can really do something cool. So yeah, I guess I can start with a little bit of the basics of DAST, since many people said they didn't have too much exposure to DAST before. We are going to go through all the slides, and if you have questions, just interrupt me. All right, yeah: "GitLab 12.2, now with 100 percent more DAST". It's just a funny slide, nothing more.
If you compare that with, for example, static analysis: SAST, as the name says, looks at the source code. It analyzes the source code, builds an abstract syntax tree and this kind of stuff, but it does not necessarily execute the code. What is DAST doing? It's executing the code: by sending test cases to the running application and observing the behavior in response to each test case, the tool infers if there is some vulnerability in the tested application.
To give you an easy example: if we send an HTTP request that has some SQL injection payload, and the server-side application replies to it with a 500 error that says you had an error in your SQL syntax, then you have a very good indication that there is some SQL injection problem happening. So to get to this knowledge, we don't need to look at the source code of the application; we can infer it from the response that we got after sending a SQL injection payload.
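To make that inference concrete, here is a minimal sketch of such an error-based check. To be clear, this is not DAST's or ZAP's actual implementation; the target URL, parameter name, and error signatures are illustrative assumptions.

```python
# Minimal sketch of error-based SQL injection inference (illustrative only).
import requests

# Strings that commonly appear in database error pages.
SQL_ERROR_SIGNATURES = [
    "you have an error in your sql syntax",   # MySQL
    "unclosed quotation mark",                # SQL Server
    "pg::syntaxerror",                        # PostgreSQL
]

def looks_sql_injectable(url: str, param: str) -> bool:
    """Send a classic SQLi payload and check the response for DB error text."""
    response = requests.get(url, params={param: "' OR 1=1 --"}, timeout=10)
    body = response.text.lower()
    # A 500 plus a database error message is a strong (not certain) indicator.
    return response.status_code == 500 and any(
        sig in body for sig in SQL_ERROR_SIGNATURES
    )

print(looks_sql_injectable("http://example.test/search", "q"))
```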
Yes, but the way we propose it to be used is in combination with SAST, dependency scanning and so on, because, of course, DAST will not find all vulnerabilities. That's why it's complementary to the other approaches, SAST and so on, which together can give you good coverage of the vulnerabilities that might be in the source code.
I linked the documentation where you can read more about it. It also supports additional variables that you can specify, which are then used to start an authenticated scan; what that exactly means I'll go into in detail later on. If you set up DAST on your project, the pipeline might look something like this.
First you build, then you run your normal unit tests, as well as dependency scanning, SAST and so on, and after that you would spin up a review app. Once that is up, you can run DAST against that review app. So here you see again: DAST is running against a live application, and based on the behavior of that application it can tell you if there are vulnerabilities in the application.
If it found something, you will see it in the merge request, like in the screenshot: it complains about certain headers missing here, and some other vulnerabilities. You can also click on each of these findings, and it will open a modal dialog that tells you the exact page where the finding was found.
This is how it looks from the user perspective, but most of you are engineers, and we want to dive a little bit deeper into what DAST actually is under the hood. So DAST is built on top of OWASP ZAP. Let me maybe say a couple of words about ZAP: ZAP is a proxy that is used a lot for semi-automated security testing. Basically, what you do is you proxy the requests that you make from a browser through ZAP, and ZAP shows you exactly what is going over the wire at the HTTP level. That is very useful for playing around with different parameters, to see how the application reacts when you send certain parameters that you might not be able to set via the UI.
ZAP also has an automated mode where you can just point it at a certain URL, and then ZAP will start to test the application that it finds at that URL. Exactly this behavior is what we leverage in GitLab. So yeah, that is applicable to web applications, which means HTML and JavaScript and all that stuff. The repository (I linked it there; I guess most of you have already found your way there) is where this is implemented.
Maybe there might be some things that are not possible via the API, but I haven't run into these yet.
Thank you. All right then, let's move on to the two most significant features that we have added to ZAP so far: support for authenticated scans, and we also now report a little bit more detail on what the crawler found while it was running. Let me explain these two features a little bit more.
Why do we need authenticated scans? Most of an application is typically only accessible if you are signed in, if you are authenticated. Think about GitLab: if you're not authenticated, you only get to see a small part of GitLab.com; once you sign in, you have access to much more. So in order to test everything that is behind authentication, DAST needs to be able to authenticate, and the basic ZAP scripts didn't have any support for authentication.
B
So
this
is
what
we
added
as
a
feature.
The
way
that
works
is
you
specify
a
command
line,
parameter
or
environment
variables?
That
tells
that
which
username
and
password
to
use,
and
then
our
Python
scripts
will
use
selenium
and
Firefox
web
driver
to
to
actually
request
a
sign-in
page
to
fill
in
user
name
and
password
to
submit
this,
and
it
will
pass
the
session
cookie.
Then
that
is
returned
from
the
application
and
every
following
call
will
add
the
session
cookie.
That
means
the
older.
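As a rough sketch of that flow (assuming a plain username/password form; the field names, URLs, and credentials are placeholders, not DAST's real selectors):

```python
# Sketch of the sign-in flow: drive the login form with Selenium, then reuse
# the session cookie for all following requests. Placeholders throughout.
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("http://example.test/users/sign_in")

# Fill in the credentials and submit the form.
driver.find_element(By.NAME, "username").send_keys("testuser")
password_field = driver.find_element(By.NAME, "password")
password_field.send_keys("secret")
password_field.submit()

# Grab the session cookie the application returned...
cookies = {c["name"]: c["value"] for c in driver.get_cookies()}
driver.quit()

# ...and attach it to every following request, so the scan stays authenticated.
session = requests.Session()
session.cookies.update(cookies)
print(session.get("http://example.test/dashboard").status_code)
```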
You know, due to several reasons it might not have scanned everything, and as a user you want to know about that. That's why we added this feature, which actually tells you which URLs it scanned. For both features I linked some relevant code and issues where you can read up more.
Yeah, this is an example of how it looks when ZAP is reporting the URLs it actually scanned. The crawler here is called "spider", and it tells you things like "progress: 100". That is important because the crawler might not actually finish in time, because it has been running into a timeout, and if that's the case, then the coverage of your application won't be 100%, and we want users to notice that. If it was running into a timeout, it would also not say "finished" here but something else, and then the URLs it visited are listed under "results".
Yeah, so, scope: in its basic form, scope keeps the tests focused on the target. To really understand what that means, maybe I should first explain how crawling works, and then we come back to the question of why scopes are important. Okay, that's actually the next slide, so let me quickly stay with what crawling is, and then we come back to how scopes work with that.
So right now, DAST essentially works in two phases: the discovery phase, which is the crawling or spidering, and then the second phase, which is the actual testing phase. The crawler works like this: you give it a URL from where it starts crawling, and starting from that URL it will follow all the links it can find. So it will request that initial URL and parse out all the links it can find, or references to JavaScript sources, to CSS sources; everything that somehow looks like a URL it will extract, and then it will follow these links.
B
Yeah,
so
it
starts
basically
crawling
from
the
start
page
and
recursively.
It
will
go
into
all
ul
and
finds
it
for
all
the
pages
it
finds
it
stores
them,
and
it
also
looks
for
potential
input
parameters
like
think
of
it.
If
it's,
if
it's
in
form
that
can
submit
it,
it
will
remember
which
parameters
are
on
that
form
or
if
it,
if
it
requests
the
page
we
get.
It
will
also
remember
what
we
are
the
parameters
on
the
get
request
and
so
on.
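A toy version of that discovery loop might look like the following. This is only an illustration of the idea, not ZAP's spider, which is far more sophisticated.

```python
# Toy discovery phase: breadth-first link following with a visited set,
# remembering candidate input parameters (GET params and form fields).
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse, parse_qsl
from urllib.request import urlopen

class LinkAndFormParser(HTMLParser):
    """Collects link targets and form input names from one HTML page."""
    def __init__(self):
        super().__init__()
        self.links, self.form_params = [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("a", "link") and attrs.get("href"):
            self.links.append(attrs["href"])
        if tag == "script" and attrs.get("src"):
            self.links.append(attrs["src"])
        if tag == "input" and attrs.get("name"):
            # A form field is a potential injection point worth remembering.
            self.form_params.append(attrs["name"])

def crawl(start_url, limit=50):
    queue, seen, params = [start_url], set(), {}
    while queue and len(seen) < limit:
        url = queue.pop(0)
        if url in seen:          # cycle avoidance: never revisit a known URL
            continue
        seen.add(url)
        # Remember GET parameters seen on the URL itself.
        params[url] = [k for k, _ in parse_qsl(urlparse(url).query)]
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue
        parser = LinkAndFormParser()
        parser.feed(html)
        params[url] += parser.form_params
        # Note: no scope check here, so this happily wanders off the target
        # host. That is exactly the problem scopes solve, as discussed below.
        queue.extend(urljoin(url, link) for link in parser.links)
    return seen, params
```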
And then, finally, the crawl is going to run either until it runs into the timeout or until it's finished. This ties back into what I told you earlier about the progress and the state "finished": if it runs into the timeout, it won't be at 100%. By default, our baseline scan only crawls for one minute (everything beyond that hits the timeout), and the active scan doesn't have any timeout set. So that also means that, for very large applications, the crawler might run for a very long time.
So yeah, you're right: basically, scopes can have wildcards, and, if I remember correctly, you can say which hosts to include and also exclude certain sub-parts of your application. Right, you might only want to scan a certain path but not other paths within your application. This should also be possible to set via the API; I haven't done it yet, but I think that API call exists.
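For illustration, that presumably looks like this with the Python client. The context name and regexes are made-up examples, and, as said above, DAST does not wire this up yet.

```python
# Sketch of scoping via ZAP contexts: include/exclude URL patterns as regexes,
# then keep the spider inside that context.
from zapv2 import ZAPv2

zap = ZAPv2(apikey="changeme")
zap.context.new_context("gitlab-dast")

# Wildcard-style include/exclude is expressed as regexes over URLs:
zap.context.include_in_context("gitlab-dast", r"http://example\.test/app/.*")
zap.context.exclude_from_context("gitlab-dast", r"http://example\.test/app/admin/.*")

# The spider can then be told to stay within that context.
zap.spider.scan("http://example.test/app/", contextname="gitlab-dast")
```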
Yes, okay, yeah. I think, Victor (I don't know if this is one of the reasons you're asking, but it's particularly relevant): one of the things that we want to do in a future release is to have multiple URLs, so you could have url-one.com and url-two.com as a single scan. So I think the scope might be the area for that, if we pass in those URLs as a scope, as JSON or whatever format it takes, to kick off the scan. Yeah.
All right, so we have been talking about crawling. This is the first phase, basically the discovery phase, where our tool finds out what all the pages are and what the potential parameters are through which we can pass malicious inputs. The actual testing phase, where ZAP is discovering vulnerabilities, is the second phase.
How this exactly works depends on the scan mode. If we use the passive scan mode, it will just look at the pages that were stored during the crawling. All the HTTP communication during the crawling phase is recorded, and in a passive scan it just looks over these HTTP messages to see if there are some vulnerabilities. There are things that you can identify by doing this: for example, you can check whether form submissions, so POST requests, have a CSRF token or don't have a CSRF token.
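A toy passive check in that spirit could look like this. To be clear, this is not ZAP's actual passive-scan rule; it just illustrates checking recorded responses without sending any new traffic.

```python
# Toy passive check: scan recorded HTML responses for POST forms that lack a
# hidden CSRF-token field. Illustrative only.
from html.parser import HTMLParser

class CsrfFormCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_post_form = False
        self.has_token = False
        self.findings = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form" and attrs.get("method", "get").lower() == "post":
            self.in_post_form, self.has_token = True, False
        if self.in_post_form and tag == "input":
            name = (attrs.get("name") or "").lower()
            if "csrf" in name or "authenticity_token" in name:
                self.has_token = True

    def handle_endtag(self, tag):
        if tag == "form" and self.in_post_form:
            if not self.has_token:
                self.findings.append("POST form without CSRF token")
            self.in_post_form = False

# Run the check over every response body recorded during crawling:
recorded_bodies = ['<form method="post"><input name="comment"></form>']
for body in recorded_bodies:
    checker = CsrfFormCheck()
    checker.feed(body)
    print(checker.findings)
```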
So they are not tied directly to that, and it would probably even be possible to develop our own test heuristics and add them as add-ons. That's actually something that is pretty cool for me personally, because I'm looking for vulnerabilities, and based on the knowledge that we gain at GitLab, it would be cool to add some additional tests that can look for the vulnerabilities that we found. Good, thanks.
Regarding the first phase, the discovery phase, some problems we have been running into include insufficient support for different web technologies, like, for example, JavaScript. I mentioned that the crawler is actually fetching pages and parsing the content; well, the default crawler only parses HTML content. Now, what happens in JavaScript-heavy applications is that a lot of the links are only loaded when JavaScript is executed, so our crawler will only find them if it's executing JavaScript, and for DAST this is only the case if you pass in a certain parameter; it's not the default setting. And then I also found an issue, which is somewhat annoying, where it starts crawling not at the URL that you specify (the entry URL) but always at the root URL, and this sometimes leads to problems: if your application does not serve content at the root URL, the crawler won't find anything. I linked a related issue here, and yeah, we should look into that.
And then the last point that I was mentioning here is that crawling can really take a long time, since it's following all the links it can find, and if you run the crawler in an exhaustive fashion, where you don't set a timeout, it can run for a long time. We run it this way in the full scan, and the full scan can take up to a couple of hours.
"You have lost your authenticated context, please log in again": I think that should be the ultimate solution. The easier solution right now would be to just enforce that URLs that we already told it not to crawl are never hit, and then, as a follow-up, we should teach it how to re-authenticate in case it loses authentication.
The baseline scan typically takes around 5 minutes, because crawling is limited to 1 minute. You can tweak this; it is just a default value. But if you don't override the default, we only crawl for 1 minute, and then it will do the passive tests just based on the traffic that the crawler was seeing. And because it's not running that long, the baseline scan is suitable to be run in time-sensitive CI pipelines.
You can imagine that if you run a long-running DAST test in the pipeline, your developers are going to complain: you know, they want to merge their code, and why does it take two hours just for one little commit? So we need to be aware that time is very critical when we run in the context of CI pipelines.
A lot of the other points that I list on the baseline-scan slide I already mentioned. In comparison to that, the active scan is the one that takes a longer time, because there's no limit for the crawler, and for all the parameters it identified, it actually sends HTTP requests with malicious payloads. So where I see active scans applicable is more in a scheduled pipeline: one that is not run for every commit that you're pushing, but that you run, for example, nightly or at 12-hour intervals or something like that.
So we've recently been talking about strategies for running incremental scans for static analysis, and I'm curious: do you see any strategy for moving DAST closer to incremental scans?

Oh yeah, absolutely.
I think that's a great point, and I'm talking about this a little bit on one of the last slides as a future vision: if we could add incremental scans for DAST, that would be awesome, and I think it would also help a lot with cutting down the runtime and with focusing the time we're spending with DAST on really testing the changes that have been introduced by a recent commit.
So that would be an awesome feature, and as far as I'm aware, the other tools that are out there don't really do that very well, or at all. Other DAST tools out there might be better at, you know, coming up with tests for various vulnerabilities and so on, but in the context of CI/CD I think we have a chance here to do what you were saying: to actually make DAST useful in the CI/CD context by doing incremental scans and focusing on introduced functionality.
Yes, so right now we only test web UIs, but we also have an issue where we talk about adding support for testing APIs; you can have a look, I linked the issue on the last slide. And here is my personal future vision of where I could see us adding great value to DAST. The first one is related to what Lucas was mentioning, which is incremental scans.
Right now, how it works is that on every commit we crawl the entire site over and over again: we start again from the entry URL and we try to discover the entire site structure. I think we could save a lot of time if we were a little bit smarter about what to scan and did some kind of incremental scan: only scan the part of the application that has been affected by the new commits.
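Purely as a design sketch of that vision (nothing like this exists in DAST today), persisted state plus a cheap change signal might look like this; hashing page content is only one possible approximation of "affected by a commit".

```python
# Hypothetical incremental-scan selection: persist the discovered site
# structure between runs and rescan only URLs whose content changed.
import hashlib, json, pathlib
import urllib.request

STATE = pathlib.Path("crawl_state.json")

def page_hash(url):
    return hashlib.sha256(urllib.request.urlopen(url, timeout=5).read()).hexdigest()

def urls_to_rescan(discovered_urls):
    previous = json.loads(STATE.read_text()) if STATE.exists() else {}
    current = {url: page_hash(url) for url in discovered_urls}
    STATE.write_text(json.dumps(current))   # persist state for the next run
    # Only scan pages that are new or whose content changed since last run.
    return [u for u, h in current.items() if previous.get(u) != h]

print(urls_to_rescan(["http://example.test/", "http://example.test/about"]))
```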
So for that, we would need to infer what has been affected by a commit, and we would also need to persist state between the runs and pass it from one run to the next. Another area where I see we could make really big improvements for DAST is the user experience once we actually present the list of findings. Right now it's just a flat list of findings that points you to the URL of the review app where whatever it was happened.
Okay, thank you. You know, I'm just curious about the crawler; I just wanted to ask: how does the crawler avoid cycles? Is it just that whenever it goes back to the same URL, it won't crawl the same page again? And, related to that: if you have second-order vulnerabilities in your application, this basically implies that a crawler that doesn't visit the same page twice cannot detect those kinds of vulnerabilities, I suppose.
Yeah, these are all very good questions. So, the first question, cycle detection: it has some logic to detect loops based on whether it has seen a URL before. But then, of course, you know, the same content can be served under dynamic site paths, right? So it would keep requesting new URLs but keep getting the same content. So in theory I think it's possible that it runs into loops, but I haven't seen that much yet; still, we should keep that in mind. And then the other question that you brought up was about second-order vulnerabilities.
So, just a little bit of background on second-order vulnerabilities: what it means is basically that first you need to submit the payload at one part of the application, but it's not directly executed; you need to request a different part of the application to actually get the payload that you placed with the first request executed.
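As an illustration of that two-step pattern (the endpoints and the stored-XSS payload here are hypothetical):

```python
# Toy illustration of a second-order (stored) vulnerability: the payload is
# planted via one endpoint and only triggers when a different page renders it.
import requests

MARKER = "<script>alert('2nd-order')</script>"
session = requests.Session()

# Step 1: plant the payload. The response here looks perfectly harmless.
session.post("http://example.test/profile", data={"display_name": MARKER})

# Step 2: the vulnerability only shows up when another part of the app
# echoes the stored value back without escaping it.
listing = session.get("http://example.test/users")
if MARKER in listing.text:
    print("stored (second-order) XSS: payload from /profile executed on /users")
```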
B
First,
it
was
called
directly
because
it
we
were
flocking
the
project
and
we
were
editing
the
the
source
files,
so
we
were
directly
calling
our
modified
source
file
when
I
really
meant
that
task.
My
goal
was
to
do
it
in
a
non-breaking
fashion.
That
means
I
I
kept
calling
the
the
same
source
file,
but
this
time
we
went
modifying
the
source
file.
We're using the full upstream project, and we're using two ways to extend it. One way is that we have a wrapper script around the upstream project: for example, the different scan modes that we are providing have different entry scripts in ZAP, so based on whatever scan mode we want to run, we call a different entry script. We also do some things like checking that the application has already started, right.
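A condensed sketch of that wrapper approach: the entry-script names match upstream ZAP's, but the mode selection and readiness check here are simplified stand-ins for what DAST actually does.

```python
# Sketch of a wrapper: pick the upstream entry script for the requested scan
# mode and wait for the target application to come up before scanning.
import subprocess, sys, time
import urllib.request

ENTRY_SCRIPTS = {
    "baseline": "zap-baseline.py",   # passive scan, spider capped at ~1 minute
    "full": "zap-full-scan.py",      # active scan, no crawl timeout by default
}

def wait_until_up(url, attempts=30):
    """Check that the application has already started before scanning it."""
    for _ in range(attempts):
        try:
            urllib.request.urlopen(url, timeout=5)
            return True
        except OSError:
            time.sleep(5)
    return False

def run_dast(mode, target):
    if not wait_until_up(target):
        sys.exit(f"target {target} never came up")
    # Call the unmodified upstream script instead of a patched fork.
    subprocess.run([ENTRY_SCRIPTS[mode], "-t", target], check=False)

run_dast("baseline", "http://example.test/")
```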
I have a question about mitigating the issues related to JavaScript. Have you considered moving away from ZAP, or maybe building some kind of scaffolding on our side, on the GitLab side, to be able to swap the scanners, the scanning tools, under this scaffold? I mean a common interface to the DAST scanning tools, to try and leverage some more JavaScript-friendly scanning tools than ZAP?

Absolutely, yeah.
What I found in the source code is that the timeout, the full-scan, and the website environment variables are used, I believe, in the analyze script, but for the DAST username and password I couldn't find any reference to those in the analyzer or in the Python scripts. You can pass them in as long-form command-line parameters, but it won't pull them from the environment variables, at least from what I saw in the source code. Yeah.
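The fix implied here could look roughly like this. The variable names follow the DAST_* convention from the talk, but this wiring is an assumption, not the current analyzer code.

```python
# Sketch: accept credentials as long-form CLI flags, falling back to the
# DAST_USERNAME / DAST_PASSWORD environment variables when flags are absent.
import argparse, os

parser = argparse.ArgumentParser()
parser.add_argument("--auth-username", default=os.environ.get("DAST_USERNAME"))
parser.add_argument("--auth-password", default=os.environ.get("DAST_PASSWORD"))
args = parser.parse_args()

if args.auth_username and args.auth_password:
    print("running an authenticated scan as", args.auth_username)
else:
    print("no credentials supplied; running unauthenticated")
```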
So if we would add a plugin for that at least, we'd have a feature with a clear purpose right here, especially since SSRF now gets a lot of attention and it might even be included in the OWASP Top 10. So yeah, absolutely, I think the AppSec team can give good input here, because they have a good understanding of which vulnerabilities are very common or trendy right now, and based on this we could develop add-ons for DAST for these kinds of vulnerabilities.
Okay, if there are no other questions, I would say we are at time. Yeah, I'm really looking forward to working with you and to focusing a bit more on adding advanced testing heuristics, like Ethan was just mentioning. And if you have any questions regarding what I've been doing in the past, just, you know, reach out.