From YouTube: Fuzzing Report Schema Discussion
Description
This is a recording of a discussion about the structure and architecture of the fuzzing report schema used in GitLab.
A
So one of the benefits — one of the things that we can certainly do if we piggyback off the schema — is that we can mark certain fields as optional. Obviously, if we start getting a schema where the entire thing is optional, it's kind of like, well, what's the point? So there's some kind of tipping point there.
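The optionality being discussed maps directly onto JSON Schema's `required` array: anything listed there must be present, everything else is optional. A minimal sketch, with hypothetical field names rather than the actual GitLab report schema:

```python
# Minimal sketch: JSON Schema marks fields optional by omission from
# "required". Field names here are illustrative, not the real schema.
report_schema = {
    "type": "object",
    "required": ["vulnerabilities"],        # must always be present
    "properties": {
        "vulnerabilities": {"type": "array"},
        "remediations": {"type": "array"},  # optional: not in "required"
        "scan": {"type": "object"},         # optional
    },
}

def missing_required(schema, document):
    """Return the required keys that the document does not supply."""
    return [key for key in schema.get("required", []) if key not in document]
```

A document that omits the optional fields still satisfies the `required` list; the tipping point in the discussion is how far that list can shrink before the schema stops constraining anything.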
B
The thing that comes to mind about sharing with DAST is that I feel like they have a very stable platform that they're unlikely to change extensively in the future, whereas ours is probably a little different. As we integrate and start expanding the tool's features, it's more likely that our version of the schema will need to be iterated on.
A
The upside to piggybacking off one of the existing schemas is that anything in those schemas that's exposed in our Rails application we basically get for free. So if we populate certain fields, they're going to show up in Rails, no problem; anything that lives only in our own schema is not going to show up in Rails without additional work. I guess that's true for anything we add to the DAST schema, too: if we add a new field, it's not going to show up until someone goes in and adds it to Rails.
B
I think you hit a great point about reusability of UI components. So I guess the next question is: do we expect to diverge the UI, or keep it similar to what DAST is doing? If we're diverging the UI, then we're kind of in a position where diverging the schema probably fits in with that plan.
B
You
know
also
I
wonder
like
from
a
UI
perspective.
Even
if
we
don't
have
the
same
schema
you
know,
does
that
I
mean
even
if
we
have
our
ARM
processor
for
the
report,
could
we
not
just
populate
fields
that
translate
perhaps
there's
a
mapping
that
we
have
that
could
fulfill
the
UI
component
requirements
right.
I
know.
I,
definitely
see
your
point,
though,
about
reusing
the
the
desk
stuff.
So.
C
It's an area I hadn't really considered too much. I was focusing a lot more on the similarities between coverage-based fuzzing and API fuzzing, because then we would reuse similar components; we have a similar type of goal with what we're collecting in those reports. But I think there are definitely things that we should reuse and carry over, either between existing schemas or between the two different types of fuzzers.
A
The way that I kind of connected these is: API fuzzing is very close to DAST, so that's one schema, and the coverage-guided fuzzing is a different schema. If anything, the coverage-guided fuzzing might be closer to our SAST schema, so we'd potentially extend the SAST schema to incorporate things for coverage-guided fuzzing.
B
So probably the first thing we want to do on the DAST question — using it versus extending it versus having our own — is to have me sit down and do a rough look at what fields translate to our tool, what fields we'd leave out, and what we might be adding to it in the immediate future, and just get a handle on how well it works. I think that's the main thing I wanted.
A
Yeah, I mean, I think if we try to build off DAST, that's going to make the Peach integration a lot easier. And then, like you're saying, what we have to do is look at DAST and say: okay, this is going to cover 80% of the fields that Peach provides — where are the other 20%, those extra fields, going to go? And then we can add a merge request to add those.
D
So one idea I want to float out there and see what you all think: rather than extending completely off of one schema, what if we broke portions of the schema down into blocks and then included those in multiple places? We don't have to strictly extend DAST; we could inherit one portion, which is also in DAST and in coverage and SAST, what have you. That would let us mix and match the schema definition based on the use cases that those blocks support, rather than trying to cram everything into one schema.
A
Sam, I think that's a great point. I haven't looked technically at how all this is put together, but my understanding is that's actually how some of the schemas are working today. DAST and SAST are incorporating the vulnerabilities definition, so we would do the same thing — we would just incorporate the definition. It does reuse blocks exactly like you're saying; I don't know what they call them, maybe a definition, but yeah.
C
One of the really big benefits I think we get out of that: if we had common definitions for items — the first example that came to mind was a location within the project — then we could use that and have common UI components that know how to work on that field, and it's not specific to one report. I'm loving that idea; that makes a lot of sense to me. Yeah.
A
So that's a good point. If you're reusing the location — perhaps in the case of DAST or API fuzzing it's an HTTP URL, or in the case of code it's a line number — and we're reusing that across our tools, with a line number and a path to that code, theoretically you can click on that and it takes you to that part of the repo.
C
I definitely think we should reuse common definitions — whichever way we go, I definitely think we need to do that. I'm only kind of familiar with the SAST one, but would we need to add extra fields to it?
A
Yeah, it gets back to — I'm looking at the SAST schema: we've got a start line, end line, class, method, things like that. If we're able to use that, and the fuzzing knows that information, then if the vulnerability dashboard or the security dashboard links that information, we'd get some of that functionality.
C
Yeah, so we would keep a location block, right. But let's say we don't have symbols on the target that we're fuzzing: we're not going to have source locations, but we may know it's this offset in this module. To me, that is something I would want to add in as the location of the vulnerability — whatever information we have, even if it's not line- or source-based. And to me, that's where the difference between the basic location, which is source-based, and the other types comes in.
D
Then what we could do is have the UIs display: if a source code line is present, display that; else display a hex address. But if we build the UIs around the assumption that this data will always be source files and lines, and then we give it a hex address, that might look funky in the UI, because it's going to be displaying a different type of data than it expected.
C
Let's see, I'm looking up the link I added in the MR that I made for — I guess my first stab at a fuzzing schema. In JSON Schema you can have optional different types, where a location object needs to be one of these types of objects. That's the way I was going with it, with the goal in mind that if the location.type field equals source, then display like this; if not, display like that. Trying to find the exact — okay, I got it.
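The "one of these types of objects" idea maps onto JSON Schema's `oneOf`. A sketch, with the schema expressed as a Python dict and a deliberately tiny matcher standing in for a real validator; the field names (`file`, `start_line`, `module`, `offset`) are illustrative, not the schema from the linked MR:

```python
# Sketch of a location that must be exactly one of two shapes:
# a source-based location (SAST-style) or a module+offset location.
location_schema = {
    "oneOf": [
        {   # source-based location
            "type": "object",
            "required": ["type", "file", "start_line"],
            "properties": {
                "type": {"const": "source"},
                "file": {"type": "string"},
                "start_line": {"type": "integer"},
            },
        },
        {   # module + offset, for targets without symbols
            "type": "object",
            "required": ["type", "module", "offset"],
            "properties": {
                "type": {"const": "module"},
                "module": {"type": "string"},
                "offset": {"type": "integer"},
            },
        },
    ]
}

def matches(branch, obj):
    """Tiny subset of JSON Schema matching: required keys present,
    and any 'const' properties equal."""
    for key in branch.get("required", []):
        if key not in obj:
            return False
    for key, rule in branch["properties"].items():
        if "const" in rule and obj.get(key) != rule["const"]:
            return False
    return True

def one_of_matches(schema, obj):
    """oneOf succeeds when exactly one branch matches."""
    return sum(matches(b, obj) for b in schema["oneOf"]) == 1
```

A real implementation would hand this to a JSON Schema validator; the point of the sketch is that the `type` field's `const` makes the branches mutually exclusive.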
C
Here we go — this one. I'll drop it in the chat; I'm not sure where to put it in the document right now.
C
Well, it would piggyback off of the current SAST format. How the UI implements it, of course, would be up to them, but they could say: if the location.type field doesn't exist, then assume it's source; otherwise use the type and know it's a module location. Oh — the code that I linked also has an endpoint location, but we're not doing that in the same schema, so ignore that.
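The fallback rule just described — missing `type` means a legacy source location — can be sketched as a small UI-side dispatch. Field names are the same hypothetical ones used above:

```python
# Sketch of the UI-side dispatch: if location.type is absent, assume a
# source location for backwards compatibility with SAST-style reports;
# otherwise switch on the declared type.
def describe_location(location):
    kind = location.get("type", "source")  # missing type => source
    if kind == "source":
        return f"{location['file']}:{location['start_line']}"
    if kind == "module":
        return f"{location['module']}+0x{location['offset']:x}"
    return "unknown location type"
```

This is the "switch" shape: old reports with no `type` render unchanged, and new location kinds only need a new branch.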
C
I did want to say that, with DAST and API fuzzing, I think any improvements we add to the DAST schema because of API fuzzing will only enhance the DAST schema itself, and we could probably use them with actual DAST results. I don't think that same type of relationship exists between extending the SAST schema and coverage-guided fuzzing, because we are running a live program; you will never get those types of values from SAST.
A
Yeah, I think that makes sense. And Sam, you had a question this morning, I think, about HAR files and whether DAST is using HAR files. Some of the same stuff that we're looking at on the API fuzzing side is stuff that we can bring over as well. It's not in the schema, but yeah, I agree, James — each one has kind of advanced the other tool.
C
No, it is not pulling in the SAST blocks. Let's see — so at the top of the document there are the existing schemas, and I'm going to add the common schema. All of these specific schema types inherit from the common schema.
C
This is the one that all of them extend from, and this is where you have information about the analyzer type and name. And if you want to see the full expanded versions, without the includes, go to the dist folder instead of source and you'll see the full schema that each of them uses. So this is the full SAST schema, with everything.
C
Exactly, and inheriting from SAST also means any changes to the SAST schema will automatically be picked up by the fuzzing one, with whatever pros and cons that brings — something for us to remember. So all I added here was the location object, a stack trace object, and the target, since a result will be specific to a target. We had talked about how you could build multiple targets and have different jobs to fuzz those targets.
C
Exactly, which is one reason why I really like pointing out these common definitions, because I think they are used in other places, like dependency scanning — or I thought I saw it somewhere else; there were also other location objects that were custom-added. So I think we definitely should pull these out, and then to me it makes sense to add this type field and have a switch like this, or you could say a generic location.
C
Ignore these parts — this is specific to JSON Schema, saying "I'm the file location object definition." This is metadata about the definition itself. So this type is not a field — this is a description — and these properties contain the actual fields. I actually really hate working directly in JSON Schema.
C
Okay, so we would go about it by — let's see — I think we could have a common-definitions JSON where all it does is define things, in definitions blocks. So we would include the common definitions, just like we do here, into here, and then reference them in this part. So here, where we say the location is a ref to definitions/location, this would be sourced from some common-definitions JSON file instead.
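The shared-definitions move can be sketched end to end: a definitions-only file, a schema that points into it with `$ref`, and a toy resolver. The file name `common-definitions.json` and the definition contents are hypothetical; real JSON Schema tooling resolves `$ref` for you:

```python
# A definitions-only document: it declares shared blocks, nothing else.
common_definitions = {
    "definitions": {
        "location": {
            "type": "object",
            "properties": {"file": {"type": "string"}},
        },
    }
}

# A consuming schema that references the shared block instead of
# carrying its own local copy.
fuzzing_schema = {
    "properties": {
        "location": {"$ref": "common-definitions.json#/definitions/location"},
    }
}

def resolve(ref, files):
    """Follow a 'file.json#/path/to/def' reference into loaded files.
    Toy resolver for the sketch; assumes the ref is well-formed."""
    filename, _, pointer = ref.partition("#/")
    node = files[filename]
    for part in pointer.split("/"):
        node = node[part]
    return node
```

Any number of schemas (SAST, DAST, coverage-guided fuzzing) could point at the same `#/definitions/location`, which is the mix-and-match property being discussed.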
A
Yeah, so at the very least we could do that with the SAST schema and it would have no impact. We could actually just take line 20 through — whatever it is, 46, or somewhere around there — and pull that into a shared library. That should have no impact. And then the next question is: okay, there's the definition that starts getting defined on line 20 on the left, and then the definition you have on line 40.
C
That
was
when
I
was
thinking.
This
might
be
used
to
cover
API
fuzzing
as
well,
so
ignore
the
endpoint
location.
Okay,.
C
Definitely, okay. So if you look at Jenny's demo, where the crash state showed module plus offset — it wasn't line numbers — that's exactly what this is: it's a specific module. The type equals module, the module is the name of the module, and then there's an offset into the module. And it's relative, not based on the specific address in memory. Cool.
C
So it would be module definitions or source, and if we knew how to work with either of them, it would do exactly what you were saying: you could look at the stack trace and jump directly to a source location in the project, or it would just show you the module plus offset. And their target was just a string with the name of the target — maybe we should change it to some identifier. I'm thinking this would just be the name of the binary it was run with, right.
A
Because I think that should be doable. Hopefully the SAST team is okay with that, and we should be able to get that through pretty quickly, and then take the merge request that we had on the right-hand side of the screen and update that accordingly. And then once we have that, I think we can just get Jenny's feedback and see if that takes care of everything that he needs to output.
C
Coverage-based — I think that's just the name we're going for with that type. Okay.
A
Yeah, so Sam, one of the things that came out of our brainstorming yesterday is that we're going to version these schemas appropriately, with a per-schema version. What we have to do is talk to the Threat Insights team, because they're running the dashboard, and I think they will ultimately decide what versions of the schema they're going to drop support for. The number of schema versions we keep is kind of arbitrary.
A
And it's going to be the same problem internally as it is externally. So SAST may be producing schema version 2, and at some point Threat Insights says: hey, we can't support schema version 2 anymore, and SAST will have to upgrade to version 3 or whatever. That's the same communication we'll do internally as we will externally. Yeah.
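The handshake being described is simple to state in code: each report declares its schema version, and the consumer keeps the set of versions it still ingests. A sketch with invented version numbers:

```python
# Sketch of the version handshake: the dashboard (consumer) maintains a
# supported set; producers must emit a version inside it or be rejected.
# Version strings here are invented for the example.
SUPPORTED_VERSIONS = {"3.0", "3.1"}  # e.g. after dropping support for 2.x

def can_ingest(report):
    """True if the consumer still supports this report's schema version."""
    return report.get("version") in SUPPORTED_VERSIONS
```

Dropping a version is then just shrinking `SUPPORTED_VERSIONS` — which is exactly why the deprecation window matters once external producers exist.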
D
I
mean
it's
gonna,
be
tricky.
We're
gonna
have
to
figure
out
what
those
timelines
look
like.
Cuz
it
get
lab,
we
can
coordinate
internally.
Pretty
fast
I
mean
it's
just
a
matter
of
getting
on
an
issue,
but
once
we
have
external
people
in
green
I'm
thinking,
probably
at
least
six
months
at
a
minimum,
that's
gonna
be
the
kind
of
timelines
we're
talking
about
nation
yeah,
but.
A
Frankly,
most
of
that
work
is
is
gonna
fall
on
the
threat,
insights
team
because
we're
gonna
need
to
just
output.
Whatever
new
data
we
have
like,
we
are
for
fuzzing
or
for
whatever,
and
if
we
need
to
add
a
new
field,
we'll
have
in
a
field
and
the
threat
insights
team
will
have
to
figure
out
how
to
basically
inject
old
values
or
inject
values
into
old
schemas.
If
they
do
that.
A
C
I did want to bring this up: I kept this schema very minimal. When I've written fuzzing frameworks and things to collect crashes before, and made UIs for them, there's a lot of other information that you do want to have, such as the crashing instruction itself and all the registers. That way you don't have to reproduce the crash locally to tell exactly how it's crashing.
C
It's, say, x86 assembly instructions, and you'll have a list of registers, possibly surrounding code snippets. Those are all things that I've added in the past. For the MVC we probably don't need it, but Jenny may already be capturing it — I'm not sure — and it's not that hard to capture; it's part of libFuzzer or Go.
A
How
much
should
the
the
thing
that
we
haven't
done
yet-
and
you
know
we're
kind
of
on
the
cusp
of
having
this
conversation
of
like,
for
example,
like
our
files
on
API
fuzzing
or
some
of
this
register?
Like
that's
a
lot
of
data
and
the
question
is
it
should
that
live
in
this
JSON
document,
or
should
that
live
as
a
link
to
to
that
data?
That's.
A
I
think
kind
of
my
gut
reaction
is
to
figure
out
a
way
to
do
this
extensively,
which
would
be
to
figure
out
a
way
to
add
basically
artifacts
if
you
will
inside
the
JSON,
so
that
we
could
do
this
for
any
tool
and
we
don't
have
to
come
back
and
make
a
lot
of
changes
to
the
schema.
So,
for
example,
there
might
be
something
like
supporting
info
or
whatever,
and
it
has
a
link
to
a
file,
and
that
could
be
the
case.
A
If
you
had
that
on
the
fuzzing,
it
could
be,
you
know
your
register
or
memory
dump
or
whatever
it
is,
and
that
way
we
don't
have
to
figure
out
how
to
to
have
all
that
data
in
the
schema
and
then
that's
a
generally
extensible
format,
assets
and
basically
it's
a
name,
a
file,
name
and
I.
Don't
know
maybe
some
other
metadata
well.
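The "supporting info" idea — name, file name, maybe some metadata — can be sketched as a small block attached to a crash report. Every field name here is hypothetical; the point is that large payloads stay outside the JSON and are only linked:

```python
# Sketch of a generic artifact block: rather than embedding large
# payloads (HAR files, register dumps) in the report JSON, attach a
# named link plus a little optional metadata. All names are invented.
crash_report = {
    "description": "heap-buffer-overflow in parse_header",
    "supporting_info": [
        {"name": "register dump",
         "file": "artifacts/crash-0001-registers.txt"},
        {"name": "memory dump",
         "file": "artifacts/crash-0001-memory.bin",
         "content_type": "application/octet-stream"},
    ],
}

def artifact_names(report):
    """List the attached artifacts; any tool can add entries without
    schema changes beyond this one block."""
    return [item["name"] for item in report.get("supporting_info", [])]
```

Because the block is a homogeneous list, a new tool adding, say, a coverage map needs no schema change — just another list entry.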
C
I think it could be useful for me to add this into the issue about displaying fuzzing results — I forgot about it until right now. This was a project that I worked on a long time ago. It uses a CLI tool, it's a plug-in framework, and this is specifically viewing a specific crash from the fuzzing framework. Everything has an ID and a severity, or exploitability, rating — and again, my intention with this wasn't so much how severe it is.
C
It's more that I want to find exploitable bugs so I can exploit them. But this is where I have a crash instruction — the crashing module would have gone here — the registers are on the right here, and on the left are the instructions surrounding where the crash actually happened.
A
And then once you have that, I think we could probably have a conversation — you and Jenny can have a conversation — just to make sure that everything that he's displaying today matches with this schema, and if he signs off on that schema, then I think we're good to go on the fuzzing schema. Awesome.