Description
An initial technical walk-through of Fuzzit and a discussion of how it could integrate with GitLab.
A
Great, so I think we have about an hour and 20 minutes, yep.
A
So I'll go a little bit over what we discussed, and I have an idea for the first — really, even a version zero — of the integration that we discussed, and how to implement it without any changes to how the current GitLab infrastructure works, and even without the current merge request. So we'll have very basic fuzzing runs on GitLab.
C
That's perfect, yeah, and I think that's exactly — I think later next week Sam's gonna walk us through kind of the vision and the different aspects of how he sees the product growing, so I think continuing the conversation, and kind of your thought process, in that meeting will be really great. So.
C
So I don't know if you guys have seen — there's a channel called "office today", and it's photos of everyone's office, of what it looks like. There are some really clever people: they've got a desk in their bathroom; there are some people that are out on patios overlooking beautiful scenery.
C
And I think one of the areas that I'm interested in zooming in on is the database structure, and, you know, what is being stored right now — it's in your NoSQL database. So I'm interested in knowing all the different fields, because that's what we'll want to make sure that we design properly. Yeah.
A
Okay, I'll discuss the two main workflows the developer uses for continuous fuzzing. The first workflow is, as I said, essentially the fuzzing workflow, and the second one is the regression. I think I talked about it a little bit — sorry, I talked about this in the talk, but I posted a picture, I think, in the channel, and then.
A
So
it's
also
mentioned
there
it's
talking
in
those
days.
So
essentially
the
knots
are
it's
it's
on
this
slide,
one
second
yeah,
so
two
were
close.
So
the
first
workflow,
which
is
the
fuzzing
workflow,
is
let's
say
we
want
the
master
branch.
We
usually
choose
one
or
two
branch.
We
can
also
choose
multiple,
but
just
for
the
sake
of
the
example,
let's
say
we
want
our
master
branch
to
be,
let's
say
fast,
okay,
so
every
time
a
developer
will
push
new
code
to
to
master.
A
So of course we will have to save those crashes, so the developer will be able to reproduce them. And all of those fuzzers — because they are coverage-guided fuzzers, like the ones we are talking about, like AFL or libFuzzer — they generate a corpus. The corpus consists of test cases that help those fuzz tests advance and cover more of the code base, so it essentially just automatically generates thousands of very, very good test cases.
A
Automatically — the fuzzer usually takes care of that, okay. It tries to minimize each test case to the minimal test case possible that reaches the same coverage in the program. So I would say you can have a lot of test cases, each test case is small, and maybe everything together — I need to check the exact numbers, but I think usually it's kilobytes, and maybe something around a small megabyte.
A
In GitLab — we'll just mention it now and then we'll go back to that — there is a jobs API. There is a job token that you have in every CI job, and you can store — you can essentially, of course, upload the artifacts, and also you can download the artifact that you want.
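The artifact mechanics being described can be sketched as a `.gitlab-ci.yml` fragment. This is a hypothetical sketch: the job name `fuzz` and the corpus packaging script are assumptions, while `CI_JOB_TOKEN`, `CI_API_V4_URL`, and the `artifacts:` keyword are standard GitLab CI features.

```yaml
# Hypothetical sketch: persist the fuzzing corpus as a job artifact and
# fetch the previous one through the jobs API using the per-job token.
fuzz:
  stage: test
  script:
    # Fetch the corpus produced by the last run on master (may 404 on the first run).
    - 'curl --location --output corpus.zip
         --header "JOB-TOKEN: $CI_JOB_TOKEN"
         "$CI_API_V4_URL/projects/$CI_PROJECT_ID/jobs/artifacts/master/download?job=fuzz" || true'
    - ./run-fuzzing.sh   # illustrative: unpack corpus.zip, fuzz, repack
  artifacts:
    paths:
      - corpus.zip
```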
C
Yeah, the artifact is created after a job is run, exactly like you're saying. So a job is run, we can save the corpus, and then I think the question I have — and James, maybe you've got some more insight on this — is: how do we take that from the job and then persist it, so that we can feed it in? You know, if someone runs that job again, we want to get that as the input.
D
So if I'm working on my own branch, we'd have to decide: do we always pull from the master branch, or do we only use the fuzzing-corpus artifacts from your current branch, or maybe, if one doesn't exist, then you pull from the master branch and use that for your current branch? Yeah, so that would totally work. We would just have to make sure they don't expire, just in case the time between fuzz sessions is longer than the default expiration.
D
That's just how it's designed. So I think, by default, if you say "fetch me the latest artifacts for job X", it uses the default branch. Let's see — I may be wrong on that: either there is a default, and it's the default branch, or you always have to specify the branch to fetch the latest artifacts from. Okay.
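The branch-with-fallback behavior being discussed could be sketched like this. The job name `fuzz` is illustrative, and whether a plain job token is sufficient for this endpoint should be verified against the GitLab docs.

```yaml
# Hypothetical sketch: try the current branch's latest corpus artifact first,
# then fall back to master if none exists yet.
.fetch-corpus:
  script:
    - |
      for ref in "$CI_COMMIT_REF_NAME" master; do
        curl --fail --location --output corpus.zip \
          --header "JOB-TOKEN: $CI_JOB_TOKEN" \
          "$CI_API_V4_URL/projects/$CI_PROJECT_ID/jobs/artifacts/$ref/download?job=fuzz" \
          && break
      done || echo "no existing corpus; starting fresh"
```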
B
Yeah, I'll post a link to the docs for the artifacts as well, so a couple of the questions might be answered there. One of the points there, Roy, that jumps out at me is that the artifact expiration timer is defaulted to 30 days, so this would probably be a between-fuzzing-jobs issue. But, you know, as long as we're pushing the corpus to the latest build, I think we'd be good for a while, yeah.
A
Cool, so yeah, I think it should be pretty good, actually, for the first version. I mean, there are some things — maybe I would like more flexibility with this API, like, for example, somehow specifying my own storage which is not connected to a job, maybe providing my own bucket or something like that. But for that use case it might work out of the box. Yep.
C
Yeah, and things like a seed corpus might be something that we can package up as part of GitLab — if there are some standard seed corpuses that we can package up as part of GitLab, yeah. But it sounds like we've got some infrastructure to handle what we need to do for at least the initial release. Yeah.
A
So yeah, that's the first of the two workflows; the second is the regression. The fuzzing workflow, I think, is pretty much the standard workflow, essentially — for example, OSS-Fuzz.
A
They support it too, I think — yeah, so OSS-Fuzz supports this workflow as well. And a flow that we also added — one of our users requested it, I think, or maybe we introduced it and then they said they were very happy with it — is the regression workflow. So, for every pull request.
A
For
every
pool
request
you
you
run
in
line
in
the
CI,
so
you
you
utilize
the
standard,
CI
infrastructure,
you
download
the
corpus.
So
essentially
you
download
the
test
case
that
doesn't
crash
your
your
target
and
maybe
crashes
that
you
already
fixed
you
download
them
into
the
CI,
and
then
you
run.
Those
first
asks
for
circuits
with
with
this,
with
with
those
test
cases,
so
you
will
be
able
to
check
like
to
catch
new
bugs
or
you
will
see.
Oh
there
was
this
crash
that
I
already
fixed
them.
A
— in master, which is obviously very good. So I can show a small example of how it looks — how it looks in Travis, I suppose, as it is not in GitLab CI yet. Fuzzit had integrations with a lot of other CIs — we have an example with Circle CI or with Travis CI — but essentially any CI will work, because everything we do is run the fuzz targets in the CI with the test cases that we downloaded. So let's go here.
A
Okay, it just downloads some of the stuff, but the important thing is that this is output from libFuzzer, if someone is familiar with it. So this says it's running 34 test cases — usually it's very quick — and okay, it's a success: the program doesn't crash, the pipeline succeeds, and we can merge the change. If we look at how it looks in —
B
So we do have a report on exactly that question in terms of which languages we should start with, though. I think Go is actually gonna be our best fit, because we can dogfood it ourselves more quickly. As far as I know, GitLab itself does not use C/C++ in many places, if any, right? So I think Go would actually be the best technology stack for us to start with. Yeah, I'll see if I can find a link to the dashboard about the programming languages and I'll drop it in the document. Yep.
A
So
it's
not
to
find
memory,
corruption,
vulnerabilities,
but
to
find
essentially
crashes
and
bugs,
and
so
to
improve
stability
in
some
programs
and
maybe
even
to
find,
like
other,
like
logic
bugs
or
crashes,
but
so
forgo
projects.
I
think
it
might
be
actually
good
a
good
fit
because
Pro,
though,
is
more
popular
than
I
would
say.
C
C++,
at
least
these
days.
A
And
we
also
had
more
go
use:
go
users
on
posit
than
people
that
use
C
C++,
even
though
it's
a
more
more
needed
they're,
just
more
go
projects,
yep
and
I.
Think
they're!
So
for
is
it's
not
a
great
fit
for
all
the
go
projects
because,
but
it's
actually
a
must
for
some
go
project,
so
it
really
depends.
What
is
your
projects
is
doing
so
good?
A
good
example
is,
for
example,
Eddie.
A
There
is
also
talk
about
this,
and
maybe
I
also
talked
about
this
in
the
and
the
talk
that
I
gave
that
there
is
a
cobbler,
so
their
DNS
parser
is
written
and
go
think
it's
some
kind
of
work
of
core
DNS
or
something
like
that.
So
it's
written
and
go
and
yeah.
If
you
have
a
crash
there,
even
if
it's
not
security
vulnerabilities,
then
you
will
have
like
a
denial
of
service,
which
is
a
security
vulnerability
and
your
whole
service,
which
is
essentially
what
your
whole
company
is
doing,
will
be
down
right.
A
So
you
want
your
like
DNS
parser,
which
is
external
to
the
world.
You
want
it
to
be
more
thoroughly
tested
with
us
with
fuzzing,
so
they
use
no
cousin
core
DNS,
also
they
use
fuzz
it,
and
so
that
this
is
like
a
good
example
for
a
project.
That's
it's
important
to
use
fastest,
and
maybe,
if
we
like
a
bad
example
or
I,
don't
know
some
kind
of
maybe
like
utility,
which
is
not
necessary,
like
go
utility,
which
is
not
necessarily
getting
some
input
from
the
outside
world
and
doesn't
have
any
like
security
implications.
A
Maybe that's too much; and also, if it doesn't really get any raw data — so not every Go project will be a good fit. I think it's important to understand that we need to find the right projects. Actually, the GitLab Runner might be a good fit; I need to look into this a bit more.
A
If
we,
if
there
is
some
places
where
it's
yeah,
getting
some
or
parsing
some
output
from
from
an
untrusted
user
and
the
go
runner
is
running
on
the
trusted
environment
I'm
not
sure
exactly
yet
how
dare
that
should
works.
But
if
something
like
that
happens,
it
might
be
good
a
good
fit
for
dog
14
for
the.
C
You can have the Runner, right, in shared environments in a corporation. So if you're a big corporation, you may have a pool of runners, and then each individual project is going to use those runners. So if there's a way for, frankly, a malicious developer to send in bad data to that runner, then that runner can, you know, get access to other data.
B
I
think
the
runner
was
one
of
the
the
places
we
had
talked
about,
potentially
being
a
great
place
to
start
in
terms
of
dogfooding
and
doing
fuzzing
Atget
lab
I
guess.
One
thing
I
also
want
to
add
on
to
the
discussion
there,
because
it's
great
to
point
out,
you
know
that
C
and
C++
type
languages
find
bugs
and
terms
of
implementation
like
unsafe
pointers
that
that
type
of
thing,
I
think.
A
So what I'm doing here is downloading the go-fuzz libraries — it's called go-fuzz, and go-fuzz-build — though we can also bake them into the Docker image if we want to save this step. And then we call go-fuzz-build, which essentially builds the target instrumented with the coverage library, because it's a coverage-guided fuzzer. So we have to instrument it with this additional coverage instrumentation, and then we need to compile it with the sanitizer; yeah, we need to compile it.
A
So
essentially,
this
is
how
the
box
of
yes,
r84
Segawa,
has
two
two
engines:
lick
buzzer
engine
and
like
gafas,
vanilla
engine,
so
here
I'm
using
leaf
father
engine,
so
it
involves
two
steps
which
involves
dofus,
build
and
then
si
Lang
compilation
sit
like
leaf.
Father
is
part
part
part
of
si
Lang,
okay
compiler.
So
you
have
to
create
it
and
essentially
parse
complex.
This
is
like
the
only
binary
that
you
need
for
fuzzing,
so
this
is
the
output
and
you
just
need
to
to
run
it
and
sure.
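The two build steps being described — go-fuzz-build producing an instrumented archive, then clang linking in libFuzzer — can be sketched roughly as follows. The job name, image tag, and output names are illustrative; the `-libfuzzer` flag and the `-fsanitize=fuzzer` link step are the documented go-fuzz/clang usage.

```yaml
# Hypothetical CI sketch of the libFuzzer-engine build described above.
fuzz-build:
  image: golang:1.13
  script:
    - go get -u github.com/dvyukov/go-fuzz/go-fuzz-build
    - go-fuzz-build -libfuzzer -o parse_complex.a .   # coverage-instrumented archive
    - clang -fsanitize=fuzzer parse_complex.a -o parse_complex
  artifacts:
    paths:
      - parse_complex   # the single fuzzing binary mentioned above
```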
A
Yeah, it's called parse_complex; it gets an array of data, yep. And there is a bug here: essentially, if it gets the word "fuzzy", then it will access the fifth element and it will crash, okay. So it's not a memory issue — the Go runtime will catch it — but your program will crash, right? Right, yeah.
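A minimal Go sketch of the planted bug being described — the function and package names are illustrative, and the `Fuzz` signature follows the go-fuzz convention:

```go
package main

import "fmt"

// parseComplex mirrors the demo bug described above: if the input starts
// with "fuzzy", it indexes element 5, which panics when the input is
// exactly the five bytes "fuzzy" (a Go runtime error, not memory corruption).
func parseComplex(data []byte) {
	if len(data) >= 5 && string(data[:5]) == "fuzzy" {
		_ = data[5] // index out of range for the exact input "fuzzy"
	}
}

// Fuzz is the go-fuzz style entry point: take raw bytes, return an int
// hint (0 = ordinary input).
func Fuzz(data []byte) int {
	parseComplex(data)
	return 0
}

func main() {
	fmt.Println(Fuzz([]byte("hello"))) // safe input, no panic
}
```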
A
It's fuzzitdev/parse_complex: fuzzitdev is like the organization in Fuzzit, and parse_complex is the target. So now our CLI will know to download the right corpus, okay, because every target has its own corpus. We know that parse_complex is under our account, so we know to download this corpus, and then we run parse_complex with the corpus — and essentially it would be the same in GitLab.
C
So what I'm thinking about when I look at this file is that there's a short-term and a long-term kind of solution. In the short term you've got this shell script that a user adds to their repo, and I think we could do that in the short term. In the long term — if I look at the shell script, there's only one value here, which is the target, right, which is parse_complex, yeah. So in the long term we could actually take this shell script and it could be part of GitLab.
A
I think — yeah, I agree. There is room for more optimization, even though, of course, this script is pretty short, yeah. And some of those lines are already in the .gitlab-ci.yml for the project, because they build like this — the git clone and the go build, those lines already exist in the GitLab CI, yeah. No — but yes, it's possible, yeah; of course, at first this is probably not necessary, right.
A
Cool. So for the first version — there is a test stage; we can — I played with it — add another stage here; we can have, like, a fuzz stage, so I just need to change it to fuzz. But essentially it can also be in the test stage; it's really up to us, or to the user, how they — yeah, I think we want the fuzzing to run after the unit tests. So let's say so.
A
We
can
have
another
another
stage
and
so
I'm
downloading
the
golf
has
engulfed
us
build
calling
office
bill,
and
here
it
looks
a
little
bit
even
easier
but
different
from
the
last
example,
because
it's
here
it's
using
the
vanilla,
vanilla
engine,
so
Gophers
build
and
then
I'm
just
running
Gophers,
but
the
last
step
here,
I'll
probably
will
use
the
facets.
Eli,
which
will
call
can
we
can
find
a
new
name
for
it.
Gl,
fast,
CLI
or
I.
A
— don't know; think of a good name for our fuzzing CLI. So to gl-fuzz we will pass, essentially, the binary that was output from go-fuzz-build, and the gl-fuzz CLI will download the corpus using the API. But this will be internal — it will do this behind the scenes: it will download the corpus and run the target either in fuzzing or in regression mode, depending — so, if it's a pull request, it can automatically detect that.
A
If it's a pull request, it will run in regression mode, and if it's a merge to master, then we can automatically run it in fuzzing mode. So let's say, for version one, we can just run it for a minute or two, or give the user the option to configure it; and then, in the long term, we'll have to configure it as async or long-running jobs, or something like that.
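The automatic mode detection being described could look roughly like this inside the hypothetical gl-fuzz CLI. The environment-variable values (`merge_request_event`, and so on) are real GitLab CI predefined-variable values; the function itself is an assumption about how the CLI might be structured.

```go
package main

import "fmt"

// pickMode sketches the decision described above: merge-request pipelines
// get a quick regression replay, pushes to the default branch get an
// open-ended fuzzing run, and other branch pushes default to regression.
func pickMode(pipelineSource, refName, defaultBranch string) string {
	if pipelineSource == "merge_request_event" {
		return "regression"
	}
	if refName == defaultBranch {
		return "fuzzing"
	}
	return "regression"
}

func main() {
	// In CI these values would come from CI_PIPELINE_SOURCE,
	// CI_COMMIT_REF_NAME, and CI_DEFAULT_BRANCH.
	fmt.Println(pickMode("merge_request_event", "feature-x", "master"))
	fmt.Println(pickMode("push", "master", "master"))
}
```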
D
I've got a couple of questions from that. Let's see — so let's say we're in fuzzing mode and we're generating a corpus, right. However we figure out the corpus storage — which I think will totally work using the artifacts; it's a little more complicated, but I think it'll work — so we've got a normal corpus for fuzzing. The regression data —
D
That's basically, if I understand it right, a corpus of tests that have been known to, call it, find bugs in the past; you just run each of them once to make sure there are no crashes, right? It's —
A
Either crashes that were fixed, or just test cases that were generated and never crashed the program, but they were — essentially, libFuzzer generates test cases. So say it generates a corpus of 100 test cases; each test case, I guess, gets the program to new code paths, and this is so —
A
— track them, right. I mean, we can — I don't know what would be the right solution. In Fuzzit we store the crashes also in storage, just under different paths: like, there is another artifact — there is corpus.zip, the artifact, and there is a crash artifact, just "crash", okay, for a job. So.
D
What
happens
if,
okay
so
say,
I've
got
to
two
separate
branches,
floor
requests
or
merge
requests
going
on
it
for
both
of
them
yeah,
and
they
each
generate
two
different
sets
of
crash
data.
D
I thought it was just like with the SAST testing, or, you know, our other analyzers: it runs on all branches, or all merge requests. I was picturing the fuzzing the same way, and if we already discussed that at some point, I may have missed it, or it may not have been apparent to me. Wait, yeah — so that's where I had a little mix-up. So.
C
The
I
think
question
around
that,
in
terms
of
from
a
user
perspective
is
and
I
think
in
the
chat
room
we
were
talking
about
this.
Is
you
know,
a
lot
of
the
issues
are
a
lot
of
fuzzing
problems
or
vulnerabilities
are
caused
by
new
code.
So
in
this
workflow
what
happens?
Is
you
have
new
code?
It's
on
your
it's
on
your
new
branch.
We're
gonna
run
a
regression.
We're
not
gonna
find
any
regression
problems,
because
that's
old
code
that
we're
we're
testing
we've
got
new
code.
A
Yes, that's the problem, yes. So there is obviously always a trade-off, right, of how much resources you want to consume — also in terms of CPU resources, not only in terms of how difficult it is to develop, like all the permutations and stuff like that. But the problem with fuzzing the pull request is that there is also the question: okay, how long do you want to fuzz pull requests? I mean, the user can configure it.
A
— point, but from what we saw — this is, I think, like OSS-Fuzz, in terms of the fuzzing workflow — they're also fuzzing only one branch, and for most projects I think it's really good enough. Because we bring the clients that need fuzzing — their project is security-sensitive C/C++, or security-sensitive Go — and we give them the ability, where before, fuzzing was not available at all, because it was not possible to —
A
Currency
is
we
have
fuzzing
on
like
one
brand
master
branch
ongoing.
We
have
on
pull
request.
It's
really.
It's
usually
like
good
enough
and
me
like
we
might
encounter.
That's
that's
my
guess.
I
don't
know
it
might
encounter.
You
know
crazy.
I,
don't
know
like
like
fuzzing
like
project
that
is
really
into
like
fuzzing,
all
the
possible
branches
for
a
long
time,
spinning
like
having
multiple
versions
that
are
important
to
be
fast,
but
I.
Think
it's
very.
Like
very
specific.
You
use
case
that
you.
B
I think the way that we should think about approaching this is: inside of each pipeline for the merge request, we should do as much fuzzing as we can, and let users dial it back as desired. Regression testing — regression fuzzing — being quick is great; I think that we should do that. But, you know, we've got up to an hour by default per pipeline; we should be using that hour unless customers tell us not to — otherwise it's not going to be a great user experience.
D
Their goal is only to fuzz a master branch — or, I'm not gonna say their goal is to only fuzz a master branch; they're coming at it from a different direction, and we're situated in a better place to help developers with their new code, better than OSS-Fuzz is. So I like the comparisons with OSS-Fuzz, I suppose, but their decisions don't have the meaning for us that they do for them.
A
Also, it gives the developer the option to rerun a specific job, or reproduce it locally. So you can run it from your CLI, from your computer, and it will download the corpus for the specific target. You will have to pass your project ID manually, because it's not running in the CI, but it's the same CLI that runs in the CI — now you can run it locally as well, and just pass it the right environment variables, like your account and token, and yeah.
A
Talk
and
like
project
ID
and
target
name
or
job
name
whatever
it
is,
and
then
it
will
download
the
corpus
and
it
will
run
the
like
rerun
the
father
to,
and
you
will
see
locally
the
the
crash.
So
you
can
just
rerun
it
locally
and
see.
Maybe
it
will
be
easier
to
debug.
It
also
knows
to
like
it
has
two
modes:
it
can,
because
a
lot
of
developer,
running
Mac,
some
of
the
buzzing
engines
works
only
on.
A
So it also knows to spin up Docker locally and run there; it takes care of downloading the corpus — essentially reproducing locally, on the developer's machine, the same thing that you have in the CI. So just like you run unit tests, the developer might want to rerun it locally before pushing.
D
Right, awesome, cool. Yeah, I was curious — thank you.
A
Right. So let's see — gl-fuzz. So yeah, I will rewrite the gl-fuzz CLI to work with GitLab's infrastructure, do a short demo with it, and add the artifacts; and then we will be able to run the pipeline both in fuzzing mode and regression mode and see that it works for version 1 — of course, without — so, and then, for version 1.1, we'll be adding the merge request that is currently in review.
A
Maybe — okay, there is one more thing: long-running jobs, and what that looks like. So here, what Fuzzit takes care of is the asynchronous jobs. We run them on Kubernetes, and we obviously want to reuse the GitLab Runner there, because it already takes care of a lot of things — the integration with GitLab, like logs. So we just want to add the long-running feature to the GitLab Runner, and then in Fuzzit it looks like this.
A
So
we
are,
we
have
the
like
the
logs
and
then
okay,
for
example,
it
found
a
hip
overflow
crash,
so
this
is
like
roll
roll
logs
from
from
Lib
father,
and
so
we
are
doing
additional
analysis
like
with
stack
parsing.
So
we
and
we
are
saving
like
we're
saving
the
crash.
We
are
parsing
the
stack
trace
and
so.
A
Exactly
the
fuzzy,
so
the
CLI
knows
how
to
analyze
different
exit
codes
and
formats
of
different
fathers.
So
we
know
that
like
leap,
father
returns
and
then
one
for
crash
two
for
timeout
and
zero
for
success,
so
he
knows
to
take
okay.
We
have
a
crash.
I
know
where
the
crash
part
file
is
located
and
now
I'm
putting
it
in
fuzzy
for
much
saving
it
and
fuzz.
A
It
storage
analyzing
the
the
logs
parsing
to
parse
the
stack
trace,
because
I
know
there
is
a
stack
trace
and
now
you
know,
yeah
saving
all
this
mated
major
data
and
yeah,
so
that
that's
essentially
what
we
also.
What
we
will
do
just
like
there
are
like
the
vulnerability
list,
will
have
kind
of
the
same
list
of
crashes
that
we
had
for
for
each
foster.
Yet
now
we
can
we'll
be
able
to
like
resolve
it.
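The exit-code dispatch being described could be sketched like this. The mapping (0 = success, 1 = crash, 2 = timeout) is the one stated here in the meeting; real engines usually make these configurable (libFuzzer has flags such as `-error_exitcode`), so treat the concrete values as assumptions.

```go
package main

import "fmt"

// classifyExit maps a fuzzing engine's exit code to a verdict, following
// the mapping described above (assumed values, engine-dependent in practice).
func classifyExit(code int) string {
	switch code {
	case 0:
		return "success"
	case 1:
		return "crash" // a crash file and stack trace should exist
	case 2:
		return "timeout"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(classifyExit(0))
	fmt.Println(classifyExit(1))
}
```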
A
It will do, like, gl-fuzz report, generating JSON, and there will be all the data: the stack trace, crash or not — yeah, crash, timeout — and then we will just put it on the Ruby side. Just like in Fuzzit: all this JSON data we'll put in the database, and then we'll be able to use it in the frontend. So yeah.
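A report document along the lines being described might look like the following. This structure is entirely hypothetical — field names are illustrative, not the actual Fuzzit/Firestore schema:

```json
{
  "target": "parse_complex",
  "job_id": 12345,
  "verdict": "crash",
  "exit_code": 1,
  "crash_artifact": "crash-0a1b2c",
  "stack_trace": [
    "main.parseComplex()",
    "main.Fuzz()"
  ]
}
```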
D
The CLI would generate the JSON artifacts. Do we have — that seems to be — I think that would pretty definitely be part of the MVC, right. Do we have an issue where we could talk about the structure of that anywhere? I —
B
Have
the
issue
for
discussing
presenting
the
results
so
with
that
download
report
option
I,
don't
think
it
goes
into
the
actual
structure
of
what's
in
there
beyond,
let
them
download
it
I
think
that's,
probably
the
the
place
to
start
that
discussion
and
it
probably
will
break
into
some
some
issues.
Okay,.
C
And we probably have a good starting point: you've got all of that — right now it goes into Firestore, so it's a JSON document. So I would think we'd probably take that, you know, wholesale, and use it; and then, if there are any tweaks, James, that you see on that, that would be a good way of going about it.
A
Yeah, so that's how it looks. For each project we will have all the fuzz targets that it ran; so fuzz targets will be kind of like jobs in GitLab, I think, something like that — each job is a fuzz target that can run concurrently — and it gives a very good distinction, I think, in the UI. And we can show results for each — like, a history of results for each target, or job, essentially. Yeah.
C
As you say — and maybe this is one of the ways we could do that — we set up fuzzing as a job or whatever, and then, when you set up, let's say, a CSV or GIF one or whatever, it would just inherit or grab all the values from the fuzzing job. And that way you've got a job called csv or gif or openssl or whatever, but it's using all the template values that GitLab pre-packages for you. Yeah.
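The inheritance idea being described can be sketched with GitLab CI's `extends` keyword. The template and job names here are illustrative; `extends` itself is the real mechanism:

```yaml
# Hypothetical sketch: per-target jobs inherit shared boilerplate from a
# hidden fuzzing template.
.fuzz-template:
  stage: fuzz
  script:
    - ./gl-fuzz run --target "$FUZZ_TARGET"   # gl-fuzz is the hypothetical CLI

fuzz_csv:
  extends: .fuzz-template
  variables:
    FUZZ_TARGET: csv

fuzz_gif:
  extends: .fuzz-template
  variables:
    FUZZ_TARGET: gif
```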
A
So — I'm not sure I understood, but what I thought of is that there is, like, a master template, or whatever, or a stage where we do fuzzing, and then we have a job for each target — so there is, like, fuzz-heartbleed for the HTTP response. Essentially, each target is usually a function or a test, so we'll have, for each fuzz test —
A
We
will
have
a
separate
job
which
we,
like
the
user,
can
call
just
with
the
name
of
the
of
his
like
unit
test
function,
which
is
if
he
is
fuzzy
function
one.
So
it's
like
fuzz
func,
one
fuzz
funk.
It
will
be
like
jobs
in
the
same
age.
So
I'm
not
sure.
If
I
answered
your
questions
that
Sam
you
can
ask
the
gun,
yeah.
C
This one — yeah, sorry, go back to the Go code, yeah. So my understanding is, for example, if you have a Go project — a really simple Go project — and you've got three functions, right, those are gonna be three different targets, yeah. So it's not one fuzzing job, it's not one fuzzing thing; you're actually sending them three different calls.
C
So
in
this
case
you
know
you
would
have
parts
complex
are
simple:
parse
yeah
complex,
so
you
have
three
different
targets
and
when
we
convert
that
to
get
lab
speech,
that
would
be
three
different
jobs,
yeah
one
job
one
target.
So
we
don't
want
that
job
to
be
called
fuzzing,
because
it's
not
really
fuzzing.
It
would
be
anything
fuzz,
okay,
harsh
complex,
fuzz,
simple
fuzz,
super
complex.
They
would
all
use
or
inherit
from
our
fuzzing
template,
which
would
give
you
certain.
You
know
boilerplate.
If
you
look.
B
Imagine
if
we're
fuzzing
like
gitlab,
for
example,
we
have
two
hundred
targets
detected,
I,
don't
know
if
Gil
lab.
Today
we
have
some
mechanism.
I
know
we
have
child-parent
pipelines,
but
we're
gonna
need
to
figure
out
how
we
present
this
users
without
overwhelming
them
in
terms
of
the
number
of
jobs
that
get
created,
because
I
can
see
it
becoming
pretty
confusing.
If
you
know
your
pipeline
has
four
jobs
in
it
that
you
defined
in
all
of
a
sudden,
it
has
hundreds
or
dozens
in
the
actual
pipeline
result
for
you,
some.
D
However they set it up, it's up to them; and then they could specify, in a variable, like, a directory full of target binaries, and then we could dynamically create a child pipeline that spits out a new job for every binary in the pipeline, and then, you know, name the job after the binary name, so it's easy to figure out which one is which. And then we would have to have a separate job that waits for the child pipeline to finish and merges all the results together. I think we could totally do it.
D
It
is
but
yeah
that
that's
my
first
thought
I'm
making
it
so
the
user
doesn't
have
to
explicitly
define
100,
separate
fuzzing
jobs.
A
Yeah
yeah,
actually
actually
also
generating
some
of
the
jobs
can
be,
can
be
good,
optimization,
I.
Think
a
good
note
here
is
that,
unlike
unit
tests
that
you
have
like
usually
unit
has
probably
everywhere.
This
is
how
you
do
test
works.
You
have.
The
thousands
of
unit
has
with
two
lines
of
code,
and
every
time
you
add
new
code,
you
had
copy
pasting
old
unit
tests
and
adding
another
test
for
like
for
your
code
or,
like
you,
have
a
lot
a
lot
of
unit
tests,
but
I
think
in
fuzzing.
A
— like, I think systemd has 30 or 40 fuzz targets, but I think this is the biggest project, with the most fuzz targets, that I saw; most of them have, let's say, up to ten, okay, and maybe even less, because essentially each fuzz target generates a lot of test cases for you — every fuzz target is like a lot of unit tests.
A
Okay,
we
can
think
of
it
like
this,
and
this
is
why
we
have
much
less
like
fast
tests
in
in
a
project,
so
we
so
I
think
from
user
perspective.
It
will
be
good
to
kind
of
like
have
some
some
way
of
showing,
maybe
or
like
minimizing
the
number
of
jobs
or
like
expanding,
but
but
we
will
will
even
in
the
beginning,
probably
it's
one
ad,
like
hundreds
of
for
targets
for
for
for
a
project.
D
Sam, related to this: when we've talked about automatically extracting the fuzz targets from the unit tests, we would have to deal with having too many jobs. Okay, say we extract a hundred different jobs — and this is definitely not MVC, right — that's a hundred hours' worth of CI if we have each of them run for an hour. So I think we would have to have a different approach for fuzzing individual functions extracted from unit tests anyway, yeah. But I agree with you, Jenny.
B
Yeah, now that's understood. I think the point of concern arises, from our perspective, since we know what fuzzing is — we're thinking in terms of targets. In terms of where our customers and our users are going to be coming from: they're gonna be thinking in terms of their app and their codebase as a whole. They're only going to be thinking in terms of "I want to do fuzzing", full stop; they're not going to think about "I need to make one fuzzing job to test this aspect of my app, one fuzzing job to test this aspect of it".
B
If we can shift our thinking, when we're coming up with solutions, to look from that angle, I think that will get us a better result. A pitfall I can see us falling into, if we think about it from the more fuzzing-expertise angle, is: it'll make sense if you already know what fuzzing is, in terms of having many different targets and jobs created, but our end users that don't have that context might get lost in following along with us. But the point is well taken.
D
I've had a lot of fun with this lately — maybe a little too much fun; I've wanted this feature for years, so I'm, like, super excited about it. So you can have one job that dynamically creates YAML, and then — so it has to be in a separate stage, or, not a separate stage — that job needs to come before the next job, which is a trigger job that points to the previous job's build artifact.
D
So
it
says,
take
this
file
from
the
previous
jobs
that
was
generated
and
run
it
as
a
child
pipeline,
and
then
you
can
tell
it
to
depend
on
that
child
pipeline,
so
it
blocks,
and
then
you
could
have
another
job
after
that.
That
then
goes
through
and
collects
all
the
build
artifacts
from
the
child
pipeline.
D
We would have a problem with, you know — say, worst-case scenario, thousands of targets: that's instantly thousands of jobs and hours' worth of compute time, right. But, like, you know, that's the worst case — somebody wanting to mess us up or something. It's just something to be aware of when we're dynamically creating jobs. Cool.
B
So maybe that speaks to — we need to put in, not a fail-safe, but an upper bound, and say: my appetite for this fuzz job is 100 CI minutes per run, and then restrict ourselves within that. Because, yeah, James, to your point: it wouldn't mess us up if they spawned off hundreds of jobs, but their bill would go through the roof, probably.
C
So I know we're coming up on time here — we've got about 10 minutes. This is, I think, exactly what I'm looking for: this kind of discussion, figuring these things out. Are there any things that anyone else wants to cover today? I feel like in our next session next week we can continue where we leave off today. If —
D
Possible
I
would,
let's
see
I
know
if
Jeanie
Jeanie
is
working
on
a
proof
of
concept,
for
the
fuzzy
and
using
the
build
artifacts
is
having
fuzzy
and
it
sounds
like
having
fuzzy
and
every
merge
request
is
something
we
want.
I'd
like
to
help
get
that
into
the
proof
of
concept.
I
think
there's
a
very
easy
way
to
have
a
global
corpus
store
tied
to
the
master
branch
using
just
build
artifacts,
so
that
we
can
get
into
the
proof
of
concept.
Is
that
we
agree?
That's
something
that
we
should.
D
That sounds great. Jeanie, have you created a new repository where we could create a proof of concept and hash out that GitLab CI YAML? Yeah, I can help make merge requests; we could talk about it and work things through; I can comment on your stuff. Awesome.
A
To
those
I
will
add,
like
an
example
repository
so
I
say
we
we
said
gold,
oh
and
I'll.
Also
add
the
repository
of
the
GL
files
and
I
will
do
essentially
two
branches
there.
There
is
the
master
branch
which
is
currently
working
with
posit
and
I,
opened
the
new
branch
which
will
be
gitlab
branch,
and
then
we
will
just
publish
it.
There
will
be
just
them
like
we'll
push
the
gate,
lab
branch
to
the
master,
and
it
should
be
only
only
this
branch,
so
I
will
push
those
two
repositories
a
hopefully
built
by
tomorrow.