From YouTube: Continuous fuzz testing sync - 2020-09-16
A
So, just a quick check on what we want to accomplish today. Is it really just going over the continuous fuzzing and the things that you guys found so far? Were there other goals that you had, Sam?
B
A
Okay, sounds good. So you want to share some of the things that you found here?
C
Sure, I'll share my screen in a second, so yeah. So what we were trying to do is to have a way of running the fuzz targets asynchronously instead of running them inline, as we are running them now. Essentially, you know, we can run them for 10 minutes or an hour, but it will block the pipeline.
C
So we wanted to have an option for the user to run the fuzz target for a longer amount of time, or for the same time, but without blocking the pipeline. And then also, you know, when the user pushes new code, we want to have an option to stop the old job and start a new one, so essentially really running async jobs. So I talked today to Fabio and we actually came up with, I think, quite a good solution, and we can discuss it now.
C
So I did a very small PoC, and there are some more open questions that we can discuss now, but the very simple part is this: we have the main .gitlab-ci.yml, and what we can do for the parent-child setup is use the trigger keyword and then include our original steps. So I'll go to that step.
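A minimal sketch of the parent-pipeline layout being described, assuming a child file named .gitlab-ci-fuzz.yml and placeholder job and stage names rather than the actual PoC:

```yaml
# Parent .gitlab-ci.yml (sketch): the fuzz jobs live in a child pipeline,
# so the rest of the pipeline is not blocked while they run.
stages:
  - build
  - fuzz

build:
  stage: build
  script:
    - make build   # placeholder build step

async_fuzz:
  stage: fuzz
  trigger:
    include: .gitlab-ci-fuzz.yml   # child pipeline with the original fuzz steps
  # no `strategy: depend`, so the parent pipeline completes without waiting
  # for the child fuzzing pipeline to finish
```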
C
So this step is the same step that we had in the original example. We include the template, we extend .fuzz_base, and we run our fuzzer with gitlab-cov-fuzz; we compile it and run it. You can ignore this one, it's just some of my experiments, but it actually has no effect.
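A sketch of what that child-pipeline step could look like, following GitLab's coverage-fuzzing template as described; the target name and build command are placeholders, and the exact gitlab-cov-fuzz invocation should be checked against the template:

```yaml
# Child pipeline (sketch): include the coverage-fuzzing template, extend the
# hidden .fuzz_base job, then compile the fuzz target and hand it to the wrapper.
include:
  - template: Coverage-Fuzzing.gitlab-ci.yml

my_fuzz_target:
  extends: .fuzz_base
  script:
    - clang -fsanitize=fuzzer my_fuzz_target.c -o my_fuzz_target   # placeholder build
    - ./gitlab-cov-fuzz run -- ./my_fuzz_target
```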
C
Keep going, yeah. I think... I think it's not... yeah, I think it's something that... but okay, so now let's go to the CI.
C
So now we have, okay, our main pipeline, which is running, and we have the fuzzing step. It's actually kind of a dummy step which will be completed, and then we have the child pipeline. But the main pipeline won't be blocked by the fuzzing step, or, you know, the fuzzing steps, depending on how many targets we have. So here we actually have kind of the same behavior, and yeah.
B
C
So really, we kind of have the async running jobs out of the box. The questions that are still open and that we need to address: when someone pushes new code, we need to add the behavior of stopping the old child, like the last child pipeline, the last job, and starting a new one, because that doesn't happen out of the box. But actually I think it will be pretty easy using the jobs API or the pipelines API, because we can then identify the last running jobs.
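One possible shape for that missing piece, sketched with the pipelines API mentioned here; the job name, the STOP_TOKEN variable, and whether child pipelines also have to be cancelled separately are assumptions to verify:

```yaml
# Sketch: when a new pipeline starts, cancel any still-running pipelines for
# the same ref so the previous fuzzing run is stopped.
stop_previous_fuzz:
  stage: .pre
  script:
    - 'RUNNING=$(curl -s --header "PRIVATE-TOKEN: $STOP_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines?ref=$CI_COMMIT_REF_NAME&status=running" | jq -r ".[].id")'
    - 'for id in $RUNNING; do [ "$id" = "$CI_PIPELINE_ID" ] && continue; curl -s --request POST --header "PRIVATE-TOKEN: $STOP_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines/$id/cancel"; done'
```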
D
C
D
C
That sounds good, I'll try that. Yeah, I didn't use the resource group; I just tried to use the interruptible keyword. So yeah, this is one question. The second question that I had is this: the current way that it's working, the user currently has to define whether he wants to have async jobs or whether he wants to have them inline.
C
So it's not really controlled by kind of an environment variable or in a programmatic way. I don't know if it's good enough for now just to give the user this option to choose, but maybe in the future we can have a bit more complex logic to have this controlled, maybe by an environment variable or...
C
B
C
Yeah, but I think that maybe, let's go to this one, maybe it's possible here to use... I only set it up today, you know, after my talk with Fabio, but maybe it's possible to have this fuzz target and then to include my fuzz target and have some kind of rules, so it will be controlled, right? But I'm not sure what this include statement, you know, can take as input.
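A sketch of the rules-controlled idea being floated here; the COVFUZZ_ASYNC variable name is hypothetical, and whether the include/trigger combination behaves this way is exactly the open question:

```yaml
# Sketch: one variable chooses between the inline job and the async child pipeline.
inline_fuzz:
  extends: .fuzz_base
  rules:
    - if: '$COVFUZZ_ASYNC != "true"'
  script:
    - ./gitlab-cov-fuzz run -- ./my_fuzz_target

async_fuzz:
  rules:
    - if: '$COVFUZZ_ASYNC == "true"'
  trigger:
    include: .gitlab-ci-fuzz.yml
```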
B
Yeah, because then they could just copy that into one file and include it in both, yeah. I mean, as long as they could run it in both locations. I think that's really one of the key things we need here, because kind of how we're going to position this is, you know, run your quick fuzz with every single commit, and then don't block the developer. Let them continue on with their day, and then the continuous fuzzing runs.
B
You know, forever, essentially. Right, I mean, as long as they can do both, that seems like it would get what we would want.
A
A
Yeah, I think there are two questions. One is running both, but I think the issue that we may run into is what is going to show up on the dashboard or the merge request widget, because if you have jobs that are running on a continuous basis and then you have one that might be running in a pipeline, I'm worried that they're going to have issues in terms of which corpus they're going to pull, and then whether they just keep overriding each other in the dashboard.
A
Because, well, I think we talked about this on our last call, James. I think you looked at it; it was like the dashboard pulls in the last artifact from the last pipeline, so...
A
D
I didn't test the overall project one, which reports off of the default branch. I don't know if those are all merged in, or, let's see, if a vulnerability is reported on multiple pipelines on the default branch, I don't know if those are combined. I know for sure they're not on the merge request one.
A
Right, yeah. So on a merge request, like, for example, on this branch we have here called continuous fuzzing: if you had a job that ran synchronously and the async job, I would imagine that the merge request widget is just going to pull in the last pipeline that's completed, which kind of makes it pointless if you're trying to do both jobs.
C
B
C
So that's the same, yeah. So that was kind of the best practice that I pushed for fuzzing: to have the long one on the master or release branch and have a short one on a merge request, yeah. This was my idea as well.
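A sketch of that split using predefined CI variables; the time limits are illustrative, and how engine arguments are forwarded through gitlab-cov-fuzz is an assumption:

```yaml
# Short fuzz run on merge requests, long run on the default branch (sketch).
mr_fuzz:
  extends: .fuzz_base
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - ./gitlab-cov-fuzz run -- ./my_fuzz_target -max_total_time=600     # ~10 minutes

long_fuzz:
  extends: .fuzz_base
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
  script:
    - ./gitlab-cov-fuzz run -- ./my_fuzz_target -max_total_time=10800   # ~3 hours
```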
D
That makes sense to me. Do we know... I guess we still have to, or, let's see, I should say, for me, I don't know if the results are combined together or not, even on the default branch. I think they are, because they're pulled out into their own separate objects in the database and everything. So I would think it would work differently from the pipeline, which only operates off of the combined report JSON, basically.
B
C
Not yet, I think. I think this is something that we will have to...
C
I don't know if it's a separate issue or not, to have the GitLab backend Rails be able to pick up results not at the end of the job but at the beginning, or... or there might be a workaround where the job triggers itself, every... let's say you run it for three hours, so you can split it at various points in time, so you split it every hour and then you retrigger, or something like that. But I think it might be a bit confusing in the dashboard.
A
Yeah, so my concern is that the security dashboard is just going to pull the latest pipeline job.
A
D
And we're talking specifically about the default branch of the project, not the merge request one, right? Yes, okay. So however we're calculating whether a vulnerability is unique, that should come into play here. So if you have multiple of the same vulnerability, it should only show up as one vulnerability, even though it's recorded multiple times, and so everything that's in that location blob in the security report should prevent that; that way of detecting vulnerability collisions, basically, should keep them from having multiple entries.
A
And James, you're talking about that happening on the Rails side, right? Yes, yes.
A
Okay, yeah, because that's one of the things: you know, the only thing that would be passed from fuzzing job to fuzzing job would be the corpus, not necessarily the found vulnerabilities.
A
C
Usually, I mean, I think it's configurable, but the default is that after it finds a crash, it stops, yeah.
B
C
So for the default, we are kind of using the default of libFuzzer here, and of the other engines as well, but specifically in libFuzzer the default is to stop after it finds the first crash. Because what happens a lot of the time is, when you hit the crash, it kind of slows down the fuzzer unless you fix it, because it keeps hitting the same one, and most of the time it...
C
You know, it keeps hitting the same crash, and the other part of the time it's progressing, but usually it really slows down the fuzzing.
B
D
D
If it stops when it finds a crash, then you'll only ever have that one crash, and you're not going to... like, dealing with duplicates is part of fuzzing. I've never worked with it where it just stops, and I have found it really weird of libFuzzer to just stop, and you have to specifically tell it to run so many jobs or for a certain amount of time, but yeah.
D
If you don't add those extra arguments, it does just stop. And maybe it's just my workflow; I've never used it that way. It's always been just, like, fuzz, find all the bugs, and I'll deal with duplicates later. I don't want to have to worry about it stopping on me, yeah.
C
C
So AFL has the workflow that you just described, which is going on forever and keeping on finding, looking at duplicates, and, you know, telling you, yeah, I found 1000 crashes of type A and three crashes of type B. That's partially because AFL is more for researchers, I think; with that workflow you usually can't fix the crash, right, so it has to go on. And libFuzzer is more for, like, a developer workflow, so the default there is to fix it, because you're the developer, so you can, you know, fix it and then it can go further.
C
So I think this was the design difference between the libFuzzer workflow and the AFL workflow.
D
Okay, so already, if you run the same pipeline two times, you'll have different crashes; it's non-deterministic, right? And so, if it was deterministic, then stopping at the first crash would make a little more sense to me.
B
D
C
Yeah, I mean, I think everything is configurable, at least in libFuzzer. So yes, it's possible to override the default parameters.
C
B
So let me ask a question: the way that libFuzzer would either do or not do this functionality, and same with the other fuzzers, are we talking essentially about a command line flag? Yeah? And if so, could we just expose this as a config variable in the pipeline that users could change if they want it?
C
And also, right now the user can just pass any additional arguments to the underlying engines. So...
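If that pass-through ends up being the mechanism, it could look something like the snippet below; COVFUZZ_ADDITIONAL_ARGS is my recollection of the template's variable for extra engine arguments and should be treated as an assumption:

```yaml
# Sketch: pass extra engine arguments via a CI variable instead of editing the job script.
variables:
  COVFUZZ_ADDITIONAL_ARGS: '-max_total_time=3600'   # example libFuzzer flag
```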
D
To me, kind of a bigger question is: what is our default? Yes, we should definitely let the user override whatever our defaults are. I think our default should be that it just runs all the time, no matter what, for an amount of time that we say, because whether or not it runs and stops at the first crash, we're still going to have to deal with duplicate found vulnerabilities or crashes, whether they're from separate pipelines or all from the same pipeline.
D
B
Yeah, I think that's fair, and I mean, if we are only talking about a command line variable, I think it sounds like it's probably about the same amount of work to make either choice, right? And then, if we get feedback that one approach is not helpful and the other is better, we can change our default to the opposite, right?
B
So, I mean, given that, I'm starting to come around to James's point that we should just run continuously and find multiple vulnerabilities, especially since we're not blocking a pipeline. Maybe that is what we should do as the default, and gather feedback; if people don't like it, we can switch them. What do you all think?
B
C
Okay, can you repeat that again? I blacked out for a second. Which one did we...
B
Suggested, yeah. No, so I was just saying, I think James convinced me that we should default to just reporting as many as we find, because it sounds like the underlying work to do either one is the same, because it's just a variable. We can always change the default if we find one approach doesn't work, and since this job is run continuously, it's not like the user is going to be waiting for us to finish before they can continue their daily work.
C
It sounds good to me. I just need to check that; I will need to do a proof of concept to make sure it doesn't introduce other problems or other issues we have to take care of.
A
A
That job would probably, let's say, run for an hour, and it finds a crash in the first 10 minutes, and then it's going to keep going and it'll find another crash, let's say, later. And then at the end of an hour it would take those, whatever number of crashes, put that into a JSON document that would go onto the dashboard, and then it would kick off another job with the corpus that it just saved from the previous job, and then it would keep running. Is that a fair summary of how this would work?
C
Yeah, I think so, but actually I thought a little bit more about that, so it will actually require...
C
I will still need to do a proof of concept. Okay, in any case, I think I'll do a proof of concept, so I'll be able to outline what additional work it requires in gitlab-cov-fuzz to support it. So yes, it's an argument to libFuzzer, but we will need additional support in gitlab-cov-fuzz to output...
C
...what you just said. So we'll have, like, a buffer of those crashes, and we'll be able to output it, so I'll open an issue about what exactly...
A
And the thing with that buffer is, if that job is killed, right, maybe through the manual interface or whatever, it will need to signal to the program to grab those three crashes, or however many crashes it has, write those to a JSON document, and then output that. Because it's possible that, you know, 30 minutes into an hour-long job, that job gets killed, and we don't want to lose that work.
C
Yeah, yeah, definitely. And I thought about this, so I think, and we can double check it, but, for example, in OSS-Fuzz, I think they also use the default of stopping after the first crash.
C
But I think it really depends on the project. So I think a lot of the time, when the project is more mature and you're not running it for maybe the first time, you usually don't have, like, a lot of very different crashes. So this is why I think they chose it as well, because...
C
For example, syzkaller, which is a kernel fuzzer: you know, it continues and it just finds thousands of bugs, and, you know, opens an issue, but it's not really part of the development workflow. It's kind of, you know, a different...
C
D
I just realized maybe we're talking about different things. Are we saying libFuzzer runs and then it finds a crash, the process dies, the job stops, but then a new one is immediately spawned again? Or does the entire thing stop, where it's like, if I want to fuzz my project, I will only ever find that one bug unless I manually restart the whole thing?
C
C
C
But if you continue running it, it will still find the same crash over and over again. So this is why they usually want the developer to stop; you know, they stop it and they want the developer to fix it, and then, you know, they run it again. Also to save CPU time as well, right, because it's a free service on their end for some open source projects that they take on, so yeah.
C
I think it's a bit of a different workflow, yeah, unlike a researcher: when you research a program that you don't fix, you want to run it on a lot of CPUs and you just say, okay, just find me all the possible bugs, but...
D
But yeah, I understand that. It doesn't mean... or, let's see, I'm realizing that the big problem I have with it is that that logic doesn't work in the other analyzers. So we can't tell a SAST analyzer, just show me the first bug and I'll show you the next bug when you fix it; that just absolutely does not work.
C
A
And I guess, kind of the way I described it, where libFuzzer finds a bug and it keeps going, maybe there's a slight variation to that, which is libFuzzer finds a bug, stops, and then it restarts to clear the state, because there is the issue of, like, once you find a crash...
A
The state of the application might be such that you don't want to continue fuzzing on that corrupted state, and that you actually want to kill the job and restart it, which actually takes advantage of one of the things James was talking about, which is that it's not deterministic. So if you start again, you might end up finding a different bug, taking different paths.
C
So you can see that here, I think it's the latest libFuzzer, and this mode of continuing to run in libFuzzer is still experimental, which is ignore crashes, okay, and this is, yeah, ignore timeouts and ignore OOMs, so you have to run it in fork mode. Okay, another thing, the difference between AFL and libFuzzer: AFL is by default in fork mode, so every time it feeds data, or, you know, chunks of data, it spawns a new process.
C
So if that crashed, it doesn't kill the fuzzer, and libFuzzer is an in-process fuzzer, so whenever it crashes or it has an out-of-memory, the whole engine is killed and it can't restart itself. So libFuzzer also added this possibility.
D
D
C
I'm not sure, I have to check, but this is, like, -fork; I think with this you can do -fork=1, okay, because essentially we are running on one CPU. So, kind of, I think -fork is the new thing after -jobs. But so you have -fork=1, and now libFuzzer will essentially use a fork mode, kind of just like AFL runs, and then you will have to pass -ignore_crashes as well too, and then libFuzzer will continue.
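For reference, the libFuzzer flags being described, shown on a direct invocation of a placeholder target; wiring them through gitlab-cov-fuzz is the additional work discussed later:

```yaml
# Sketch: fork mode plus the ignore flags keep libFuzzer running past a crash.
fork_mode_fuzz:
  script:
    # -fork=1 runs inputs in a subprocess on a single CPU, so a crash or OOM
    # does not kill the engine; -ignore_crashes=1 records the crash and keeps going.
    - ./my_fuzz_target ./corpus -fork=1 -ignore_crashes=1 -ignore_timeouts=1 -ignore_ooms=1
```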
D
A
D
A
So I guess the other thing is, there are two places where you can run continuously. One is you can run continuously right here in libFuzzer, but the other option is that when a crash is found, it goes up from libFuzzer to gitlab-cov-fuzz, which reports it, and then gitlab-cov-fuzz restarts libFuzzer.
C
A
C
C
A
It would be a lot less expensive to do it here at libFuzzer, as opposed to restarting, you know, several processes, yeah.
D
B
D
I'd actually assumed that's the way we were already doing it, and that's on me, just assumptions I had made, because that's the way I'm used to doing it. Yeah.
C
Yeah, so we can do that, but I think it will involve a bit more work on gitlab-cov-fuzz, so I'll specify what we have to do in gitlab-cov-fuzz in the issue. Yeah, cool.
B
Another question about how we would kick all this off from CI: we're going to have customers that have their existing repo, and they want to start this process without doing a commit. Using that trigger approach, is it going to be possible for us to expose that to them somehow?
D
D
B
No, not necessarily. I think it's reasonable to assume that they already have a CI YAML file set up, and this is kind of where I was trying to get to with those Figma designs that are linked on the very last page of the document. But the way I can see our users getting started with continuous fuzzing is a project admin or an engineer going in and wanting to set it up. We should not require them to do a commit to kick all of this process off.
B
D
Man, this would be an amazing feature, and not fuzzing-specific. Just, like, here's my project, I've got some rough CI, and just, like...
D
Yeah, yeah, no, it'd be so cool, gosh. It ties into so many other things. I'm very excited about being able to do that type of thing and say, just kick off a pipeline, include this other YAML, like, merge it in to whatever the current CI YAML is, and just start a new pipeline.
D
It could be as simple as, like, on the new pipeline page, you could have variables, and then you could also have, I don't know, URLs you refer to, basically, for existing YAMLs, and it would just insert those as include statements in the YAML itself.
D
Maybe it really would be that simple. That would be so cool.
A
A
So I don't see why this wouldn't allow you to do exactly what we want, which is, if we have a variable that just says continuous fuzzing equals true or whatever, and you hit run pipeline. Now, what we could do is take this page and actually, instead of the user having to type in the variable name or whatever, it would already be pre-filled.
C
D
So, okay, that brings up three different things we've talked about. One is run a specific job within an existing defined CI YAML; and then what Sam was just talking about was basically run a pipeline on the current project, but include some other external YAML without modifying the project; and then there was a whole discussion we had about continuous fuzzing, right? Yeah, so just to be clear.
D
We brought up three, to me, totally separate concepts all at the same time. So, Sam, was I understanding that right? You were talking about not modifying an existing CI YAML and saying, I want to run fuzzing on this without making changes to the code, right, to the CI YAML?
B
So I think it's reasonable if we have some requirements that there are already things in the YAML that we can use. Like, I was kind of thinking that we could require a user to have defined a fuzz target or a fuzz job, and then they would point us at that job to run continuously, because there's no way we're going to know how to build those fuzz targets without them informing us somehow.
B
C
But I think he can just rerun the job, right? At least, yeah, like the last running job on a specific ref or on the master branch, like one of the child pipelines. This is just as a workaround, I guess.
A
A
C
A
C
Rerun, just by, you know, going to this child pipeline and re-running it. But yeah, it's not the same as kicking off a new one, but yeah. If you are not doing a new commit, then I guess you can go to any of the old commits and just rerun it, right? I'll go to... I'm not sure.
B
A
I just did this; that's why it's running right now. Okay, so, but yeah, if you go back to one of the previous pipelines, you can see what I did. So go to one of the ones that you ran before, yeah. So if you go to, yeah, go to 72, the one that's down one. So if you click pipeline...
A
B
B
And so part of this is, I'm not super familiar with the way trigger works in our CI/CD syntax today, but I just wanted to bring that use case up again to see if this would support that or...
C
Not... yeah, so I need to check the docs, and, I guess, ask Fabio, who's pretty knowledgeable about all this parent-child pipeline stuff and the options. Oh, that's...
A
Fine, so I guess he's...
D
D
A
But Sam, I guess, to kind of put a box around the problem we're trying to solve, which is: how do you run a fuzzing job if it hasn't been run previously, right, and not through a commit?
B
Yes, that's the big one. That's the big one, yeah. To speak to your box point a little bit more: I'm okay if we come back and, in GitLab, technically it's impossible for us to run individual jobs. I don't think we have an explicit yes or no that it's possible, though. If it isn't possible, for whatever reason, we can revisit this, but that's the box.
A
I think one of the challenges, and this is just my guess based on what I've seen, is that GitLab itself doesn't have the concept of a job until you run a pipeline. When you run a pipeline, it parses your YAML and uses your YAML, and so in the web interface...
A
It doesn't know that you've got 10 different jobs. It only knows that because it's parsed your YAML and it's run a pipeline, and then it sees the jobs that came out of that. So, for example, when you save your YAML file, it doesn't tell you that it's wrong. In fact, the first time it tells you your YAML is wrong is when you try to run a pipeline, because that's the first time it parses it. So if we want to run a particular job, that's not part of an existing pipeline, which is what would mean it's been parsed...
D
I think there might be a way to do it. So I was looking at the jobs API: if you have a manual job, so you can say that a job is manually run, then our YAML that a user would include would have two jobs, basically. One is the manual job, and the other one is the job that kicks off the manual job.
D
Since that code would only run if there was an existing pipeline, the pipeline would have been parsed and loaded into the project. Then you could use the jobs API to look up the ID of that manual job, and then our manual-job-runner job, this is where it would look up the ID and then specifically play only that manual job. It might work; I was starting to test it out right now. It should just be, like, two curl requests and some jq stuff, but yeah.
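A sketch of the two-job idea: a manual fuzz job plus a starter job that finds it through the jobs API and plays it. The job names and the FUZZ_TRIGGER_TOKEN variable are placeholders:

```yaml
async_fuzz:
  when: manual        # the long-running fuzz job; never starts on its own
  script:
    - ./gitlab-cov-fuzz run -- ./my_fuzz_target

start_async_fuzz:
  script:
    # first request: find the manual job's id in the current pipeline
    - 'JOB_ID=$(curl -s --header "PRIVATE-TOKEN: $FUZZ_TRIGGER_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines/$CI_PIPELINE_ID/jobs?scope[]=manual" | jq -r ".[] | select(.name == \"async_fuzz\") | .id")'
    # second request: play (start) that manual job
    - 'curl -s --request POST --header "PRIVATE-TOKEN: $FUZZ_TRIGGER_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/jobs/$JOB_ID/play"'
```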
D
B
A
B
B
B
Okay, well, anything else we want to go through today, then? I mean, I appreciate you walking us through all the stuff you found. It looks like we've made a lot of progress, answered a lot of the questions we talked about last week, and came up...
C
Yeah, definitely. I already started writing code in the backend Rails, and luckily Fabian, he was supposed to be OOO for two weeks, but he changed his plans, so he helped me avoid two weeks of unnecessary backend Rails code.
A
All right, sounds good. So you've got some things to work on. Do we want to touch base next week, or we can look at what's going on in the issue and then go from there?
C
We can decide in our one-on-one meeting, maybe. Okay, yeah.
A
I found this extremely helpful, I think.
B
Okay, well, we can break here, and I'll talk to you again in about 10 minutes. Yep.
A
And Sam, when this recording is done, can you potentially add a link in this document to wherever the recording's saved?
B
Oh okay, yep, yeah.