From YouTube: Discussion about asynchronous scanning jobs
A: All right, so we're recording now. For those on the recording: this is a kickoff call where we want to start talking about asynchronous workflows and asynchronous pipelines. Inside of GitLab, we have a number of categories coming up that are going to require a somewhat different paradigm in how our users interact with us, compared to our normal CI pipelines. Specifically, we're going to be focusing on DAST, PKI management, and fuzz testing. All of these categories require relatively long-running jobs, on the order of hours, potentially days or even weeks, and our current pipelines are not such a great fit for that, where we expect a developer to commit code and wait for the pipeline for results. So really, the goal of this meeting is to discuss that use case and dig into that problem.
B: Yes. Sam, the one thing that I would tweak a little bit is when we talk about long-running jobs. Certainly, I know DAST can be a very long-running job; we've had customers, and in fact even our own dogfooding, where it ran longer than a day to do a full DAST scan in GitLab. So fuzzing, too: there are going to be very long-running jobs. PKI, I think, would not necessarily be a long-running job, but I think the reason we may want to cluster it in here is that PKI is something you may want to run just outside of a pipeline. It's not necessarily related to a build, right? Like your SSL certificate: you're probably not doing a lot of work in your build regarding an SSL certificate, but you want to check your SSL, or some of your other PKI infrastructure, outside of a pipeline. That's independent of commits.
B: If you look at the non-technical version of that: you get a website, it's HTTP, or you get a website, it's HTTPS. That HTTPS requires a certificate that you have to renew; depending on the service, anywhere from every 30 or 90 days to a year. So those have to get renewed, and those can expire, and when they expire, that HTTPS doesn't work.
B
Also
when
you
set
those
up
there's
a
bunch
of
different
encryption
keys
and
protocols
that
you
have
to
set
up
and
as
time
goes
on,
what
happens
is
a
lot
of
those
protocols
kind
of
get
deprecated
and
that
we
move
to
more
sophisticated
protocols
and
encryption
keys.
So
the
PKI
infrastructure
would
be
monitoring
those
and
making
sure
that
your
deprecating,
the
old
encryption
keys,
so
that
you're
using
new
or
newer
ones,
so
that
your
website
continues
to
be
very
secure,
not
subject
to
old
encryption.
B: In the background, like you said, every 30 or 90 days. And long-term we can get really smart about it, right? If we know that the expiration is, I don't know, December 31st, we may be able to say: you know what, we're not going to run the scan until November, because we know it's not going to expire for a while. So the idea is to be really smart and efficient about it. Yeah.
B: I'd have to double-check, but I believe it's on the YAML side: you set the timeout for your runner in your YAML. You just set what the timeout is, and then you run your job, and as long as your timeout is greater than the length of the job, it will continue to run. One of the areas that I think is really unfortunate is if you set your timeout too short.
B
Let's
say
you
set
your
time
out
to
an
hour
and
your
job
is
still
running
after
an
hour.
It's
just
because
of
the
way
our
scanners
work
is.
They
have
to
finish,
generate
a
report
and
then
send
that
back
to
get
lab.
So
if
you
time
out
first,
all
of
that
work,
kind
of
gets
thrown
out,
because
the
reports
not
finished.
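The timeout mechanics described here map onto a small `.gitlab-ci.yml` fragment; `timeout` is the standard GitLab CI job keyword, while the job name, script, and 24-hour value are illustrative, not recommendations:

```yaml
# The job-level timeout must exceed the full scan length, including the
# time to generate and upload the report; otherwise the work is lost.
dast_long_scan:
  stage: test
  timeout: 24h            # upper bound on runtime, not the expected duration
  script:
    - ./run-scanner.sh    # placeholder for the actual scanner invocation
  artifacts:
    reports:
      dast: gl-dast-report.json
```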
A: I guess the thinking with the timeout, then, is that we're really trying to set an upper bound on what the job is allowed to do, because we don't expect it to come anywhere close to that amount. I think the difference with the problem we're talking about now, though, is that we don't know what an upper bound on these jobs will be; we expect them to be very long-running. I'm wondering if this is pointing to the idea that a timeout limit is not really the right approach for this use case, but rather...
B: So, if you're streaming results to GitLab, then your dashboard or whatever can be updating with new vulnerabilities as they come in. One thing that's really nice is, if you have a long-running job, you may have a long tail in terms of the vulnerabilities that are found. You might find a bunch when you first start that scan, right? It finds a bunch of header problems, so on and so forth, and then it just continues, but it finds fewer and fewer vulnerabilities as time goes on.
B
So
that's
why
a
streaming
service
might
make
a
lot
of
sense
where
it
streams
as
results
as
the
job
is
running,
and
that
would
give
you
a
couple
of
things
one
you
could
say:
okay,
it's
now,
it's
run,
for
you
know
ten
hours
and
in
the
last
hour
it
found
no
vulnerabilities
or
no
new
vulnerabilities.
Everything.
It's
finding
is
duplicate
of
what
it's
already
reported,
and
so
you
could
set
rules
like
okay,
if
you
haven't
found
a
new
vulnerability
in
the
last
hour.
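The stop rule sketched here, halting once no new, non-duplicate finding has arrived within some window, is not an existing GitLab feature; a minimal illustration, with the class and all names invented for the example:

```python
import time

class IdleStopRule:
    """Stop a streaming scan once no *new* finding has arrived within
    `idle_window` seconds (duplicate findings do not reset the clock)."""

    def __init__(self, idle_window, clock=time.monotonic):
        self.idle_window = idle_window
        self.clock = clock
        self.seen = set()           # fingerprints of findings already reported
        self.last_new = clock()     # time of the most recent new finding

    def observe(self, fingerprint):
        # Record a streamed finding; only genuinely new ones reset the timer.
        if fingerprint not in self.seen:
            self.seen.add(fingerprint)
            self.last_new = self.clock()

    def should_stop(self):
        return self.clock() - self.last_new >= self.idle_window
```

The injectable `clock` keeps the rule testable without waiting real hours.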
B: So that's one way of doing it. I think the other thing that's important is introducing some idea of a health check. The streaming helps with health, because you know things are running when new vulnerabilities are coming in, but it's still possible that it's not finding anything and it's still running and it's still healthy.
B
So
you
need
to
introduce
some
kind
of
idea
of
a
health
check
that
says:
hey
I'm,
I'm,
a
scanner
I'm
still
running
everything's
good
and
that
needs
to
be
you
know,
get
lab
needs
to
be
paying
every
whatever
five
minutes.
You
know
that
says
hey
this
is
still
healthy
and
then,
if
it
misses
one
or
two
health
checks,
get
labs,
says:
okay,
that's
a
dead
runner
and
stops
listening
for
that
runner,
yeah.
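A minimal sketch of that heartbeat idea; the five-minute interval and the one-or-two-miss threshold come straight from the discussion, but the class itself is hypothetical, not a GitLab API:

```python
import time

class HeartbeatMonitor:
    """Declare a scanner dead after `max_misses` consecutive missed
    heartbeats, where one heartbeat is expected every `interval` seconds."""

    def __init__(self, interval=300, max_misses=2, clock=time.monotonic):
        self.interval = interval
        self.max_misses = max_misses
        self.clock = clock
        self.last_beat = clock()

    def beat(self):
        """Called each time the scanner checks in ("I'm still running")."""
        self.last_beat = self.clock()

    def is_dead(self):
        # Count whole heartbeat intervals elapsed since the last check-in.
        missed = (self.clock() - self.last_beat) // self.interval
        return missed >= self.max_misses
```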
A: I think that's a really good point too, because with DAST, and fuzzing especially, we're very likely to just break the application in some way. So a health check seems like a great way to make sure that the scanner hasn't just crashed the system, and that's why it's not reporting. Yep.
B: Yeah, and certainly with fuzzing there are a lot of different strategies for when you stop fuzzing. Again, it could be you're not finding new vulnerabilities, or, for fuzzing, it's faults, right: you're not finding any new faults; everything you find has already been seen. You could set it up for a duration of time. I know some tools say: you know what, we're just going to go indefinitely, and we're going to kill the process when we have a new build.
B: The way I think about pipelines, if you think about it really from, like, a QA process: the idea of a pipeline is very deterministic, right? I put in code, I'm going to run all the same tests; if those tests are good, I can push this out to a server, to production, in a perfect world. And that's one of the reasons we don't stop a pipeline based on vulnerabilities: because it requires some human interaction to say, nope, I need to put the brakes on this.
B
We're
never
going
to
stop
someone
from
going
out
to
production
because
of
a
vulnerability,
particularly
since
they're,
not
necessarily
deterministic
fuzzing.
It's
finding
faults.
These
aren't
necessarily
problems
that
would
prevent
you
from
going
to
production
or
anything
of
that
sort,
and
so
they
don't
really
fit
into
that
paradigm
of
like
if
you,
if
you
don't
get
through
this
pipeline,
you
should
not
go
to
production.
A: That's probably a point where it'd be good to be very specific, because I see fuzzing especially as two different approaches. We'll have a quick-scan version of fuzzing, which will always run in, say, five to ten minutes; it's intended to be run on every commit, and may or may not block the pipeline based on end-user settings. But to your point, yeah, these long-running scans are never intended to block a pipeline from completing, because they're going to be running for that long period of time.
B: I mean, one of the benefits of the term "job" right now is that jobs are run on your runners, which is exactly how we would be doing this, so for all intents and purposes we are running a job. The only thing that I don't like about the word "job" is that right now it only exists within the context of a pipeline. So there may be a way of saying we're running a detached job, a non-pipeline job, and instantiating a job, or some other word. Or, yes, that's actually a...
B: So you absolutely should look at scheduled pipelines; I played around with that. You can set up a job, whatever your rules are, and then say "only on schedule," and so it only runs on a scheduled basis. So that's actually a way that you could do things like the PKI: a job that says only on a schedule, and then you go in and you put in a schedule, whatever, the first of every month, and then that becomes independent of commits and things of that sort. So that could be a really simple way of doing the PKI.
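The "only on schedule" job described here corresponds to GitLab CI's `only: schedules` keyword; a sketch with an illustrative job name and script:

```yaml
# Runs only when a pipeline schedule fires (e.g. the first of every month),
# never on ordinary commit-triggered pipelines.
pki_check:
  stage: test
  only:
    - schedules
  script:
    - ./check-certificates.sh   # placeholder for the actual PKI check
```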
A: One thing I did also find interesting: I didn't know this existed until yesterday, when I was looking at the API documentation. We can schedule pipelines and manually start them programmatically, and I put a link to a repository where I was doing some experiments there. I'm wondering if we could run the pipeline for a user, as part of the normal CI/CD flow, and start what we call a scheduled pipeline, but essentially with no schedule. We'd just use scheduled pipelines as a way to host it, and then start it manually, because what that would do is decouple the fuzzing, or the long-running pipeline, from the user's "I did a commit of code" pipeline. And then, presumably, we could set the timeout settings for that to, you know, infinity in the short term, or until we figure out how to do better timing out, right?
B
Yeah
and
and
the
other
thing
that
you
can
do
that
I
found
is
you
can
programmatically
create
jobs
in
in
source
code
in
like
in
the
realm
on,
unless
we
could
create
a
job
and
run
that
job
and
it
wouldn't
actually
ever
need
to
exist
in
the
amel
file
we
haven't
played
around
with
that
to
see
like
where
does
it
show
up
on
the
web
interface?
What
happens?
B
What's
your
traceability
things
of
that
sort,
because,
right
now,
if
you
wouldn't,
when
a
job
happens,
you
can
go
back
and
look
at
the
IBM
will
file
and
see
what
the
configuration
was
so
on
and
so
forth.
But
we
could
actually
do
all
that
stuff
programmatically
and
not
rely
on
any
kind
of
yellow
file.
B
The
schedule
pipeline,
you
can
do
schedule
pipelines
right
now
that
are
based
on
the
mo
file
same
I,
don't
know
if
you
were
talking
about
something
you
you're
still
same.
What
you
were
talking
about.
It's
still
based
on
the
ya,
know:
file
it's
just
through
the
API.
You
can
kick
it
off
right,
correct.
A: Correct. So with the idea I'm proposing, users would never see it. This would all be inside of that fuzzing.yml template, or whatever YAML template; this would just be something we, GitLab, would use to write that template ourselves, I think. Okay, so from a customer perspective, I think the two are the exact same, in that users never see them. I think we want to always make sure that what we do here is transparent to users, because I think this is going to be complex.
A: So the approach that that repo is taking is that it's scheduling a pipeline to be run once every year, so essentially never. It sets all the variables that would be needed (I just used "fuzz testing enabled" as an example), then it hits the API endpoint to start the pipeline, and then it can delete the scheduled pipeline after it's done. That's essentially what we're after.
B: Yeah, yeah. I mean, I think the infrastructure is there to allow us to do a lot of these things. The question that I have with that is that it's a little confusing, because we're not really running a scheduled thing in the traditional sense, so we're overloading some of these concepts. And I wouldn't want to leave our implementation that way for too long, because then it kind of waters down what a scheduled pipeline is, and it just muddles how that stuff gets used.
A: And kind of to that point, thinking about the interface we would have to have to this, to make sure that we're not overloading, say, scheduled pipelines inappropriately, or some other interface: I think these are really the key jobs we need to be doing. Users would need to know how these async pipelines are started. They need to be able to see them on an ongoing basis, so they can get that info about, you know, the health check coming in, and just understand what's happening, because this will be using compute resources, presumably using their minutes or some sort of resource allocation. As well as the ability to kill these pipelines, because, like we said, the fuzzer or the scanner could go completely sideways and wreck the system; they need to be able to kill those jobs. Yeah.
B: Yeah, I think there are a couple of questions on the killing-jobs side, right? It's making sure that it stops the use of your minutes, if that's the case. And if your fuzzing is hitting a live site, or a testing website, or whatever, you may want to stop that. So we have to figure out what it means to kill it: is it going to stop all the attacks? Is it just going to stop taking results and let it die on its own?
C: One question, on the 3a area: no matter which technology we use, scheduled pipelines or these programmatically created jobs, that means we can put the interface in a different place, like under the Security tab? We don't have to direct the user to the pipeline page, right? We can just kind of bring the technology behind the scheduled pipeline into a different area.
B
I
mean
that's
that's
kind
of
my
preference.
Is
that
and
I
don't
know
what
the
balance
is
like.
If
we
have
a
lot
of
us
show
up
on
our
own
security
version
of
the
pipeline
page,
then
it's
very
clear,
like
hey
here's,
my
jobs,
there's
my
PKA
job
tonight,
whatever
jobs
they
are,
that
are
security
related
one
of
the
problems
that
I
have
when
you
go
to
the
pipeline
page.
Is
it
just
everything's
a
pipeline
right,
everything's,
a
pipeline?
B: We'd call out the jobs that say, you know, "security scan 1, 2, 3," so on and so forth, and you don't have to go digging through your pipeline into the jobs to find what we just ran. Because I think it'll be really confusing if we kick stuff off and then you've got to go, you know, search through the list.
B: I think the way that it might work, if we use all the existing infrastructure, is that our stuff is all going to show up in their pipelines and jobs pages. But, and I'm just kind of brainstorming here, I think if our job names have a specific name, or a report type, we can query the database and say: give me all jobs named "secure-whatever," and then we can create an output of all those jobs. So ours is basically a subset of what exists on the other part of the site.
A: I think that's a great point. So one of the points that was raised when we discussed this a few weeks ago with some other folks is that this could potentially add more noise to that pipeline and job page versus what is there today. Thinking about it some more, and given what we just talked about, I don't think customers go directly to that pipeline page, generally; I think they go to it via a merge request or a dashboard.
B: I agree with you. At least from my own personal use case, the pipeline page is just, like, everything's in there; if you wanted to go sort through it, you could. Generally, I would go to a pipeline through my merge request in particular, because if I've got a merge request and I keep adding commits to it, or whatever, I don't really care about all the previous versions of the pipeline.
B: It's fine. I think there are other use cases for why you might use that page, to say: okay, how many runners am I using, how many pipelines are in process at any given point. But generally not for interacting with the data; you're not going to go to the pipeline page, click on it, and go through each of the tabs that way. I think you're much more likely to do it through the merge request. But that's my sample of one; that's about it. Yeah.
C: I can talk with the designers from the pipeline stage to see what they think and whether users are using it like that. If we add those things, like if we have a different profile, like a security analysis, and creating it shows up under the Security tab but also automatically under the pipeline tab, is that noise for their target users? If it is, we might be able to hide it, like with a setting or something, and if not, I could just leave it there.
B: Right now it says "All" is a thousand-plus, and this is probably 20 per page, and there's a bunch that are still running; that's a lot of clicking. So I don't know who's actually using this page to do much work, and, frankly, I would imagine even a lot of our customers have a similar kind of experience, where they've got thousands of pipelines running.
A: Well, so it sounds like we're all pretty much agreed on this with the info we have now. Camellia, if you hear something different from the other teams, let's talk about it. But I did want to bring it up at this point, because, you know, this is an assumption, so I want to make sure it's explicit that we'd be okay adding more noise, potentially, to these screens without negatively impacting users too much. It sounds like we're okay with that right now.
B: I don't think we would be adding any disproportionate amount of data, right? I mean, some of these pipelines that are out there, it's like, hey, I'm running linting, right? If you run a linter, you're running it on every commit, every tiny little change, and that's going to be a heck of a lot more noisy than our security scans, which are going to get kicked off on a less frequent basis. Yeah, exactly.
B: I mean, I think the question, ultimately, that I have in terms of how this gets implemented is what the primary interface is going to look like to instantiate these jobs, and trying to understand what that looks like. Because I think the pipeline and all that is, frankly, a lot more behind the scenes, and the question is: okay, if we're going to kick off a fuzzing job, what does that thing look like, to kick that fuzzing job off? How are we going to do that? And I know...
C: We can schedule it immediately, or it depends on our capacity, or wait, for an MVC. After you create this, I'm thinking about just showing a simple list: okay, this is a DAST active scan, this is the target, and it's running, and you can stop it or go to details. What the details show depends on whether we're streaming results or not. And it also quickly lists out, if there are more, what the scenarios are, like which ones are running.
C
So
that's
I,
think
maybe
I
have
more
scenario
can
list
there
and
I'm
now
for
like
for
fading,
is
also
used
targeted
as
kind
of
identifier
for
jobs
or
I
should
have
a
like
job
ID
here,
or
something
that
part
of
North
Korea
to
make
an
ID
ID
like
Peppa
Heidi's
little
purple
eyes,
something
like
that.
Yeah.
A: I mean, at a high level, I really like this. I think this is exactly what we were talking about in terms of that interface to start, monitor, and stop async jobs. Yeah, we'll definitely have to dig into what those fuzzing-specific settings are, but I think they would fit in this paradigm pretty well.
B: I mean, I think, Sam, the question for on-demand scans is how we think about these as products in GitLab. Are we thinking about it as, like, a single security tool, or do we want to call out each individual tool more as a first-class tool? So, like, on the left where it says "On-demand scans": is that going to be DAST scans, fuzzing scans, or do we want to group them? And some of that, I think, is more of a positioning question for GitLab, as opposed to a design problem. Yeah.
A: It's important that we are specific enough that customers understand the value and what each of these pieces of functionality is doing. It's not so critical that we get them split up so much that it looks like each one is its own product, if that makes sense; your subscription gets you all of them. Yeah, I mean, it'd be a different discussion if we sold a fuzzing license and a DAST license and a SAST license, but that's not how we do things today. Yeah.
B: And maybe there's a short-term and a long-term solution, right? Like, while these products are less mature, maybe they're grouped under "On-demand scans," and then, as they mature, we bring them out to their own separate left-hand nav. Anyway, we don't need to go down that whole route right now. Yeah.
A: It's a point we will need to dig into more, but I think you're right; it's more of a longer-term thing, once these get further along. I'm wondering about some of the phrasing: "on-demand" would mean that we can't really put PKI management here, because that's not an on-demand thing. That's more of a continuous-monitoring thing, where we want to say: you know, every day, check to make sure my site is secure. I don't know; it's a naming question primarily, so I'm sure we can do some brainstorming.
B: So, I know we don't have too much time left. I mean, there's a lot we could talk about on these screens. I don't know whether we should set up another meeting, maybe with Derek, or if we've got another one, to kind of go through these and brainstorm what works and what doesn't. What's the best way to go through this? Yeah.
A: That's actually a good segue; we should probably talk about next steps and where we go from here. Yeah. So, definitely appreciate you walking us through the screens that you have now. These look really good, and we identified a number of different areas where we have open questions around this problem space that we need to dig into a little bit more.
A
So
let's
talk
about
what
we
want
to
do
next,
so
we
have
the
asynchronous
workflow
epic
I,
think
creating
issues
under
that
makes
the
most
sense
I.
Think
three,
a
there
is
an
open
question
around.
What
do
we
do
from
a
technical
perspective
of
how
do
we
do
some
of
this
work
on
a
scheduling
basis,
whether
we
can
use
something
we
have
in
pipelines
today,
programmatically
make
jobs?
Something
else
entirely.
B: Yeah, I think that's fine. My personal preference, to be honest, in terms of how to move this forward, would be to work through these designs and try to refine them, at least among the product team, myself and Camilla. Try to refine these designs, and then, once basically the four of us have a good draft of them, get them in front of some of the engineers, and that's going to bring up some of these real technical questions of, like, okay...
B: How do we get that start button, or that stop button, or whatever it may be? How do we get that working? Because, at least for an MVC, we're not going to be redoing a lot of infrastructure, so it's more going to be: how do we implement the designs that we come up with? So I think, for me, the most efficient way to do this is: let's work through the designs, and from there we can go back and look at the engineering.
C: Yeah, I think that will work for me as well. I think we can all start commenting on the designs, and then, later on next week, we can have a sync meeting to see what they think is useful there, and then to see what's technically possible, and then they can start breaking it down. And when we have something more concrete on the engineering side, if this is a big change, I need to bring it to users to do some validation. Yep.
B: I'm not sure what the best way to do this is, because your design basically shows what the ideal situation is going to be. Then it's like: okay, we'll have that, and then we'll have to peel it back to figure out what our MVC is, which I think is fine. I think that's easier than trying to do an MVC and then trying to figure out how we get to the next step. I actually prefer to think of what it looks like in six months or a year, and then say: okay...
B: I mean, the reason the ideal case is nice to look at from an engineering perspective is that it allows us to think a little bit bigger in terms of how we should be setting this up technically. As opposed to: okay, we just kind of monkey-patch it here, monkey-patch it there, and then in three months we're like, crap, we've got to throw all this stuff out. Yeah. And so it's not that we're going to try to do any more work, at least for the MVC.