From YouTube: 2020.05.07 - SAST to Complete working session 3
Description
Final working session to define and scope SAST to Complete.
A
Alright, happy Thursday, glad to see everybody's here. So instead of office hours for refinement, we're going to continue, as we mentioned yesterday, with the working session for SAST to Complete; we're going to continue with deck scoring. What we are left with is one more section of the work that we could not get to before people's brains melted, so I'm going to share my screen. We're going to continue with the same kind of activity that we did yesterday.
A
I have already gone through the worksheet in an attempt to clean it up, so that it is not a weird mismatch of requirements and jobs to be done, and I have also gone through and put in strawman arguments for: Is it defined? How much effort is there? What's the complexity? So let's get to it.
A
All right, we're in "Improve the product", that's where we left off. So, monorepo support: two items to be done. Number one: there is a subset of analyzers which, and I don't think we've looked at all of them with this lens, some of them look for one project and one project only, and that's all they scan, versus some of them go through every file in the file system and will scan everything that matches.
A
So, according to this, the scanner's detection rules: we need the ability for all of our analyzers to scan everything that is within a particular project, regardless of where it happens to be. So one, figure out which analyzers need to be updated, and then we need to provide the update itself. Those are, to me, the two jobs to be done to better support monorepos. You can see the sizing that I put forward: small for defined, effort, and complexity.
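As a minimal sketch of what that per-analyzer update might look like, assuming a hypothetical helper that walks the checkout for project manifests (the manifest set and function names here are illustrative, not the actual analyzer code):

```go
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// manifestNames marks files whose presence indicates a project root.
// The set here is illustrative; each analyzer would declare its own.
var manifestNames = map[string]bool{
	"pom.xml":      true, // Maven
	"build.gradle": true, // Gradle
	"mix.exs":      true, // Elixir
	"package.json": true, // JavaScript
}

// findProjectRoots walks the whole checkout and returns every directory
// containing a recognized manifest, instead of assuming one project at
// the repository root.
func findProjectRoots(root string) ([]string, error) {
	var roots []string
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if !d.IsDir() && manifestNames[d.Name()] {
			roots = append(roots, filepath.Dir(path))
		}
		return nil
	})
	return roots, err
}

func main() {
	roots, err := findProjectRoots(".")
	if err != nil {
		panic(err)
	}
	for _, r := range roots {
		fmt.Println("would scan:", r) // one scan pass per discovered project
	}
}
```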
C
Yeah, that looks correct to me. I would say, though, having looked at and done a little bit of this: do you think that some of the effort is variable with regard to how deep you are in that language or stack, to be able to quickly see, does it do more than one or not? Or maybe I'm misunderstanding this.
C
Like, for example, Elixir is pretty straightforward, it's like umbrella apps, but I knew that because I've done a lot of Elixir. I'm not as deep in Java or Kotlin or some of the others, so there's quite a bit of digging to do to figure out: is this a common pattern or not? So there probably would be some efficiency here with folks picking up the languages or stacks that they're most comfortable with or most knowledgeable about.
A
Mash every single integration test we have together into one repository, make multiple copies of every single one of those projects in sibling directories, and then run those against every single analyzer we have, to see if it caught every single project that it was supposed to catch.
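A sketch of that mashed-together fixture idea, with an invented directory layout and a placeholder runAnalyzer helper standing in for however the integration suite actually invokes an analyzer (os.CopyFS assumes Go 1.23+):

```go
package monorepo_test

import (
	"os"
	"path/filepath"
	"testing"
)

// runAnalyzer is a placeholder for however the integration suite
// actually invokes an analyzer image against dir and parses its report;
// it returns the project roots the analyzer claims to have scanned.
func runAnalyzer(t *testing.T, dir string) []string {
	t.Helper()
	return nil // stand-in only
}

// TestMonorepoDetection copies every fixture project into sibling
// directories of one synthetic monorepo, twice each, and asserts the
// analyzer reports every copy rather than stopping at the first match.
func TestMonorepoDetection(t *testing.T) {
	mono := t.TempDir()
	fixtures, err := filepath.Glob("testdata/projects/*")
	if err != nil {
		t.Fatal(err)
	}
	for _, fixture := range fixtures {
		for _, suffix := range []string{"-a", "-b"} {
			dst := filepath.Join(mono, filepath.Base(fixture)+suffix)
			if err := os.CopyFS(dst, os.DirFS(fixture)); err != nil {
				t.Fatal(err)
			}
		}
	}
	scanned := runAnalyzer(t, mono)
	if want := 2 * len(fixtures); len(scanned) != want {
		t.Errorf("analyzer scanned %d projects, want %d", len(scanned), want)
	}
}
```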
B
One kind of interesting distinction that I think we haven't quite addressed on this is the difference between a monorepo as a combination of different languages and a monorepo as a combination of projects in the same language. You can obviously run it against the main Rails app right now, and that's going to run a JavaScript analysis and a Ruby analysis, and that's a monorepo. But that's not really what we're talking about here.
C
I totally misread this. That's a good call out. Also, your example of mashing them all together should probably work; it's more so having it multiple times, right? I think the common pattern I've seen is having the front end next to the back end, when you have a SPA, you know, a single-page application, for the front end, and then maybe multiple AWS Lambda functions, which would be multiple Python projects, and sometimes we have a shared library. You know, so there are different ways.
C
Yeah, I think that makes some sense. In this there are, you know, open questions about whether this happens in a different way for each analyzer, or if we're able to abstract it into a common pattern. So I have some questions there. So it makes sense: medium on effort and complexity. Again, that depends on whether you're doing a generic solution for all, or if you're touching each one.
A
Ownership over the notes column: if you have some notes that you want to make sure are recorded, or to record in advance of us getting there, please, please contribute. All right, I'm moving on: mechanism for vulnerability research to improve rules. These are existing scanners, so this is separate. This is, as a reminder, from a couple of days ago.
G
I think that's a fair point, and I think that's probably something we should call out here. I think both cases are valid and we will see examples of both of them, and I think this actually goes to answering that question of what would be publicly available versus what would not. You know, if we're improving an existing rule, I think there's a very easy case to say we should just contribute back to the open source project. In terms of net new rules, there are certainly things that the community will create, and I imagine, if we do this right, there would be a repository somewhere where people contribute rules, and then there certainly will be things that we will do that will be part of GitLab secret sauce. When you look at all of our competitors, every single one of them is doing some form of this, where they have their layer of intelligence on top of whatever scanners they're using. We will certainly do that as well, and this is one where I would encourage us to look at the longer timeframe.
F
Yeah, I'm just a little bit lost here. So we are using a wrapper around third-party linters, I mean suppose Brakeman or other linters, right? So what are we doing here? What do you mean by injecting rules? Here we are just using the third-party tools. And the vulnerability research, I mean, what direction are we going for with this? I'm just trying to understand.
A
I'll try, and let others amend it. For me specifically: the third-party, open source scanners that we're using right now, we're using them for two purposes. One is the engine used for detection of vulnerability findings in the projects which they can scan, and secondly, the rule definitions that they are using as a part of that detection engine itself.
A
The idea behind this, at least to me, is a decoupling of those concerns, so that the mechanism, which is the capability, is separate from the data, which are the rules themselves. And so, when we're talking about injecting rules into an analyzer, we are talking about amending the capabilities of the underlying scanner itself, so it can either detect new things, or it has a different way of detecting what it can already detect itself. Okay.
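A minimal sketch of that decoupling, assuming a hypothetical external ruleset file that gets merged into the wrapped scanner's own rules at startup (the file name and types are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Rule is the data half of the split: what to detect, independent of
// the engine that does the detecting.
type Rule struct {
	ID          string `json:"id"`
	Description string `json:"description"`
	Pattern     string `json:"pattern"`  // engine-specific payload
	Severity    string `json:"severity"` // Low, Medium, High, Critical
}

// loadInjectedRules reads rules supplied from outside the analyzer
// image, so research output can ship without rebuilding the engine.
func loadInjectedRules(path string) ([]Rule, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, nil // no injected rules is a valid state
		}
		return nil, err
	}
	var rules []Rule
	if err := json.Unmarshal(data, &rules); err != nil {
		return nil, err
	}
	return rules, nil
}

func main() {
	// Built-in rules come from the wrapped scanner; injected ones extend
	// or override them before the scan starts.
	injected, err := loadInjectedRules("sast-rules.json") // hypothetical path
	if err != nil {
		panic(err)
	}
	fmt.Printf("injected %d extra rules\n", len(injected))
}
```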
B
Yeah, I guess without getting too deep into solutioning here, I'm not sure what ground we want to cover here, because it's a discovery issue, so we'll have to figure out exactly how we plan on doing this. I think that the idea is fairly well defined, but there are still open questions. So I guess I'd still say medium for this, yeah.
A
Okay, when we were talking about this on Tuesday, we mentioned there's two ways to do it, or as I jokingly referred to them, the right way and the easy way. So one of those is the creation of a brand-new AST-based scanner, which is something that is currently under research. The other is a massive regex engine. So those were the two ideas that have been put forward as a way to approach this. The other two tasks, then, I think are common regardless of which way, or which ways, we're going.
E
For it, yes, I have several questions about the regex option. Like, how seriously are we considering that? That's kind of unclear to me. And the vulnerability research, potentially, right, some of those projects detect things that they found, and if we went this route, would we be shipping with close to an empty rule set of things that we're going to be able to detect, since we're talking about having the customers write them themselves?
G
So I'm going to try to unpack some of these things. I'm going to keep it short, because we could talk about this for hours. So no, largely I don't want to go the regex route, for all of the reasons that anyone who's really written regexes knows. Largely, this is looking at doing this the right way. This is an area of research and, in fact, this is largely what GitLab, or GitHub rather, was talking about yesterday with their CodeQL language.
G
The Cisco Talos group, Microsoft itself, is known for having a very robust security research group. So creating a way for us to then have those partnerships with research groups like that, so that they can directly influence and impact our scanners, and when you take their research and our reach, by having control and access to a lot of code sitting in repositories, that together becomes a very large net positive. Whereas, this is why no one individually can do this.
A
Okay, then refocusing this, hold that thought. So if we were to do an AST-based scanner, this would be presuming that one is identified, and therefore this becomes a new scanner that we would be integrating, not one that we would go and write. So if we were to refocus this, the scope that we would have, and I'm putting in a default for this, is that we would be integrating a new scanner, whether it is a regex engine or it is an AST-based engine, or both.
B
So I wonder if we want to, maybe this is part of it actually, but I think a prerequisite to this would be defining either a common query language or inputs to that. If we find an out-of-the-box AST-based scanner, then maybe that gives us one, but we probably still need some kind of interchange format.
B
Of course, in that case, I would say the engineering effort for an AST scanner would actually be pretty small if we just found one. I don't think we will, so we're either going to have to build it, or build a regex one. At the moment, I almost feel like we should word that differently, defining the outcome of whether there is one, or whether we're going to build this.
A
Right, benchmark projects. Now, this is not the creation of benchmark projects; this is the integration, so they're part of our build process. So if we're going to release a new version, how is it tracking against our benchmarks? Are we improving, or are we staying the same, or are we getting worse? In my head, this is analogous to us integrating integration tests as a part of our build process as well.
A
I'm going to come back to the telemetry block, because I think we're going to spend the most time there, because there's a lot there that we captured. I'm skipping down to 102, "all analyzers report severity". So, refresher for everyone from Tuesday: we have a delightful mix of different types of tools that are available to us within our analyzers, some of which tell us the severity of the findings that we have, and some of which don't.
A
So the ask of this is that we make it more uniform, so that regardless of what language or framework we happen to be trying to scan, we're going to get the benefit of a very opinionated definition of how severe that finding is, a common meaning of whether that is a Low, Medium, High, or Critical severity, and therefore we're no longer reporting Unknowns.
C
So the idea here is to update the analyzer to have like a map, or you know, a data file, that would say: okay, this one is this severity, etc.? Yes? Has there been any talk about actually engaging with the upstream scanners? It'd be interesting to actually give back to the community, to provide this to them, and then it would just be exposing it, right?
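A sketch of the data-file idea C describes, assuming a hypothetical per-analyzer severity map applied as a post-processing step over the scanner's raw findings (the rule IDs are made up for illustration):

```go
package main

import "fmt"

// Finding is a trimmed-down stand-in for a report finding.
type Finding struct {
	RuleID   string
	Severity string // empty when the wrapped scanner reports nothing
}

// severityByRule is the per-analyzer data file: an opinionated mapping
// from rule ID to severity, curated by someone who knows how to score.
var severityByRule = map[string]string{
	"hardcoded-credential": "Critical",
	"weak-hash-md5":        "Medium",
	"open-redirect":        "High",
}

// applySeverity overrides or fills in severity so that every finding
// carries Low/Medium/High/Critical instead of Unknown.
func applySeverity(findings []Finding) []Finding {
	for i, f := range findings {
		if s, ok := severityByRule[f.RuleID]; ok {
			findings[i].Severity = s
		} else if f.Severity == "" {
			findings[i].Severity = "Unknown" // explicit fallback
		}
	}
	return findings
}

func main() {
	out := applySeverity([]Finding{{RuleID: "weak-hash-md5"}})
	fmt.Println(out[0].Severity) // Medium
}
```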
G
To add one additional thing: the vulnerability management group, which is interacting with vulnerabilities... vulnerability management, thank you, I had to walk through that in my head. Vulnerability management is working through this as a concept. Basically: severity is a helpful thing to have. There are scoring mechanisms for that that exist, like CVSS. However, the problem is organizations have different opinions about how important things are or not.
G
So it's one thing to have the metadata about that, like what kind of rule it is, what kind of severity, the type of exploit, or anything that you would use to determine how risky something is. Those are things that we should absolutely be collecting and getting scanners to put in, to pass through to our scanner reports. The other side of that is: cool, we found a vulnerability.
G
We have the metadata associated with it, which is severity, whether it's OWASP Top 10, certain classes of vulnerabilities, etc. And then vulnerability management wants to go and create basically a scoring system where an organization can set their different settings and risk tolerances and all of that, and get a magic sort of score of how risky it is for their organization. So I sort of separate these two things. There's the "we just need the raw metadata", which I agree the scanners should provide.
G
That is an idea that the vulnerability management team is considering. They're also looking at letter grades; there are all sorts of gradings of risk that they can do. What this is really focused on, from our perspective, is: let's make sure that we have consistent and rich metadata associated with findings that come from our tools.
C
The only thing, you know, there's that open question: can we give back upstream, right?
C
It seems like it's pretty well defined, if we were to just situate it in the analyzer, and it doesn't seem very complex either. Although the effort, I think, is probably medium, in the sense that it's somewhat tedious: you just have to go through each analyzer, look at all of the different things it's providing, and then have someone who knows how to score appropriately create these data files.
G
Yeah, I agree with that, and that's something that I need to help David articulate, in terms of what it is that we need from a vuln research standpoint to make our scanners compete with some of our competitors from a quality standpoint. So yeah, totally agree, happy to help out with prioritizing that. Okay.
G
I think that does go to part of what this issue is: I don't think the answer is the same for all of the scanners. Some scanners we will have a much higher confidence in, and others we won't, and we're trying to decide what data we need to supplement, what data we want to trust, and what data we want to completely override from our scanners.
A
Okay,
I've
had
of
the
question:
what
fields
do
we
will
we
overwrite
or
amend
as
a
post,
processing,
stuff,
post,
processing,
stuff,
okay
and
then
the
other
two
steps
here?
So
we've
got
a
data
file
that
we've
we've
made
room
for
in
each
of
these
analyzers
now
we're
gonna
have
to
do
the
act
of
actually
doing
the
post
processing
stuff
and
then
there's
an
integration
test
updates
that
need
to
be
applied.
A
Where we make building a separate concern from the act of scanning. And so step one of this, to me, was: audit which analyzers care about building, and let's figure out for which ones building is actually part of the workload of the analyzer itself. And then, as we discussed on Tuesday, either we need to make building optional, or provide a package, rather than a container, to inject the analyzer into an existing build workspace.
C
There's one more piece to this, in that there's compiling with some of them, there's fetching their deps in some of them, and then there's also injecting/fetching the scanner deps sometimes, like when it's using build output. Does that make sense? So, for example, Security Code Scan: in those cases we're also modifying their projects to add the dep, yeah.
C
I have a question with regards to this, and that is: how sensitive are we to making setup harder? Some of my ideas around this basically push some upfront work to the user and their project and how they configure it. But how much work are we okay with requiring a user to do to set this up? You know, is this going to be an optional path, or is this something where at some point down the road we'll deprecate ever building?
C
So is the split an optional thing, kind of like right now with SpotBugs, where you can set a flag so it doesn't try to compile because you've already compiled, so it's an optional thing? Or are we looking to actually stop building altogether, and this is the path towards that? What is needed for Complete? Does that make sense?
B
I think it would be incremental, where the first step would be making the build step optional. Mm-hmm. The next step, of whether or not we're completely removing the build step: at least in my opinion, we shouldn't do that, because we want to favor convention over configuration, and that means things should work out of the box, which they do right now. So it would be a bit of a regression to remove that capability. That's just my opinion. Yeah.
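A sketch of what that incremental first step could look like inside an analyzer, assuming a COMPILE-style opt-out variable (the name mirrors the SpotBugs flag mentioned above, but its use here is illustrative):

```go
package main

import (
	"fmt"
	"os"
)

// shouldBuild implements the convention-over-configuration default:
// build unless the user explicitly opts out because they already
// compiled in an earlier CI stage.
func shouldBuild() bool {
	// Treat anything other than an explicit "false" as "please build
	// for me", so the out-of-the-box experience is unchanged.
	return os.Getenv("COMPILE") != "false"
}

func main() {
	if shouldBuild() {
		fmt.Println("best-effort build, then scan")
	} else {
		fmt.Println("skip build, scan existing build output")
	}
}
```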
G
So here's how I would think about this: the way it works today, where we do a best-effort build, works for, let's say, 80% of cases. That is a magical experience, that it just works. That already is better than a lot of our competitors, where you have to zip up built code, or compiled code, and send it off to some service to go and find vulnerabilities.
G
The problem is that there's the 20% that's really, really hard, and when you get into the enterprise, as I've mentioned, whatever monster a customer has assembled in terms of their build, or their implementation of how they work, maybe it's a monorepo, whatever their unique monster is that they've got cobbled together, that's the part where we're never going to support every single possible thing that a customer does, whether it's a good practice or a bad practice. So instead it's: okay.
G
We're going to have a great answer for 80% of people, and if you fall into that 20%, where you've done something custom, we need a structured way for you to go and define a build for us to then go and scan, and have a way for our scanners to know: hey, don't try to build anything, use whatever the build step was.
C
Yeah, that makes sense. So cover the 80% case and then create an escape hatch of sorts for the 20%, where they have a way to do it. Okay, yeah, that makes a lot of sense, and that also will still simplify a lot of what we're trying to do, because then we can just really focus on the simplest case, the most common case, with our analyzers, and then just make sure that there is a way out easily if they have some, you know, special unicorn of a project. Got it.
C
Okay, so the effort is medium, because we've got to look at them all. Complexity?
C
I was thinking medium, because they don't all work the same, so you have to dig in deep and think critically, for each scanner we're wrapping, about how it works, and you've got to dig into how that language, stack, etc. works each time when you're doing the audit. So that was why I was thinking that. But if you think it's a small one-time thing, that's fine.
B
I think that makes a lot of sense, and when in doubt, size up, so medium works for me.
C
There's also the consideration, and I'm going to use Security Code Scan as an example: if we make it optional and they misconfigure their project, do we want to also create a tool to basically validate that they've configured things correctly? Or is it: you've gone into the escape hatch, you're on your own now?
G
Once you've created that, I do think there is a particular thing we can do to help them along that journey, which is: today, it's really, really hard to know what has been scanned or not. So, for example, I was in a call with a customer where they didn't believe that we were scanning things, and as it turned out, because of weird things that they were doing with POM files and other stuff that is truly over my head, our scanners were running, but they weren't actually scanning dependencies, and so they didn't have a good sense.
G
They knew that the dependencies they were using had tons of vulnerabilities in them, and they were baffled when our scanners came back and said: no, looks good. So we need a way to help tell them what has been scanned, which is something that we very easily could go and do. There are lots of options here.
G
One thing that's in my head right now is this idea of, like, a code coverage type test: having a code-coverage-like visualization to help you understand what has been scanned or not, so that you know, like: hey, look at this build tree, or this file tree, and here's what was scanned, what was not scanned, and what scanner ran where. So I think there are ways that we can help provide that information without getting into the weeds of what a customer's particularly doing.
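One lightweight form that "what was scanned" report could take, sketched here with an invented manifest structure (the paths and scanner names are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CoverageEntry records, per path, whether any scanner visited it,
// which is the information the customer in the story above was missing.
type CoverageEntry struct {
	Path    string `json:"path"`
	Scanner string `json:"scanner,omitempty"` // empty means not scanned
}

func main() {
	// In a real analyzer these entries would be emitted while walking
	// the file tree; hard-coded here for illustration.
	manifest := []CoverageEntry{
		{Path: "app/", Scanner: "brakeman"},
		{Path: "frontend/", Scanner: "eslint"},
		{Path: "lambdas/"}, // visible gap: nothing ran here
	}
	out, _ := json.MarshalIndent(manifest, "", "  ")
	fmt.Println(string(out))
}
```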
B
Yeah, yes, I mean, in some cases we might have to do cross compilation, so that is slightly higher complexity, but I think the definition is pretty straightforward. The effort would probably just be exposing an artifact during the build stage, and the complexity is what would probably be the highest. So I'd say medium across the board.
C
Right, yeah, so I think it's at least M's down there. I think it might be even a little bit easier in some ways, because I think we're building and delivering already, but yeah, I think there's some open questions too. So an M is good for defined, because: how are we going to do this? With, like, a YAML file, or by making it more automatic?
C
Looks good. I think potentially this might be more of an escape hatch anyways, so it's already pretty clear. It's like: oh, you just need this executable, then they can do it in whatever bespoke offline environment they have; it's just something they've got to put in the environment. But updating documentation, good call out.
B
The reason that we didn't end up going this way in the first place for a lot of our tools is because we don't have self-contained analyzers. What our analyzers do is wrap an underlying tool. So unless we're shipping a copy of ESLint with the ESLint analyzer, it's not going to be a single executable they're grabbing.
C
From what I just heard, yeah, I think that the definition is at large, because it's like: how are we going to do this? I don't know. Both effort and complexity are also spaceships, but that's, I think, where it's squishy for me: how are we going to go about doing that? What are we going to do with those deps? At first I was like: oh, it's just a Go executable, but it's not that anymore.
G
What I will say is there's a whole lot of discussion to be had in this whole area. What we've got to get towards is: there are a few pieces of key data, how people are interacting with our vulnerabilities, and then knowing how many vulnerabilities there are and what scanners they come from. If we were able to get access to that data and to that telemetry, we can make a lot smarter decisions around what languages we invest in, what scanners we invest in, and how we build features and functionality to make interacting with findings easier and more actionable.
G
A number of groups in Secure have agreed that this approach is how we want to approach SAST, DAST, container scanning, and a few others, and so we are working with, I think, the Enablement group to get these things set up so that they can be consumed properly, through either Snowplow or, what's the other one, usage ping. And it's a matter of our teams, SAST, DAST, container scanning, needing to make sure that we're doing the work to actually fire those calls to Snowplow or to the usage ping. Okay.
G
The functions that we need to call to do usage ping and to do Snowplow calls exist, but we, as a stage, as Secure, need to make sure that we're doing it in a way that sends the data that we want. So, for example, right now the way that we count jobs, and the names of jobs, doesn't catch everything, and it isn't the data that we want; you have duplication problems. So we just need to be consistent, and then actually go and implement those calls.
C
Ah, yeah, and then do we know what metadata about each of these items we want? Like, with the first two, is it just counts, or is it deeper data? I guess another question I have about telemetry in general, sorry, before: do we need to provide configuration for folks who want to say, no, I don't want to send this?
G
I know there is an open issue right now about figuring out the scan type and the job data; there's discussion on that issue. In fact, I was just looking at it last night, about what data we want to send. And then, I believe, the vulnerability management team is working on tracking the dismissed interactions with the vulnerabilities, which is also part of first-class vulnerabilities.
G
The thing I would say about the SMAU piece is that if we do the other things, the SMAU piece is very easy. We have a way to do SMAU. It's that what we consider feature usage in Secure right now is a complete and total mess. So getting these other pieces clarified, and in the form we want, then it's very easy to simplify down SMAU, and actually, I think SMAU...
E
Going back to line 93, sorry, and we kind of covered this already, but what does it mean to be remediated, I mean, as far as what we're tracking? Because someone could have an MR, they can run the security scan, they can see: oh hey, I've got a problem here, I'll fix it, run another scan, it's gone, send up my MR, and that is never going to get recorded as a vulnerability finding.
E
Gotcha, okay, okay, sure, sure, sure. But I mean, some of this is improving it, though, right? Because with standalone vulnerabilities you can mark one as fixed as well, without that issue workflow. So that's another way that it could be, you know, remediated. But I think that could answer some of the questions I had about what exactly we're expecting to track there. So thanks.
A
In the interest of explicit being better than implicit: going through this and actually putting in notes, as far as what the definitions of the telemetry items or the metrics are, clearly showed where we have composites, where they're really the combination of two other distinct atomic units that need to be captured. And so, if it's something that is like a failure rate, that is something where we need two other items, and the failure rate can be done in Periscope; we don't need that generated for us.
G
Of seats, okay, yep. The way that SMAU is calculated, as well as ways that, when we get the telemetry in, it might have your user ID, your association with an account, and then we can do things on the other side where we say: hey, show me unique accounts, or show me unique users. So we can count the same usage ping in multiple ways, just by the way we slice the data in Sisense.
A
Okay, assuming that everyone is working in every single milestone and no one is taking time off, so with those big caveats: the baseline measure of velocity that I'm using, on a per-engineer basis, is that they can achieve nine points of work in a given iteration. Five engineers means 45 total points of effort in a given month.
A
We're going to stay on for a while. So, I am taking the math that I was providing earlier today in the staff meeting, that we had a year's worth of work, and what we had done already is that I was taking that 45 points and cutting 1/3 off of it for all that other stuff. So 30 points of work for SAST to Complete per month is what I was baselining that estimate off of. Okay, so yes.
I
And you know, I'm just trying to figure it out, because we're going to have to have this conversation at some point with PM, as well as, you know, my management, and so I just wanted to kind of get a feel. I was just doing some quick calculations, and I'm looking at it and I'm like: these numbers don't look too good. But that's alright, right? The whole point is to understand the workload; it's not to say whether we can actually do that in any measure of time.
G
There will be things that will be removed, and we will do parts of these things, and so now I think it's a question of how much of each one of these items we want to do, and in what order. So remember what we're trying to do: by the end of the year, we need to come up with our submission for the next Gartner Magic Quadrant.
G
What do we want to have done, and how can we be smart in ordering and slicing and dicing, to both compete with our competitors, hit our strategic goals, and do all of that in a way that creates something that our customers actually want to use? So this, to me, is the funnest part of management. I know some of you may not want to do anything related to this, but this is the: how do we take the Rubik's Cube and really make one plus one equal ten, and not two?