From YouTube: April 2023 Secure Stage Strategy Q&A
B: All right, well, welcome to our Secure stage strategy Q&A. Tim, you've got the first question. You want to vocalize it for us?
C: Absolutely. So, in regards to event correlation for vulnerability findings: I know a few releases ago we gave the ability to add vulnerabilities.
C: Will vulnerabilities also adopt that work items capability? The reason I'm asking is, I know issues have the ability to have dependencies between one another, and I didn't know, if work items were to envelop vulnerabilities, whether the ability to stitch together interdependencies between vulnerabilities would also be available.
B: Yeah, that's a great question. I can take at least a first pass at this; anybody else, please feel free to add in. But as far as I know, I don't think we have any near-term plans to work on this. Not that it's not a valuable feature; it's just that, in comparison, a lot of the other features we have address more critical needs for customers.
C: Oh, cool. So work items and vulnerabilities will remain completely independent from one another.
D: I guess real quick, Sam: would that be a question for the Govern meeting that's right after this one, if Alana is going to be there?
E: My question here might not be as much related to the video, but we have a few security marketing events coming up, and the submissions are due. Given that we have some of the field crew in here,
E: it might be a good time to talk about what we might want to show off at AWS re:Inforce, Black Hat, or any others. I figured we could maybe dump some ideas in here. I'll add a few, including API security with DAST, particularly with Lambdas, but I'd be curious about others and what we can maybe highlight or show off.
B: Oh, that's hard to say. What are your top features, Hillary? Continuous scanning, I think, is a big one. Some of the Govern stage features are really good ones to highlight; for example, we have upcoming plans for a group-level dependency list. You know, none of these are earth-shatteringly new in the broader market, but they're new for GitLab, and in terms of ease of use and shifting security left, you know, because we are GitLab, the SCM and CI tool,
B: as we add these features, it makes it really easy to have them on by default for all of your projects. So I think you could take just about any feature we've got on our roadmap, and even if it's not something completely new to the broader market, the fact that you can suddenly have that capability for every single one of your development projects is a really big deal. But yeah, I feel like continuous vulnerability scanning is a really big flagship feature.
B: Some of the features we just released around license compliance are a pretty big deal as well. We just revamped not only our license scanner but also added license approval policies, so we're letting you manage more than just security. I think that could be a talk: going beyond security into license compliance, if we're focusing on past features. And then, coming up in the future, I think you could talk about security and compliance in general; that might make another good talk.
B: I know I'm shifting the focus over to Govern, because that's my stage, so my mind's there a little bit more, but we have a lot of big features coming out related to locking down the governance and compliance of GitLab across the board, with a new compliance adherence report page and more strict controls for actually enforcing some of the policies we have. So, I don't know, I'm just spouting out a few ideas off the top of my mind.
D: Yeah, great question. I think, depending on the show and the audience, you're often dealing with people who range from "what is a GitLab?" to "I've used Ultimate before" or "I use Ultimate today," and the answer is always going to be different for those people. For the people who are maybe less familiar, or haven't used Ultimate before,
D: I think it's really hitting the platform message: that we're trying to help you collaborate and actually resolve the security problems you have, and that having all your things in one platform really is a huge benefit. If they're more familiar with the product, the bigger recent changes can be a good way to sort of refresh them, especially if they maybe evaluated Ultimate a long time ago.
D: I guess, for important things there, I'm kind of struggling for the list, but maybe the pretty significantly improved SAST workflows, and, coming soon, inline findings in the changes tab of the MR. I think that type of thing is about just bringing the platform together. Secret detection auto-response is pretty neat.
D: Yeah, Derek, I don't know if you had other thoughts to mention; browser-based DAST, man, we should totally talk about that.
F: I think that would be an interesting one to talk about a little bit, just because historically it's very difficult to scan single-page applications, or modern JavaScript frameworks in general, with DAST; it just doesn't work very well. With our browser-based analyzer we're trying to solve that, and we're doing much better than anything else I've seen out on the market right now. So, you know, that could be an interesting one to talk about.
F: I do like the API security idea as well, because that's a big area that is being missed in a lot of the companies I've looked at. The customers I've talked to just don't really do anything with their APIs, and it's a big risk that they're taking there.
D: I think also, maybe, on the message of what you should say at a trade show: Brian, Mason, Fern, folks like that will be the best people to sort of give the authoritative answer on the message you want at a particular show. It's just to mention that they're both super engaged in what we've been doing lately, and I wouldn't want to kind of freelance about the show message.
A: I've got a big batch of them. Sorry for the weird formatting, by the way; I put them into Slack as I watched the video. So I'll run through mine real quick; they're mostly tactical. I saw that we are again emphasizing Code Climate as our code quality engine, which I give a big thumbs up. I've seen some limitations in it, especially outside of its core languages.
A: It's just not as great as I would hope, right? And I think the video said something, I forget the verbiage, but to my brain it suggested that we will encourage customers to use third-party code quality engines. Is that an accurate assumption, or do I have that wrong?
D: So generally, yes, the difference maybe being that a code quality engine is often a different kind of thing than, like, a linter. So a lot of what people are getting from Code Climate is not
D: the type of thing that you actually get if you buy a code quality product. What you're getting is linting: you're getting ESLint running on your code, you're getting RuboCop, all these things that, if you're a competent development team that wants to use any kind of linting, you already have an ESLint job for.
D: So we're really trying to figure out how to make it easy to get that into the code quality widget. If you're using ESLint, you probably already have a config file; you have your own opinions on which version of it to use; you're not going to want to run Docker-in-Docker. You should just be able to take the reports from ESLint, sort of mapping the existing developer workflow that we validated from UX research to what the tool is actually doing.
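That ESLint-report workflow could look something like the following minimal CI sketch. This is an assumption-laden illustration, not a shipped feature: it presumes a formatter that emits the Code Climate-style JSON the code quality widget consumes (for example the community `eslint-formatter-gitlab` package), and the job, image, and file names are placeholders.

```yaml
# Hypothetical .gitlab-ci.yml job: reuse an existing ESLint setup and
# surface its results in the merge request code quality widget.
eslint:
  image: node:18
  script:
    - npm ci
    # "--format gitlab" resolves to eslint-formatter-gitlab, assumed to be
    # installed as a dev dependency; it emits Code Climate-style JSON.
    - npx eslint --format gitlab --output-file gl-codequality.json .
  artifacts:
    reports:
      codequality: gl-codequality.json  # picked up by the MR widget
```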
D: If you get more toward, like, what a quality engine finds, you know, potential code smells and things, that gets a little bit more toward, like, a Sonar or something like that. That's a place where, candidly, we won't be able to invest much right now.
D: There are some interesting approaches that could pay off, but that area just has less definition. We're really trying to get past the "Code Climate blocks adoption" issue by at least hitting kind of parity with what it's doing as a first step.
A: That's awesome, I appreciate that. And I do hear more requests around things like "we want to understand our McCabe cyclomatic complexity," as opposed to some of the other metrics, right? And I'm a big fan of: if a feature is not going to be amazing, it's actually probably beneficial to not have it there.
A: So actually, that's really good to hear. The other one Sam's already answered, but I'll verbalize it here. I saw some of what I'll call continuous scanning, or passive scanning I guess, the way I think of it: things that don't happen in a pipeline, like the container registry and secrets. We are a repository; I think that's been something that would be fairly natural for us for a long time. I love seeing it in there, and my question was: will it be SaaS-first or self-managed-first?
A: And Sam has answered that it will be SaaS first, and that we'll want to do some thorough testing around performance. So I'll consider that one addressed.
B: So, just to clarify: we're trying to release it for both at the same time. That's our current default plan; we would only deviate from that if there was a good reason. So, okay, hopefully it won't be SaaS-first. If everything goes to plan, then it'll go out for both at the same time.
A: Okay, no worries. I would totally understand it if it were SaaS-first; I think our customers are beginning to have a natural expectation of that, because we manage all the moving parts. So, let's see, I'll have two more, I promise, and that's it. I saw that we are consolidating the many SAST scanners, basically, onto Semgrep, right? And it's a very corner case, but it does come up from time to time.
A: So what I was thinking is: there can be cases where there's, you know, Python. Commonly it's a build driver; it's not your project's application code base. And it will cause, like, the static analyzer for Python to run; it obviously wouldn't cause a dependency scan, because there's no package manager. Would the auto-detection be something that's perhaps a small, you know, story that's part of a larger epic? Just because I can imagine, if everything is consolidated on Semgrep, it may be alarming to some users to see, like,
A: "Oh, the Python scanner didn't complete," or it errored out, or whatever. Does that question make sense? I tried to word it in a linear fashion; not sure if I did.
D: Yeah, maybe I'll restate it, and you can correct me before I answer. It's sort of like: there's often non-production code that people don't really care about, and I think you're mentioning jobs running, but maybe the issue is also, if there's a security finding in there, is it worth paying attention to? Is that kind of it? Am I on target?
A: It was actually a little more fundamental. Forget about whether there's a security finding or not. Let's just say that, for whatever reason, whether there's a security finding or there are no security findings, there's something about that file that causes the scanner to exit incorrectly and appear to be an error, right?
D: Some kind of error in the operation of the scan because of that non-production code. Yeah, okay. So, on non-production code more broadly, I think we need to do a better job of being able to kind of split that out. You can use SAST excluded paths to sort of ignore certain directories or certain glob patterns if you want to, but that's not a place
D: where we're super strong in trying to differentiate between, you know, which code in your repo is which; we kind of treat it all as the code you want to scan. So we need to think about how to improve that workflow. But if you get more toward errors in the scans, this is a surprisingly thorny issue, because of the way that reports are processed. We assume that every report is complete, so every scan has to be sort of a scan of the whole repo.
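The excluded-paths workaround mentioned above might look like this minimal sketch, assuming GitLab's standard SAST template and its `SAST_EXCLUDED_PATHS` variable; the `build_scripts/**` glob is a hypothetical non-production directory.

```yaml
# Hypothetical .gitlab-ci.yml fragment: keep non-production code
# (test suites, build drivers) out of SAST results.
include:
  - template: Security/SAST.gitlab-ci.yml

variables:
  # Comma-separated path globs to exclude from results.
  SAST_EXCLUDED_PATHS: "spec, test, tests, tmp, build_scripts/**"
```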
D: So then, if you have errors, it's like: well, if you had a full scan, and then a scan with an error, and then a finding isn't there anymore, is it resolved? Would it be, if you hadn't had the error? How should you handle that? That's a thing my groups actually talked about as recently as last week: how incremental scans and a scan with an error kind of become the same type of problem.
D: So I guess I don't have an answer for exactly how to do that, but we're trying to figure out how we can handle a scan that's 99% complete but has an error because of a Python file or whatever, you know, without having confusing behavior through the rest of the system.
A: Okay. There's also a side philosophical debate, which I didn't bring to this call: even if it's just a build driver or a test suite, does that represent exposure to vulnerabilities, right? So let's not get into that; I'll bring it up if it becomes a hotter topic. And then the final one, I think Derek's already got the answer, but on API fuzzing: one of the most common, I wouldn't say objections, but kind of "oh, wait" moments with prospects that I deal with is: "Hey,
A: we really want that, but we don't have a starting point, right? We need a test suite; we don't know how to generate it. This is brand new to us, but it's of interest." And what I heard, I watched this last night at 2 A.M., is that we will basically do some interrogation.
F: Yeah, I think it depends on how you look at the idea of a seed. With API fuzzing we don't really use the corpus, or a seed, the same way that you would in coverage-guided fuzzing, so the OpenAPI spec that we'll be generating is really just, like, the guide for the scan. It's telling you what the API calls are, what it's expecting, all that kind of stuff.
F: So it is, in some sense, the corpus or the seed, but it's also a little bit different; we don't really call it that in terms of documentation and how we present it to customers. It's really more of, like, the guide for the scan, to make sure that we make all the right API calls and aren't missing anything, since that's kind of been a huge blocker for anybody trying to get into API scanning. It's surprising how many companies don't have any type of spec or anything for their APIs.
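A spec-guided scan of this sort is typically wired up along the following lines. This is a hedged sketch, assuming GitLab's API Fuzzing CI template and its `FUZZAPI_*` variables; the spec path and target URL are placeholders.

```yaml
# Hypothetical .gitlab-ci.yml fragment: API fuzzing guided by an
# OpenAPI spec, which tells the scanner which calls to exercise.
include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick-10                  # built-in scan profile
  FUZZAPI_OPENAPI: openapi.json              # generated or checked-in spec
  FUZZAPI_TARGET_URL: https://target.example # deployed app under test
```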
C: I do want to note: I have yet to find an API fuzzing example that is passing right now in the security products demos. Okay, we can totally take that offline, but just a note.
C: But I will ask the next question, on slide 17. I was curious about updated timelines for the five databases. I know there's an epic, and sometimes it's a little bit unclear whether or not the data is officially being used within that database and everything's been created, and
C: I didn't know if the definition of done included API access to some of that data.
B: Yeah, the definition of done. So I guess it depends on what you're wanting to do with the APIs. If you're just wanting to, like, read vulnerability information, you know, obviously we already have APIs that you can call to get the list of vulnerabilities. Same thing with licenses: if you want to get the license information, you can click the export button on the dependency list page, and that will give you a JSON export of all of your dependencies with their licenses and their vulnerabilities.
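For the read-only case, the vulnerability list can also be pulled programmatically; as one illustration, a GraphQL query along these lines (field names are from memory of GitLab's GraphQL API and may differ; `group/project` is a placeholder path):

```graphql
# Hypothetical query against GitLab's GraphQL endpoint (/api/graphql)
query {
  project(fullPath: "group/project") {
    vulnerabilities(first: 20) {
      nodes {
        title
        severity
        state
        detectedAt
      }
    }
  }
}
```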
B: I'm not positive, but I think that one is backed by an API that you can call as well. Adding and changing data is a little bit more complicated, because we are syncing the advisory database and the license database from something that GitLab manages. And so, if we start letting customers add or edit that data, then there start to be questions of, like, which one overrides which one, and how do we deal with conflicts? Do we share that data across all customers?
B: Do we partition it just for that one customer? Anyway, our solutions there might be a little bit more nuanced than just API access. If they do want to, you know, add their own advisories or add their own license data, that is something that we've considered, and I think I linked to the correct issue for that, but it's more likely that we would have some sort of mechanism where they could report licenses up through a CI pipeline. Currently our proposal is to ingest those additions into essentially a sandbox, a customer-only,
B: you know, license database that is tracked separately from the main advisory database, and then their results would reflect the data that they've added in manually. So, I don't know, we're still working through the technical implementation details, and the end solution might be different, but there are more considerations than just opening up APIs, because if we let anybody read or write, it would actually affect all of our customers right now, and it would conflict with our own data.
C: Right, yeah. I know that sometimes the licenses aren't being detected and are just showing up as unknown, and I didn't know if that would be a potential stopgap for some customers, of, like, "I know what license it is;
C: I just want to manually set it, or manually override it, for now, until GitLab can get this resolved." But also, yeah, I love the idea of being the place where customers could declare their internal advisories, and then, you know, if they have a shared library, all of their applications that rely on that internal or third-party library could then get detected, or flagged as having a vulnerability tied to it. That would be pretty cool.
B: Yeah, absolutely. And on those unknowns: for most languages we've actually seen huge improvements in reducing the number of unknowns in comparison to the previous scanner. For the other ones that are out there, we've got open issues that are actually being worked on right now to help improve those rates. So, you know, we're trying to address those in the core database, because that's going to help everybody, rather than having each customer do their own workarounds, as much as possible.
C: Makes sense, thanks. I guess, Connor, did you want to...
D: No, we only have three minutes, so I figured you and...
G: Yeah, my question is mostly around analytics, or insight into all the findings which we have with respect to vulnerabilities. Some of the questions customers or prospects are asking are like: "Hey, what are the top projects that have secrets? I don't want to go and start scanning 400 projects and looking for this information, so give me some kind of analytical reporting, a kind of dashboard, which can help us start planning things." So I just brought up a few examples there, like, hey:
G: when was a particular vulnerability first detected, how often was it detected, how long has it gone unaddressed? Those kinds of things will help them operationalize it and go back and start working on it. So it's mostly around reporting, analytics, insights; I don't see anything in the slide deck with respect to that.
H: I don't know, maybe as many as ten of those asks in the last quarter, for me at least. So I think that's something that we're probably going to have to address this year, but we don't currently have any plans to do so. But I think, given the amount of interest,
H: this is something that's even on my list of things to review, to see where we can slot this in, and whether we can maybe take advantage of what I'll call synergies with other teams, to sort of work together to provide some visibility into some of those metrics. But yeah.
H: Well, it looks like that's the end of the list, so thanks, folks, for popping on and engaging in discussion. We'll see you next time.