From YouTube: Sec PM / Security Department Monthly Sync Up - June 2021
A
B
Yes, yeah, so it's kind of also covered in the table above, but one of our biggest pain points right now is just the sheer volume of findings in the dashboards, and I just wanted to check with you guys on what you know.
D
Okay, for DAST, we just merged the aggregated vulnerabilities in.
D
I guess we just turned it on by default, so you should have it once you upgrade. I know that the other one we were working on, that you wanted for DAST, the ability to include or exclude specific vulnerabilities or vulnerability checks, was also merged. So I think hopefully we should be good for all of you to upgrade the DAST version and try that out, and hopefully that will solve the issue on the DAST side.
D
I can't speak to the other scanners, but for DAST, I think that will help.
D
B
E
We can do that already just using a post-processing script.
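The kind of post-processing script mentioned here could be sketched roughly as follows: walk a list of vulnerability IDs and dismiss them in batches through the GitLab REST API. The endpoint path, header name, and batch size are assumptions for illustration, not details confirmed in the meeting.

```python
# Hypothetical bulk-dismiss sketch. The /dismiss endpoint and batching
# details are assumptions; check the Vulnerabilities API docs for your
# GitLab version before using anything like this.
import urllib.request

GITLAB = "https://gitlab.example.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": "<personal-access-token>"}

def chunked(items, size):
    """Split ids into fixed-size batches so no single run times out."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def dismiss(project_id, vulnerability_id):
    # Assumed endpoint shape for dismissing one vulnerability.
    url = f"{GITLAB}/projects/{project_id}/vulnerabilities/{vulnerability_id}/dismiss"
    req = urllib.request.Request(url, headers=HEADERS, method="POST")
    urllib.request.urlopen(req).close()

def dismiss_all(project_id, vulnerability_ids, batch_size=50):
    """Dismiss every id, batch by batch, rather than in one huge request."""
    for batch in chunked(vulnerability_ids, batch_size):
        for vid in batch:
            dismiss(project_id, vid)
```

A real run would first need the list of IDs (e.g. from the vulnerability findings endpoint) and would likely want to pause between batches, since bulk operations on large projects are exactly where the timeouts discussed later in this call show up.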
E
F
Okay, yeah, so I was just saying that I know, like you mentioned, you don't want to have a hard delete to delete everything, but in our current situation, with the noise we have, and also even for another project where you're experimenting: you just add a new analyzer and then it adds a crazy amount of noise, and there's just no way to get rid of it once it's there.
F
So we really need something to massively delete/archive/hide vulnerabilities to make the report usable, because right now there's so much noise we'll just never get through it, and so much noise keeps getting added. So what we want to do is remove the noisy analyzers, but even if we do that, we have like 50,000 results that we can never get rid of.
F
So yeah, I mean, we could have scripts, we could have hacks and whatnot, but then we're going to have a post pipeline that's gigantic and error-prone. So it would be nice to be able to clean up the dashboard as a feature, and more than nice, actually, because right now it's really just making it extremely difficult to use the vulnerability report.
G
Down to the compliance question, I do wonder if, in the shorter term, since that script will delete all of the records, it would be possible to modify it to just extract all that data to somewhere we would store it, and then run the delete, so you at least have something workable in the short term, because realistically the archive button is not going to be on my near-term roadmap.
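The extract-then-delete idea suggested here could look roughly like this sketch: page through the project's vulnerabilities, write a trimmed archive to JSON, and return the record count so the destructive delete can be gated on a successful export. The endpoint and field names are assumptions about the GitLab API, not details taken from the call.

```python
# Sketch: archive findings to a JSON file before any hard delete.
# Endpoint path, pagination scheme, and field names are assumptions.
import json
import urllib.request

GITLAB = "https://gitlab.example.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": "<personal-access-token>"}

def archive_record(finding):
    """Keep only the fields a compliance reviewer would need later."""
    keys = ("id", "name", "severity", "state", "report_type", "created_at")
    return {k: finding.get(k) for k in keys}

def fetch_page(project_id, page):
    url = (f"{GITLAB}/projects/{project_id}/vulnerabilities"
           f"?per_page=100&page={page}")
    req = urllib.request.Request(url, headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def export_findings(project_id, path):
    """Write all findings to an archive file; returns how many were saved."""
    records, page = [], 1
    while True:
        batch = fetch_page(project_id, page)
        if not batch:
            break
        records += [archive_record(f) for f in batch]
        page += 1
    with open(path, "w") as fh:
        json.dump(records, fh, indent=2)
    return len(records)
```

The delete step would only run after `export_findings` returns and the archive file is verified, so the compliance trail survives the wipe.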
G
It's something that comes up occasionally, but it's usually where a customer has configured something incorrectly for one project, where they're doing their initial test configurations and used the live project, and they're like, "I want the project to go back to zero." But once they get past that, they typically don't ever need it again. So it's one of those things that's important, but doesn't necessarily rise higher than some of the other things.
G
So I think working with the infrastructure team is probably the best bet to get something within the next few months, if you need it.
E
Yes, thank you. So that's going to be a big one. We discovered this week that this table we have at the top of the doc was not accurate; it was completely out of date, so we did a complete refresh yesterday. I'm not sure if all the categories we have in the first column are still accurate or not.
E
As well, so I guess, since we don't have a lot of things on the agenda, we can go through this table and make sure that we are all relying on the current endpoints that we have. Because I realized lately, while working on this big inventory of the projects that we're using at GitLab to build the product, that every time I share my progress or what I'm doing, not everyone is aware of the pain points that we have, and that's definitely on us.
E
We have difficulties sharing and giving feedback on where we want to go and what the roadblocks are that we're facing, and I think this table could be a good starting point. So we tried; we did our best yesterday to put everything in there. Obviously it's not complete, but again, it's a good start, and for the sake of iteration we need to start somewhere. So that's the spirit here.
E
What do you think about going through them one by one? Does that work for you?
E
All right, so for us, mostly, it's not specific to SAST; actually, it's scanner-independent, but we can see it a lot more in SAST. The location tracking is obviously an issue, because every time we dismiss something and do the triage process, if anything in the code is moving and there was a finding in there, it's going to create a new vulnerability.
E
So that's a huge problem for us, because if you do the triaging process yourself all the time, it's easy to spot, because you have this muscle memory for the findings that you already dismissed. But if you do that with multiple people, which is actually what we do in the AppSec team.
E
It's really hard to figure out if something is really new or not. So we created some issues to improve that, but the location tracking is adding a lot of noise on top of everything. That's something that you are working on, we know that, but we just wanted to remind you that it's probably the major pain point that we have with SAST, and also the noise, you know, in the results. ESLint is.
E
Giving us a lot of results. What was this example, Ethan, that you added?
E
We are obviously waiting for the aggregation of results and, as Dominic said, we will do a complete wipeout of all the findings that we have, because we have more than 60k and I think we are hitting a limit on the GitLab project, so we might want to start over completely. We will do that with.
E
All right, for dependency scanning, that was not well communicated on our side. So that's why I added this, and that's mostly why we are creating this inventory. Every time I share my progress on the inventory, I get the same thing back: "But we have a dependencies page. Why don't you use that?"
E
There were some bugs. For example, if the pipeline is running, suddenly the list is empty, waiting for the latest pipeline to finish, so that's an issue as well. It's probably going to be fixed in the next iteration. And we don't have a good overview of all the projects that we want to monitor.
E
That's the major problem that we have with dependencies, or supply chain in general. We can start with the GitLab project, but we don't have a clear path to follow when it comes to dependencies and the software bill of materials. So, for example, for this inventory that we are building, we are cherry-picking projects one by one in every single project that we have in gitlab-org, gitlab-com, gitlab-services, gitlab-data, and so on.
E
So that's extremely painful for us, and I'm not even sure that we are close to having something that we will be able to use in the future. The inventory will be handed over to Product once it's ready to ship, which is probably by the end of the quarter.
E
We also don't have the higher level of dependencies, meaning interpreters and compilers. So, for example, a few months ago we had a big advisory, an RCE in Go, that we had to fix by updating the Go versions that we were using, and we had no idea where Go was used, which versions were in use, or what they were used for: whether it was just to build something for a tool, or whether it was used to build something that was going to be shipped in GitLab.
C
I mean, so we're working on this, but I just wanted to add this in here because, you know, I think it would be useful for customers and we're going down that route: just the idea of having metadata about your dependencies also included and available to see. I linked the issue about the dependency review bot and how we're sort of doing that, in terms of collecting different information about the packages, you know, when they were last updated, all of that good stuff.
C
So Costell has been working on building out some of the API fuzzing stuff. I know he's had success running it. The challenge that we're having, and it's not necessarily the product itself, is that GitLab doesn't have an OpenAPI or Swagger definition file, so he's hand-defining all of it, which, you know, for the size of our API will take a while. It's on hold now, because we had to move him to a different priority, but it is on our roadmap to continue that work at some point and, you know, have something similar to DAST: nightly API fuzzing jobs running.
D
Sure, yeah, that makes sense. I'll look at the fuzzing issue that's linked there, the fuzzing for Rails, and see when we can schedule it. Right now we're working on replacing the DAST API engine with the Peach engine, since we acquired it and it does DAST and fuzzing; we're putting that into DAST API. So as soon as that's done, the main engineers will have time to go back and start working on new features.
D
So I can look at prioritizing that in with all the other things that we've got. So yeah, I'll keep that in mind.
E
F
Yeah, this is not a... yeah, it's a product request, but at the same time, I'm aware that this is very tied to our own workflow and probably not for everyone. But many of our CVE requests are basically copy-pasting a bunch of issues and a bunch of data from the issues, and, that's our fault, but there are mistakes in that process and in the affected version.
F
So we have a bunch of "fixed in 13.12.x", and then one of those x's makes its way to the blog post because we forgot to patch one of them, which is our fault, but it could be automated away. So yeah, that's kind of a.
F
F
C
F
G
E
All right, I'll jump ahead to the next one; that's for you, actually, Matt, and I'm pretty sure we won't be able to cover everything, but real quick: I added the issues that we use to track the requirements that we have on our side for Threat Insights, which are most of our requirements. It's probably too big for this table; that's why we use this issue. We create one issue per quarter so that we can track the progression and see how well we're doing.
E
That was added by Dominic; we'll cover that real quick. We already covered that at the beginning of the meeting: the clean-slate button. That's something that we really need to be able to deal with the number of findings that we have, and maybe wipe out everything. If this is missing, it's going to be problematic, because if we search for something, we're going to have the hundreds of thousands of old findings in the DB that will be searched as well, and that's not going to work for us. And mostly it's not just scanning for GitLab.
E
First of all, for large projects we have too many findings: timeouts when we want to dismiss findings in bulk, for example. If we select too many findings at once, it times out, so we have a lot of different issues related to that. We need to come up with a better solution for this, and I added something, Matt, for our 1:1 this afternoon to talk about this, by the way.
E
That's something that I already talked about: when we do the triaging process with more than one engineer, it's really hard to figure out what is really new from what is just noise because of a location change, for example. So having suggestions could save us some time.
E
That's a nice-to-have; it's not blocking us, but it could be useful. Coming from Ethan: we don't have a way to filter by image in container scanning. That's something that is also echoed a bit lower in the table, for container security.
E
So hopefully we can gather the results and parse the results correctly, but once they are in the dashboard, there's no way to understand where they are coming from, and we sometimes have thousands of results; once we enable container scanning, it lights up like a Christmas tree. So that's a big issue for us. The last one here: the merge request security widget not being stable.
E
That was for something that disappeared from the table, I don't know why. That was for the usage, the dogfooding, especially of the security gates, what we call the merge request security approvals. If I remember correctly, we aren't anywhere near a state where we would be able to enable that on GitLab.
E
So that's something that we will need to address. And the last one is something interesting that was created by Tago this week, I guess a few days ago: where findings are similar, based on rules, we would be able to automatically triage things. So that's something we would like to explore in the future, but that's very, very low; it's a nice-to-have for the future. We understand it's great, but we have other priorities for now. And as for container scanning, Ethan, do you want to cover that?
C
I mean, Sam, we've talked about most of this already, but the challenge we're facing here at GitLab is, well, there are two big ones. One is we don't build single containers in single pipelines; we build all of our containers in one pipeline, and we talked about that.
C
We currently have a way of triggering another pipeline that sends a list of containers, and we do it that way, but you've also said that we could dynamically create multiple jobs for each container within the same pipeline, which is something we still need to explore.
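The "multiple jobs for each container within the same pipeline" idea maps naturally onto GitLab CI's `parallel: matrix` keyword. A rough sketch, assuming the container scanning template path and the `CS_IMAGE` variable (both of which depend on the GitLab and analyzer versions in use):

```yaml
# Hypothetical sketch: one container_scanning job per image in a single
# pipeline. Image names here are placeholders.
include:
  - template: Security/Container-Scanning.gitlab-ci.yml

container_scanning:
  parallel:
    matrix:
      - CS_IMAGE:
          - registry.example.com/group/app/web:latest
          - registry.example.com/group/app/api:latest
          - registry.example.com/group/app/worker:latest
```

Each matrix entry becomes its own job, so a failing scan of one image doesn't block the others; the dashboard aggregation problem discussed next is a separate issue.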
C
The challenge then becomes, even if we do that, like in the dashboard, especially if the base image has a lot of findings: then you get like 50,000 findings in one thing, and we can't filter by individual containers, and so that could be a.
A
C
A
At this time, if anyone needs to drop, feel free to go ahead and drop. I have time, I can stay on, so I can route things as needed to various parties. Sorry to interrupt, Ethan.
C
Oh yeah, no problem. And then the other thing that we're looking at, which is, you know, basically: we have a requirement for scanning all containers in production, which means third-party containers that aren't built within GitLab. So, you know, I'm just letting you know that that's a problem we're trying to solve.
A
So you can scan containers, you can scan images today that are not built with GitLab; it does not have to be built by GitLab for you to scan it. You can scan out to any external registry. One of the big reasons for us switching to Trivy was that it made it a whole lot easier for us to do that. Production vulnerability scanning, our first iteration of production vulnerability scanning, may come out as early as 14.1, so, you know.
C
So I'm just saying that, not expecting you to solve it, but yeah, I mean, especially for actioning.
A
I mean, it gets tricky because there's overlap between container scanning and vulnerability management. That filter would actually fall under Matt's team, under vulnerability management, but when we do implement the production container scanning, we're planning to have a separate tab: you'll have one tab for anything that was found as part of the pipeline, and then a second tab for anything that's external vulnerabilities.
A
I think Annabelle actually might set up a meeting with you soon; she's trying to validate some of the designs around that. But we will have an image filter: if not as part of the first release, it's coming, you know, in a follow-on release, to let you filter by image in that external tab.
G
Yeah, what we'll probably do in this case: Annabelle is actually able to leverage some of the future-looking designs that Andy had designed, not a drop-down filter, but more like the issue list or the MR filter, where there are actually filterable keywords inside of it. I think that's probably the more appropriate direction to cut over to for something like a container image, which is not going to apply to the majority of the scanners.
G
So we start getting these more specific and advanced filtering cases; that's ultimately where we want to head on the vulnerability report. So I think if Sam is able to get there first, we can actually reuse it. It's not the component that's a challenge for us; it's going to be the performance of the backend search itself, because I don't believe we have anything in Elasticsearch for that type of data right now, and given the volume of vulnerability records, I think that would be a non-starter to do it out of the Postgres database.
C
E
A
Yeah, sounds good. Just on that last point, though: the approval rules are rigid. We know that; we have not invested in that area in a long time. The board approved us to hire one new developer, and they actually kind of swapped things around, but it's actually going to be Zamir, and he started on Monday of this week. So he's dedicated 100% full-time to nothing but security approvals.
A
So we will start seeing some progress there. We're actually planning to move that entire security policy UI, to bring all sorts of benefits, but we can do a deep dive on that later if people need to drop.