From YouTube: Secure:Threat Insights group discussion 2021-04-13
A
I know, it's... I just can't handle the awkward silence. It drives me nuts. Let's jump right into the agenda. The first item is Matt's.
C
Thanks, Lindsay. I probably should have done the intro kickoff; that's what I thought. Thiago was handing off to me, actually, but I was trying to get the agenda back up on screen, and in the meantime the interview started, so it worked out all right. So I don't know if anybody's seen the Slack thread, but it looks like there are a couple of things where the DAST team is working in a particular area: they're releasing, or trying to release, a new findings aggregation feature.
C
And that is causing some very poor UX, I think, when they stitched it all together and did a walkthrough. So that's the Slack thread link. I guess the question is really: do we see any potential, even if we were to put some stuff on pause for 13.11, to pick up that one issue? I know it's towards the end, but it looks like it's a two-point weight, and given that the details page was already implemented, maybe it is a small, quick win that would help another team out.
A
So I think part of the weight of that issue was considering an issue that's blocking it, which is to use the same component that we use on the vulnerability details page out on the modal for the pipeline view. So it would either be throwaway work to just update the pipeline view to grab this data (that'd probably be a good amount of throwaway work), or the size of it would be much larger.
A
I really think we're kind of blocked, from an efficiency perspective, on getting this done until we have updated the modal to use the shared component. Daniel, I know you just joined, but if you could real quickly back me up, or see if I'm totally crazy on this: this is about getting the generic security report schema working sooner for some of DAST's ongoing work, to support some of their needs, but they need it displaying both on the details page and the pipeline modal. So, you know, within this current milestone.
D
If push comes to shove: I think Dave is still working on that part for the vulnerability details page, but if we need it for the modal, I think the data is there. So if push comes to shove and they just need a small part of the data, we could fudge something and stick it in there without it being actually generic; it would just be hard-coded for that one piece of data.
C
Yeah! That's related, but not this, exactly. Daniel, I really like your suggestion. I think this is more kind of: if there's anything that we can shunt in to help them display that same information in that modal, for the MR and the pipeline, then that would allow them to potentially release. There are other things; I don't want to make it sound like this is the only thing blocking it.
C
Point B: the diff is also kind of odd. So again, I'll point you back to the DAST video walkthrough. Basically, if you have run DAST scans before and you have multiple discrete findings, and then you run DAST again with the new version that can do this aggregation, the new, singular finding is actually showing up as already dismissed in the MR if you've dismissed any of the original findings that are now rolled up underneath it.
C
Even though it's supposed to behave like a completely net-new finding. And there's some other wonkiness in there, too.
E
Did they just change the reports that they send us, or did they actually change anything on the back end, in how we store the reports? I didn't get a chance to look into that at all. Yeah, I don't know what they were asking about yesterday, but it does seem odd in the walkthrough that the location does still say page one; because it's still showing there as page one, that location hasn't changed. So there's something in how we're storing those new reports, I think, that's off.
C
Yeah, I'm not sure. So this kind of came up, my time, fairly late last night, and I was poking through the thread, and it looks like this is a very sort of late realization now that they're putting all the pieces together. I think the general ask is: we don't have anything big coming out, and this is a big feature for the DAST team that they've been working on for several milestones...
C
...so it would be really appreciated right now, because I know they're even trying to line this up as one of the main headline things for the release post.
A
...already have in flight, if they want to just crack open the existing pipeline modal, which also covers the widget modal; we figured out that that's the same code base, so that would be throwaway work on their part. But I'm curious, though: with the generic report schema, the data has to be under this details block to get consumed, but do they need to change things on the back end, as part of their DAST implementation, to go from something that is accessible today from the pipeline modal to the format we'd be using?
A
Cool. So I've got a one-on-one with Neil tomorrow. I can also jump on this issue and this thread and suggest that, because it doesn't look like we've even started on this pipeline conversion issue yet.
A
Speaking for Savage: he added some demos, so I think he's realized that we've kind of fallen off a little bit on adding demos to our meeting agendas. He is working on the refactor of our vulnerability reports. This is something that Daniel identified a little while back, to make it easier to do some of these conversions and just generally reduce the amount of effort it takes for us to maintain the four different reports that we have now. So he shared a link to that.
A
I added the planning breakdown item, so I guess I should be the one to bring it up. This is a recently released item from Andy around filtering the vulnerability reports by detected date. I know there's already been some conversation going on here; thank you, Daniel, for taking a look at this already.
E
Yeah, I mean, this is a GraphQL call, right? It'd be fairly basic to put another filter in by date.
B
The question I had was about when you do the date-range filter: do you want to take arbitrary integers, so the client can pick a range, or do you want fixed values, so the API matches the front end?
D
The question is more that, because the GraphQL API is available for everyone to use, it may be more flexible to allow people to search by a custom date range.
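To make the two options concrete, here is a hypothetical sketch of the two argument shapes; the field, argument, and type names are illustrative, not taken from the actual schema:

```graphql
# Sketch only; names are illustrative, not the real GitLab schema.

# Option A: fixed presets, so the API mirrors the front-end dropdown.
enum DetectedDateRange {
  LAST_30_DAYS
  LAST_60_DAYS
  LAST_90_DAYS
  ALL_TIME
}

type Query {
  # Option A: clients can only request one of the presets.
  vulnerabilitiesByPreset(detectedRange: DetectedDateRange): [Vulnerability!]!

  # Option B: arbitrary bounds, so any API consumer can pick a custom range.
  vulnerabilities(detectedAfter: Time, detectedBefore: Time): [Vulnerability!]!
}
```

The custom-bounds shape covers the flexibility point: the front end can still render fixed presets and simply translate them into `detectedAfter` values.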
E
And, yeah, I mean, on the back-end side I don't think it'd be that much extra work. This one actually probably wouldn't be too bad performance-wise, because it's only looking for vulnerabilities.
D
Yeah, it's just one of those things where we really need to dig into the query itself, because although we're limiting the data set, we're doing it by adding an additional WHERE clause. So it might be that you gain a little bit here, but then you lose a little bit there, and it ends up evening out.
C
Yeah, well, as long as the performance doesn't go down. This is also beneficial just from the perspective of really long-running projects that have a lot of mediums or lows or infos that they don't want to dismiss. I think that's kind of the other feedback, even from internal users: there's a lot of noise, and they don't want to close things, but they don't want to see it all every time they load it up. They just want to see the new stuff. So it'll still be a beneficial feature.
C
But it sounds like that's TBD.
D
I did have a question regarding Matt's comment on saving the value to local storage. We can definitely do that; it might take some discovery to figure out how that's going to work. The reason is that, for the filters, we currently have this weird thing where the data source, the source of data that everything else is based on, changes depending on what you're doing in the app. Normally it's what you pick in the dropdown.
D
So when you pick items in the dropdown, it updates the query string. But when you click the forward and back buttons, the query-string value for a split instant becomes the source of truth, which then updates the selected options. And it can get a little funky depending on where you're approaching the workflow from: whether you're entering the page with a query string, clicking forward and back, or updating it by hand.
C
Because if a user is doing something else in GitLab and then goes to that vulnerability report, if we don't do it from local storage, it's always going to go to the default for the page. So if I've decided that I like 90 days, and that's what I always want to see, my 90-day view, but you force me to 30 every time, that's one extra click I've got to make every single time I go to that vulnerability report.
D
And would we still have it persist per, let's call it, level? So we'd have a different filter or date setting for each level, or should it be global?
D
So say I pick 90 days while I'm on the group-level dashboard, and I hop over to a project level, but on the project level I previously picked 60: should I save 60 for the project and 90 for the group, or should it just be global, as in, you picked 90 on this board, so on this other one I'm going to show 90?
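A minimal sketch of the per-level option: key the stored preference by dashboard scope, so the group and project views each remember their own value. The key name and helper functions are hypothetical, not the actual GitLab code, and storage is injected so the same code works with `window.localStorage` or any map-like store.

```typescript
// Hypothetical sketch: persist the detected-date filter per dashboard scope
// (e.g. "group:security" vs "project:42"), falling back to a page default.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const PREF_PREFIX = "vuln_report_detected_days"; // assumed key name

function prefKey(scope: string): string {
  return `${PREF_PREFIX}:${scope}`;
}

function saveDays(store: KVStore, scope: string, days: number): void {
  store.setItem(prefKey(scope), String(days));
}

function restoreDays(store: KVStore, scope: string, fallback: number): number {
  const raw = store.getItem(prefKey(scope));
  const parsed = raw === null ? NaN : Number(raw);
  return Number.isFinite(parsed) ? parsed : fallback; // bad/missing value => default
}
```

With this shape, 90 on the group dashboard and 60 on a project live under different keys; the global alternative would just drop the scope from `prefKey`.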
C
That was the main thinking behind having it in some sort of, you know... and again, it doesn't have to be client-side storage, just that user preference. I'm a little bit nervous, because we are changing how much information we're showing, in terms of the vulnerabilities, by default. So if we let the users choose what they want to see, and it's sticky for them, I think that's the best experience; but if it's always defaulting, I think it's just going to cause a lot of confusion and some frustration.
C
Yeah, that's a fair point. We did change this a couple of times. Remember, we used to actually display all statuses way back when, and then people said, "I don't want to see all the dismissed stuff," so we changed it to just detected and confirmed, and we're hiding resolved and dismissed.
B
And, following up these questions for the new issue, I have one as well, something to ask alongside your questions, Daniel.
B
It got me thinking: what's the priority if you have multiple selections from different places? For example, if I previously had the dashboard set to 30 days, but I click on a link that has a query-string parameter telling it to do 90 instead of 30, which one wins, the query string or my personal preference? Anyway, we don't have to answer that now, but I think it should go in that follow-up issue.
C
I actually did not articulate it, but I was kind of trying to think through this. I think it's going to cause problems in both directions, because I would normally say: let's go with the query string, because if I'm sharing a link with you, I want you to see the view that I'm looking at. But then that means you're overriding my personal setting; yeah, I'm overwriting it. So then how do I know the difference?
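The precedence being weighed here can be written down as one small pure function. This is only a sketch of the "query string wins for that visit" choice, with hypothetical names, not a decision:

```typescript
// Hypothetical sketch: decide which detected-date value applies on page load.
// Precedence: explicit query-string value > saved user preference > page default.
function resolveDays(
  queryValue: string | null,  // e.g. URLSearchParams.get("detected_days")
  storedValue: number | null, // e.g. restored from local storage
  pageDefault: number,
): number {
  const fromQuery = queryValue ? Number(queryValue) : NaN;
  if (Number.isFinite(fromQuery)) return fromQuery; // shared link shows the sender's view
  if (storedValue !== null) return storedValue;     // sticky personal preference
  return pageDefault;
}
```

Note this sketch only reads; whether a shared link should also overwrite the saved preference is the separate trade-off raised above.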
A
This is an independent issue; we can do it ahead of this new filter, so you don't have to compromise your design, right?
C
I don't know. I'm kind of wondering about just scrubbing the whole "remember what I selected" idea and going with what Andy said: default it to all time, and add this as a net-new filter, so I can choose to filter down from everything, if we're not going to see a big performance bump from it. So maybe that's the first thing to answer: if we did limit the number that we pull, if we do just 30 days, is that actually more performant?
E
I honestly don't see it being a big performance gain on this one. Like I said, the thing that's going to matter the most is the table joins, unless it's just a really tiny amount of vulnerabilities that it's pulling rather than a larger amount.
D
Go ahead. Also, thinking a bit more about it as we were talking: restoring the value from local storage would either be very easy or somewhat tricky, but not much in between. I'm leaning towards very easy, but there are always those things where I think, oh, this is easy, and I go and start doing it, and it ends up being, oh okay, I really underestimated this. So I don't want to commit to anything, but yeah, I don't really see it landing somewhere in the middle.
F
Yeah, I think some of the dashboards and reports that use time filters actually start out with nothing displayed, and then you have a kind of additive filter experience that begins presenting information, whether it's issues or to-dos or anything else that's filterable content or data.
A
So, do you want confirmation about the lack of performance improvement, or are you going to take Jonathan and Daniel's hunch on this, so we move forward with leaving the persistence piece out and just decide whether we can proceed with the planning breakdown and the addition of the new filter?
C
Matt, can we do both? I'd like to see if we could do just a quick test or a spike on that, Jonathan. If he comes back and says, oh man, it's like 10x faster to do it this way, with a 30-day view on a data set the size of the gitlab-org project, then I think it's worth considering. If it isn't 10x better, I don't think all the complexity that we're talking about is worth the effort.
E
Yeah, I mean, on something like the GitLab data set, there are so many of those that are old, that were there from the very beginning, you know, not even counting the new ones. So on those, yeah, it would for sure make the performance better, because we would not be pulling nearly as many.
A
Can we agree, then, that the persistence can be handled in a separate issue based on the results of that performance spike? If Jonathan comes back and says yes, there's a big performance improvement, we'll create another issue around this persistence and prioritize it ahead of the new feature, because it does touch more than just this one filter, and we can move things along that way.
D
Right. I have one final question: did we get any feedback from the survey yet?