A
Was way too much effort recording... Alright, this is discussing the migration to create vulnerability records, and we were discussing specifically whether or not we wanted, within this migration, to do the update to set vulnerabilities to dismissed. And I think we do. I think, like, that's the most important part of this migration, because, for a lot of the projects, you know, they might have dismissed a bunch in the past, and then we moved them over, and then they would have to manually redo that. Now, whether it needs to be done as part of this...
B
Well, if it's important, if it's important from that point of view, we can just leave it in, because it's not slowing it down; it will just increase the overall time, because we need to, like, schedule it for more projects, and some projects will need just this part. But otherwise, if we have to do it, you know, in another merge request and another big round of migration, you have to go through all of that again.
C
Okay, yeah, I mean, I think that was my only question: if everyone is comfortable with the performance of it, that it doesn't seem to be a risk, then what Ross said is accurate. The dismissal state, making sure that we persist it, that is a top concern, because we have... I mean, I think the top project, there was something crazy like two hundred ninety thousand vulnerability findings in one project. Most of them are not like that, but I think the average was in the mid...
C
Wayne Haber ran some metrics on that and pulled that out. I mean, that's an extreme outlier, but I think there were a number of projects that had thousands or tens of thousands of items in there. Yeah, yeah.
A
Yeah, yeah, we definitely look... I mean, a lot of times we do use the main GitLab project as a quick benchmark or whatever against a lot of things, and that one doesn't need to do the full migration, but it will need to do the, you know, update-dismissal part of the migration. So, okay.
B
Yeah, I did, like, another review yesterday, and I think the solution that we're working on looks viable, and we're getting closer. We just need to cover all these points that I found, and with that it will be ready to pass to a maintainer, and I suppose there will be a quick turnaround after all this.
A
Yeah, you can share it. That's... yeah, that's great. I guess, if we're, like, front-loading the stuff that's gonna be relevant to Matt first: the delay interval. You know, so, like, I tried to do 20 seconds, and you said the minimum is two minutes, which, you know, that's totally fair, but that will, you know, make the overall duration longer. I don't... I...
B
Just to give you some context: the minimum delay we set between each background migration, which is a background job, is there so that we give the database time to recover from the potential load and the stress it was put under, and let processes like autovacuum clean up, like, whatever is left over from the work that was done in each job. And I don't know why two minutes; it's always been there.
B
Yeah, right, the work that's done. So the idea is that each job runs, and there's enough time between this job and the next job to, like, give the database some space. And with the current approach, I think the project with the most vulnerabilities to be created is, like, around 100k, and if we do them in batches of 10k, each batch is, like, relatively... maybe around... 'cause I think...
B
Probably, for that project, the longest a single job will take to migrate everything will be, like, 30 seconds or something, give or take. So, with two minutes, there will be enough time left before the next job starts. So, yeah. But the basic numbers are: with two minutes, it will take seven days to complete the whole thing, which, for a background migration, is something that's, like, normal. I don't know what you think of the total.
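The arithmetic behind those numbers can be sketched as follows. This is a minimal illustration, not GitLab code; the instance-wide row count is a made-up figure chosen so that a 2-minute delay yields roughly the seven days mentioned above.

```ruby
# Rough duration estimate for a batched background migration:
# each batch is scheduled `delay_seconds` after the previous one,
# so total wall-clock time ≈ (number of batches) * delay.
def estimated_duration(total_rows:, batch_size:, delay_seconds:)
  batches = (total_rows.to_f / batch_size).ceil
  batches * delay_seconds
end

# Hypothetical total: 50.4M rows in batches of 10k = 5,040 batches.
total_rows = 50_400_000
two_min = estimated_duration(total_rows: total_rows, batch_size: 10_000, delay_seconds: 120)
one_min = estimated_duration(total_rows: total_rows, batch_size: 10_000, delay_seconds: 60)

puts two_min / 86_400.0 # 7.0  (days with a 2-minute delay)
puts one_min / 86_400.0 # 3.5  (days with a 1-minute delay)
```

Note that the per-batch work (the ~30 seconds mentioned above) barely matters; the delay between jobs dominates the total duration.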
C
Well, I guess, if that's the way that we need to do these, the intervals on that, I would wonder: is there a way that we can target, or sequence first, the ones that have the highest number of dismissed vulnerabilities? Because if people had a handful, I'm less concerned about that, and I'm wondering if it's less of an even distribution and more just, kind of, you know, ten percent are gonna...
B
Well, I don't know. It might be possible, but then, when you start adding new, like, sorting and stuff like this, it may affect the execution time in the migration. But the scheduling one, the migration that is doing the scheduling, is pretty much done at the moment. So I've checked it; I even ran, manually, all the batches and the queries to make sure it executes quite fast. So if we change it, we have to go through that process again, and we may hit a problem, 'cause...
B
Suddenly, if you have some sorting, everything may start being, like, very slow and time out. So, who knows; I'll take a look, but I can't answer this, and we've never done this. Like, I've never seen it done in a way that we target some cohort of projects or users to be done first. We just... usually we sort by ID, because that's the fastest, and that's how it works.
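Scheduling by ascending ID, as described here, can be sketched like this. It is a simplified stand-in for GitLab's background-migration scheduling helpers; the method and field names are illustrative, not the real API.

```ruby
# Walk records in ascending-ID order and schedule one background job per
# batch, each `delay_seconds` after the previous one. Batching by ID lets
# each batch be selected with a cheap range scan, instead of an expensive
# ORDER BY on some other column (e.g. dismissed-vulnerability counts).
def schedule_in_batches(ids, batch_size:, delay_seconds:)
  jobs = []
  ids.sort.each_slice(batch_size).with_index do |batch, index|
    jobs << {
      start_id: batch.first,
      end_id: batch.last,
      run_at_offset: index * delay_seconds # seconds from scheduling time
    }
  end
  jobs
end

jobs = schedule_in_batches((1..25).to_a, batch_size: 10, delay_seconds: 120)
# Three jobs: IDs 1-10 at t=0, 11-20 at t=120, 21-25 at t=240.
```

Targeting a specific cohort first would mean replacing the ID sort with a custom ordering, which is exactly the part that would have to be re-verified for query performance.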
C
Ross, I'm gonna look for you to agree with that, but I think that that's accurate, because the vulnerabilities, if they rerun it, will just convert from the old model to the new standalone vulnerability objects. It's just the persistence of things that were already identified and dismissed, like, as, you know, maybe it's a false positive, maybe it's a low priority, or something that doesn't apply. It's that extra step that they've already taken that we want to avoid for them.
B
These two minutes are just, like, a recommendation that we have, and we can always try to, let's say, make it one minute, which would bring the total time from seven days to just three and a half days, and explain why we want to do this, and then leave it to the maintainer to approve it or not. Like, if we can show data that each job will take no more than, like, 20 or 30 seconds, and then we have another 30 seconds to recover, then one minute might be okay.
B
It may require disabling the DDL transaction, which means that the adding of the column will not be in a transaction, so it might be safer to just do it, you know, in a separate migration. But that's just a detail. I think, in cases like this, if you are in doubt, just doing it in a separate migration is safer, because, like, in this case, you can target everything around it to the specific change you're doing, while, when you're combining changes in a single migration, one of them may require disabling DDL transactions.
B
The other note: it's trickier to reason about rollback, because one of the changes could be made and... say it was the introduction of some field; it's not rolled back, and if you want to rerun the migration, it will fail because the first one is already done. So usually it's separate migrations, and it's not changing the amount of work to be done significantly anyway. We talked about the dismissal; we will keep it here for now. We need to add a simple test for the scheduling migrations; there are many examples in the codebase for it.
B
Kind of... when everything is done and we know what's going on, it would be good to update the merge request description with every query that we're gonna run, its execution plan, and everything. The goal of this is to decrease the time for the next round of maintainer review, because if it's not there, the maintainer... and usually a lot of the chasing discussions by that time are resolved and collapsed, so the maintainers don't see them. So it's pretty much...
B
Like, in this case, things changed a lot, and we may try a few different approaches; in that state it's hard to keep the description up to date, and sometimes it doesn't make sense, 'cause you are not even sure that it's gonna work. So let's just make sure we update this when we are ready, and that's pretty much it, from the look of it.
B
The most important thing we need to think about is the delay interval, and can we target specific projects first or not, and, if not, try with a shorter interval like one minute, 'cause... is, like, three and a half days more acceptable? 'Cause... I don't know how urgent that whole thing is, but, I don't know, to me it seems fine, because it's three days sooner or later, yeah.
C
So what happens in the case of... let's say that I am one of these projects where, on day one, I run the pipeline, nothing has been migrated, and I had 500 dismissed vulnerabilities. All those are going to show up when I run my next pipeline. But then, let's say, a couple days later, the background job gets to my project. Is it still going to update those findings, the old findings, with the dismissal status, and apply those for my next subsequent pipeline run?
A
This migration, as far as the dismissals go, is set up in such a way that it can be rerun. Like, that logic can be rerun as many times as we want. I mean, we might need to do it in a different migration in the future or something like that, but it can be rerun. So, so we can... you know... if something falls through the cracks, we can update it. Yes, in the case that you mentioned, they should get updated just fine; like, they will get caught up.
A
One thing that your asking-around didn't quite... you didn't quite say the exact thing that could happen, but you got really close, in that, like... so we roll out this migration, the migration finishes, and then somebody, like, runs a new pipeline and... what... like, they dismiss stuff in the pipeline, right, and then they merge it into their branch, and it shows up on the dashboard. A new vulnerability is gonna be created, but it won't be dismissed.
A
Yet. We do have work for that; I believe Nihal is assigned to that. But if this MR goes out without his MR, there is the potential for, you know, some of those projects to have dismissals left behind, if that makes sense. But that's not necessarily a concern of this discussion. Yeah.
C
I guess, in that case... maybe this is a terrible assumption: they would be running pipelines against a project that had likely already been around for a while. So I would think that any new additional discoveries and dismissals would be small, yeah, single digits, so we're not talking about the entire history of their project. So, in that scenario, somebody runs the pipeline and generates these new standalone vulnerabilities before the update gets to their project. If, when the update eventually runs a few days later, all of a sudden, all those dismissed things sort of reappear as dismissed vulnerabilities, I think that's fine, and that's actually really easy to message, like: hey, we're having to do a big migration of all the projects; if you have a lot of stuff that's dismissed, you may not see the update for a few days; don't worry, it'll...
B
Because that's very different: all these numbers, seven days, are for GitLab.com, and, like, we never have a way to actually know what's the case for self-managed. But usually the data there is, like... they have way fewer projects and everything. So those background migrations can run for a day for them, or even for, like, hours, because it depends on the size of the data, yeah.
C
Honestly, I don't know the specifics of this particular customer's setup, but what's interesting is that they build custom applications for clients. So, of the people that we typically deal with, they're probably more likely to have a larger database instance, because they have, I don't know, dozens, maybe hundreds of people that they build applications for, and they're all managed in GitLab. But in any case, I doubt it's thousands; it's probably not fifty... five hundred... anywhere close to that.
B
Anyway, in the past, when we had a background migration, we were including, like, a paragraph in the release post about how you can check the estimate for yourself, because you just need to run the query that we do and then multiply by the delay to figure out how long it will take for you. So, if you have a more direct line to that customer, you can ask them to check and see. Okay, at least they will know when it will finish for them.
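That self-service check is simple arithmetic: count the rows the migration still has to process, divide by the batch size (rounding up), and multiply by the delay. A minimal sketch; the counts are invented, and this is not the actual release-post query:

```ruby
# Estimate when a batched background migration will finish on an
# instance: (rows remaining / batch size, rounded up) * delay per batch.
def eta_in_hours(rows_remaining:, batch_size:, delay_seconds:)
  batches_left = (rows_remaining.to_f / batch_size).ceil
  (batches_left * delay_seconds) / 3600.0
end

# e.g. 300k findings still to migrate, 10k per batch, 2 minutes apart:
# 30 batches * 120 s = 3600 s = 1 hour.
eta_in_hours(rows_remaining: 300_000, batch_size: 10_000, delay_seconds: 120)
```

A self-managed admin would get `rows_remaining` from a count query against their own database, which is why the estimate differs so much between GitLab.com and smaller instances.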
B
The checklist is, like... the checklist is, like, a résumé of all the other stuff that's scattered around; that's why I put it in one place. But I think that, if we cover all of this, and, like, pretty much all of it just needs to be done, there is nothing, like, unknown about it; we will be fine. The most it will take for you is to update the description at the end, before the final pass. We...
B
We have a lot of these queries and the corresponding execution plans in Database Lab, but you will find that they test quite different batches. But I did this, and we have the numbers around in the comments, and also, if you need... if you can't find it, just ping me and I can pull the numbers up or run the query for you. So, just...