Description
Some customers have complained that they don't fully trust the data being displayed by Value Stream Analytics (VSA). Larissa has found the problem, and here she explains it to the group. Specifically, the date range should apply to end events, not start events. Based on that, we decide that we probably also need to change how the overview and individual stages behave. Then Adam describes some of the new performance issues this will raise, and how we might address them.
A
Which is great, but maybe for Pablo, and just make sure. Thank you.
B
All right. So over the past month or two I've heard two different customers say that they don't trust the data they're seeing in VSA and have stopped using it. And Gabe, who dogfoods VSA, also said the same thing. So I started diving in to try and understand why people were feeling that they weren't seeing the data they expected, and that's what I want to walk through as a team today, just to try and understand that a little bit better. And Dan...
B
So then we're going to compare the stage times over previous months, and then we're going to go back to the items that went through that stage most recently, drill in, and figure out where some of the delays may have occurred. So that's the first scenario. The second scenario is:
B
Let's say I'm a director of engineering, and my engineering leadership just wants to know, based on everything we shipped in the most recent release, on average, how long is it taking us to get value out to customers? How long is the entire workflow taking? All right, Dan, do you want to share your screen with the VSA group level open?
B
C
B
All right, so the date range already defaults to the last 30 days, so we're looking at things that would have gotten into the 13.12 release.
B
So let's take a look at the stages and see. I think we should mostly just focus on the development stages, not the planning and design stages, for this exercise. So let's just take a look at the stages and see which stages took a long time.
B
C
So am I playing customer here?
B
Okay, so it's showing that there were five items in the verification stage. Does anything about that data set look off so far?
B
Yeah,
that's
what
I
would
expect
to
eat
so
how
about
we
take
a
look
at
our
billboard
and
see
well,
first
of
all,
let's
see
how
we
measure
the
verification
stage.
C
B
And how nice that we can actually see that in the tooltip now. Very nice. Okay, so verification: label added, and the issue was closed. All right, let's look at our board. I would expect that there were more items that went through the verification stage in the last month.
A
I'm just not a hundred percent sure we always add the verification label, though. If we go straight to production... since we realized this, this is actually already working on production. I think we sometimes skip the verification label, and then it would not be tracked. That's my immediate thought, but I'm not sure.
B
Okay, there's one: the second one down has a verification label. Can we open it? Was that one on the list for the VSA days-to-completion chart?
D
C
B
...the stage, given the date range that I've selected. So if you go back to VSA, two questions I would have at this point:
B
We
are
showing
a
stage
time
of
five
days
which
now
we
can
filter
by
time
and
easily
see
if
that
makes
sense,
we
know
that
we're
using
media
not
average,
so
that
does
look
accurate
for
the
data
set,
but
now
I'm
questioning
what
what
this
data
set
is
and
and
that
my
stage
time
is
maybe
not
right,
because
it
doesn't
include
the
whole
data
set
okay,
so
I'm
gonna
move
on
anyway
with
my
next
task,
which
is,
I
am
going
to
take
a
look
at
some
previous
months
so
just
to
keep
it
simple.
D
B
One thing that I noticed in this exercise was the scrolling. There's a lot of scrolling in that horizontal nav to look at the stages, and I'm wondering if we can wrap the label names or something to reduce the scrolling.
A
B
C
B
D
Can you just take a look at when that issue was actually opened? Because I think that maybe the filter is looking at the dates that the issue was opened, not the date that the event occurred. Ah, this one I can see from our side.
B
B
B
Yeah
and
it
becomes
even
more
evident
when
you
go
back
even
further
like
if
we
went
back
to
february
actually
do
you
want
to
do
that.
B
C
Just to be clear, Larissa and I are acting a little bit like we're playing customer here. And yeah, Brandon, that's been our conclusion too. It's just that create time.
B
And then, if we go through the final task that we wanted to do in this scenario, which is making a process change in March, then looking at the March stage time and comparing that against the April stage time to see if your process changes made a difference.
B
A
So, just to make sure I fully understand: that means that both event dates need to be included in the particular date range in order to show up correctly.
A
B
A
So I'm confused too. I'm not sure if that made sense, though. I'm just thinking that, for example, here the verification label and the issue create date both need to happen in the particular time frame that we selected in order to show up correctly.
D
A
B
But then there is this other use case that then gets difficult to support, which is the use case of: I'm a director, and my leadership has asked me how long it takes from end to end to deliver value. So I want to go in and see what I delivered in 13.12, and how long it took me, from either the time that the issue was created or the time that we started development work on it, until we actually got it into production.
B
Right now, if you set a date range, you're filtering by create date. For that use case, I think that we could probably filter by issue closed date, so that you can see everything you closed in the last month, and then show the cycle time or lead time based on that. But how do we fit that view into a view of our stage times, where we filter by when the issue went through the stage? They're two totally different use cases and two totally different views.
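The distinction between the two filters can be made concrete. The sketch below is illustrative only: the `issues` table and its `created_at`/`closed_at` columns are simplified stand-ins for the real schema, not GitLab's actual tables. It shows how applying the same date range to the start event versus the end event selects different items.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE issues (id INTEGER, created_at TEXT, closed_at TEXT)")
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?)",
    [
        (1, "2021-03-05", "2021-04-10"),  # created in March, finished in April
        (2, "2021-04-02", "2021-04-20"),  # both events in April
        (3, "2021-04-25", "2021-05-03"),  # created in April, finished in May
    ],
)

APRIL = ("2021-04-01", "2021-04-30")

# Current behavior: the date range applies to the start event (created_at),
# so issue 1, which actually finished in April, is missed entirely.
by_start = [r[0] for r in conn.execute(
    "SELECT id FROM issues WHERE created_at BETWEEN ? AND ? ORDER BY id", APRIL)]

# Proposed behavior: apply the range to the end event (closed_at),
# so "what finished in April" is exactly what comes back.
by_end = [r[0] for r in conn.execute(
    "SELECT id FROM issues WHERE closed_at BETWEEN ? AND ? ORDER BY id", APRIL)]

print(by_start)  # [2, 3] -- issues *created* in April
print(by_end)    # [1, 2] -- issues *closed* in April
```

The director's end-to-end question ("what shipped in 13.12, and how long did it take?") is the `by_end` query; the current implementation answers the `by_start` question instead.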
C
It should only look at the end event. And just to be clear, I think it was set to use the start event before the Optimize group existed; I think that was just the original implementation, and I think it just took us a while to figure out that we should probably be changing that. But there's a second thing, which is kind of interesting.
C
Right now, the overview is a summary, or sort of a summation, of each of these individual stages, and I think that probably needs to change as well. I'm less certain on this one, but I think so, because, for example, when somebody clicks on this verification stage, they're expecting to see anything that was verified in this date range.
C
I
think
I
think
that's
what
users
are
expecting
to
see
like
that's
what
I'm
expecting
to
see,
but
if
that's
what
users
are
expecting
to
see,
then
that
means
that
the
overview
would
no
longer
be
a
summation
of
these
things,
because
these
things
would
no
longer
like
sort
of
sum
up
to
the
overview.
They
are
a
different
perspective
on
the
same
frame
of
time,
rather
than
being
like
a
summation
of
a
series
of
stages
across
time.
If
that
makes
sense,
so
I
think
we
need
to
make
that
change
too.
B
A
I think it should still be part of the flow, because we want to click on it, right? So it should be part of the navigation somehow. But maybe we need a visual distinction to say: okay, this is not really part of the flow, and it's also not a sum of the flow. It's just an overview.
A
B
D
E
Yeah, yeah, so I totally agree that the current setup does not make sense. There is probably some historical reason why it's behaving like that; I don't know why, actually, the feature predates me. If you want to filter items by the start or end events, or basically require that both events happen within the same time range, we will hit an optimization problem. Currently we are optimizing for one particular column, which is the created_at column: the creation date for an issue or an MR.
E
This
is
quite
simple
to
address.
We
use
this
column.
Quite
often
we
query
many
parts
in
many
parts
of
the
application,
so
it's
it's
quite
easy
to
make
it
performant.
Let's
see
so
vsa
supports
several
different
events
closed
date
merged
at
bit
when
the
label
was
added,
and
things
like
that
and
some
of
these
type
stamp
columns
are
quite
difficult
to
query
without
going
through
other
issues
within
your
group.
E
So
worst
case
scenario
we
would
have
to
the
database
would
have
to
iterate
over
all
the
issues
in
your
group
and
filter
out
the
items,
and
this
is
currently
about
50
000
issues
in
gitlab.org
and
it
takes
about
10
seconds
kind
of
and
this
grows
as
we
move
forward
in
time,
because
the
issue
count
is
not
going
to
get
smaller,
it
will
just
increase
and
we
have
significantly
larger
groups.
So
github
is,
I
think,
the
sixth
biggest
group
on
on
on.com.
So
we
have
groups
10
times
bigger
than
that
so
yeah,
it's
it's
problematic.
E
I
don't
say
it's
not
possible
to
do
it,
but
it's
a
big
investment,
because
we
will
have
to
look
into
each
start
event
and
see.
How
do
we
make
this
performant
a
simple
in
some
cases?
It's
quite
simple:
we
need
to
introduce
a
new
database
index.
This
is,
I
don't
know
it's
about
the
weight
of
three.
So
it's
a
relatively
simple
change
database
migration
and
we
have
to
verify
if
the
database
queries
are
performed.
So
that's
that's
that
the.
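The simple index case can be sketched in miniature. This is a toy illustration in SQLite (the real change would be a migration against Postgres, and the table, column, and index names here are hypothetical): once a composite index covering the filter columns exists, the planner can search the index instead of scanning every issue in the group.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE issues (id INTEGER, project_id INTEGER, closed_at TEXT)")

query = ("SELECT id FROM issues "
         "WHERE project_id = ? AND closed_at BETWEEN ? AND ?")

def plan(sql, params):
    # Summarize how SQLite intends to execute the query.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql, params))

# Without an index, the date-range filter is a full table scan.
before = plan(query, (1, "2021-04-01", "2021-04-30"))

# The "migration": a composite index covering the filter columns.
conn.execute("CREATE INDEX idx_issues_on_project_id_closed_at "
             "ON issues (project_id, closed_at)")

after = plan(query, (1, "2021-04-01", "2021-04-30"))

print(before)  # mentions SCAN: every row is visited
print(after)   # mentions USING INDEX: only the matching slice is read
```

The migration itself is cheap; the open question in each case, as noted above, is verifying that the real queries actually pick up the new index.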
E
If it's not about creation, then we might need to denormalize these tables to expose the project id, because in some cases the project id is not available on the relevant database table, and that's more complex work. It requires a data migration: we have to create a new column in the database, we need to backfill it, and we need to make sure that it's kept up to date.
B
E
So we have an issues table. If you are not familiar with databases, you can imagine it as a giant Excel sheet, and you have about 90 million issues in it; that's GitLab.com. And we have a column called project id, which specifies in which project the issue was created, right? That's what narrows 90 million issues down to 50,000; it's a slice of that table. And what do we do on the group level?
E
So
we
look
at
your
group
hierarchy
and
find
all
the
projects
in
your
group.
So
in
github
we
have
about
a
thousand
projects
in
the
gitlab
work
group
and
for
each
project.
We
look
up
the
issues
right,
so
we
are
looking
at
that
huge
table.
Huge
access
sheet
that
find
this
is
a
project.
Give
me
the
issues.
This
is
another
project,
give
me
the
issues
and
only
take
out
the
records
that
are
matching
the
created
at
timestamp
condition
and
we
have
additional
tables.
Let's
say
you
have
the
issue
table
that
huge
table.
E
So
it's
like
you
have
to
do
a
double
jump
to
actually
figure
out
a
small
set
of
data
and
in
the
middle
there
is
that
issues
table
which
is
huge,
and
you
need
to
actually
filter
that
wall
table
down
in
order
to
figure
out.
What's
in
the
second
table
and
the
solution,
is
you
look
at
the
issue
matrix
table?
You
add
the
project
id
there
as
well,
so
you
can
skip
looking
into
the
issues
table,
so
you
go
directly
there.
That's
it's
called
a
normalization.
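The "double jump" is visible in the query shape. In this simplified sketch (hypothetical tables and columns, not GitLab's real schema), stage durations live in an `issue_metrics` table that originally only knows its `issue_id`; denormalizing a `project_id` copy onto it lets group-level queries skip the huge `issues` table entirely.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE issues (id INTEGER PRIMARY KEY, project_id INTEGER);
CREATE TABLE issue_metrics (issue_id INTEGER PRIMARY KEY, duration_days REAL,
                            project_id INTEGER);  -- the denormalized copy
INSERT INTO issues VALUES (1, 100), (2, 100), (3, 200);
INSERT INTO issue_metrics VALUES (1, 4.0, 100), (2, 6.0, 100), (3, 9.0, 200);
""")
group_projects = (100,)  # projects resolved from the group hierarchy

# Before: the "double jump" -- issue_metrics -> issues -> project filter.
joined = conn.execute(
    "SELECT m.duration_days FROM issue_metrics m "
    "JOIN issues i ON i.id = m.issue_id "
    "WHERE i.project_id IN (?)", group_projects).fetchall()

# After denormalization: filter issue_metrics directly, never touching issues.
direct = conn.execute(
    "SELECT duration_days FROM issue_metrics WHERE project_id IN (?)",
    group_projects).fetchall()

print(sorted(joined) == sorted(direct))  # True -- same rows, one huge table fewer
```

The payoff is that the hot query no longer has to pass through the 90-million-row table at all; the cost, discussed below, is keeping the duplicated column populated and in sync.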
B
Oh yeah, that made a difference. Yeah, so skipping the issues table sounds like one of the key things we'd need to figure out how to do.
E
So if we do this for the other tables in VSA, then we can have similar speed to what we have today. It could be even faster in some cases, because querying the issues table... it's a big table and it gets high traffic, so there is a bit of contention very quickly. So yeah.
B
E
The database migration to create an index, the simple case, is about a weight of three. The denormalization (I think we have like two tables to then denormalize) is about a five or six. It's actually a two-release process, because first you have to create the database column and backfill the data in a background process; that's one milestone. And in the second milestone you can start actually using that column.
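The two-release process can be sketched as a batched backfill. All names here are invented for illustration, and GitLab's real mechanism is Rails background migrations against Postgres, which this little loop merely stands in for.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE issues (id INTEGER PRIMARY KEY, project_id INTEGER)")
conn.execute("CREATE TABLE issue_metrics (issue_id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO issues VALUES (?, ?)", [(i, i % 3) for i in range(1, 11)])
conn.executemany("INSERT INTO issue_metrics VALUES (?)", [(i,) for i in range(1, 11)])

# Release 1, step 1: add the new column (nullable, so the DDL change is cheap).
conn.execute("ALTER TABLE issue_metrics ADD COLUMN project_id INTEGER")

# Release 1, step 2: backfill in small batches from a background job,
# so no single transaction locks the whole table.
BATCH = 4
last_seen = 0
while True:
    batch = conn.execute(
        "SELECT issue_id FROM issue_metrics WHERE issue_id > ? "
        "ORDER BY issue_id LIMIT ?", (last_seen, BATCH)).fetchall()
    if not batch:
        break
    first, last_seen = batch[0][0], batch[-1][0]
    conn.execute(
        "UPDATE issue_metrics SET project_id = "
        "(SELECT project_id FROM issues WHERE issues.id = issue_metrics.issue_id) "
        "WHERE issue_id BETWEEN ? AND ?", (first, last_seen))
    conn.commit()

# Release 2: the column is fully populated, so queries can rely on it.
missing = conn.execute(
    "SELECT COUNT(*) FROM issue_metrics WHERE project_id IS NULL").fetchone()[0]
print(missing)  # 0
```

The two milestones exist because step 2 runs asynchronously: only once the backfill is known to be complete can the next release start reading the column.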
F
Because I can't quite visualize how we might... well, the date selector part of it seems fairly simple, I think.
D
F
It would be mostly backend changes. But if we were to move the overview stage, or rework that somehow, yeah, it'd be hard to say until we came up with an actual design for how we want to do that.
D
F
In terms of the actual state of the code base, I think we're in a much better position to make quicker changes to the front end than we were six months ago. And then the other aspect is that it's getting close to the point where anything we do on the group level will start being reflected on the project level as well, so we probably just need to bear that in mind.
D
F
Well, currently the project level doesn't have an overview stage, because we don't have charts. So if we did it relatively soon, then yeah, it would really only be on the group level.
A
B
And we have a couple of things, I think in 14.0, to add a popover next to the date selector, and add a sentence to that popover to explain that it's using the issue created date. Depending on when we decide to move forward, if and when we decide to move forward, we may or may not want to do that issue.
B
...the amount of effort that you think is involved. So you've got an issue in 14.0 to allocate some time to that, and then I want to go and run this by some more customers, just to make sure. I feel pretty confident based on the conversations I've had so far, but I would just like to do a bit more validation to make sure this is the right change.
C
F
Good, I was just gonna jump in. I suppose the only area where I'd probably want to get a little bit more validation before we jumped ahead would be making changes to the overview stage. Taking out the numbers is fine, but yeah, I think if we start playing around with the navigation again, we may then need another feature flag, and we've all been down that path. So.
B
A
I was just going to add: the overview... personally, I think we should not make changes to the way the navigation works. I think it's just a matter of what we want to show on the overview page, but it should still be part of the flow, right? Because it's part of the clickable tiles, and at some point you want to get an overview.
A
That's the only place where we show the charts and, like, a summary, so in my opinion it should still be part of the horizontal navigation. I think it's just a matter of what we want to show in that field.
B
G
Yeah, we could do that. We could also have a slightly different background color to differentiate it as well. There's a couple of slight visual tweaks that we could do to differentiate it from the rest of the stages.
A
I like both approaches, actually. I think both things should be very easy to accomplish.
A
If you just remove the arrow, make it rounded on that side, and change the background color, I think that should help as well, and it's a quick fix.
B
What if customers come back and say something like: okay, but I want to be able to select a date range and see what passed through the stage in that date range, but I also want to look only at the items that were closed in a certain date range, or opened in a certain date range? I don't know if that's gonna be a thing, but if we get requests like that, is that just...
E
D
E
We'll find lots of additional filtering options, and theoretically we could support those. Actually, it's much more than we support in VSA; like, you can filter by, I don't know, the emoji that you can put on the issues, and things like that. And if that extra timestamp filter is available, then we could probably do that. It wouldn't add too much processing on the backend side, because of the initial date range filter that we build, or are planning to build.
E
We
have
reduced
the
number
of
rows
significantly,
so
an
additional
feed
that
doesn't
really
make
too
much
difference,
but,
of
course
it
needs
to
be
measured
properly
and,
let's
see
height
height
behaves.
I
cannot
tell
for
sure
if
it
will
work
something
else
if
the
customer
needs
like
a
special
reporting
or
additional
special
filters
based
on
something
completely
as
apis.
So
if
vsa
would
expose
data
via
the
apis,
your
issues,
the
related
issues,
are
merge.
Requests.
The
customer
can
build
automation
on
top
of
that
to
further
filter.
E
There is one from me. So I was thinking: optimizing Value Stream Analytics is a big challenge, and the conversation has been going on for quite some time. How can we make this faster? How can we improve it?
E
Several
times
aggregation
data
aggregation
came
up
and
my
question
is:
how
can
we
validate-
or
maybe
you
can
tell
me
if-
let's
say
you
configure
your
stages
and
we
start
aggregating
the
data
after
you
kind
of
finalize
your
your
value
stream
configuration
instead
of
having
data
available
right
away,
you
would
silently,
in
the
background,
build
up
the
the
necessary
data
that
belongs
to
the
stages.
So
it's
like
not
today,
like
you,
can
just
start
querying
and
filtering
stuff,
but
it
needs
some
time
to
get
the
data
into
a
shape.
B
E
I mean, some data would be visible after a few seconds, but it's like a background process to make sure everything is put into a place where we can easily query it. So that would be the idea.
E
I wouldn't say it would be super fast, but it would be significantly faster than today. There is also one caveat: you know that this data changes often, right?
E
You
close
an
issue,
you
create
a
merge
request,
you
add
the
labels,
so
this
data
also
needs
to
be
synced,
so
we
need
to
keep
this
new
table
or
new
source
of
data
in
sync,
so
there
might
be
a
slightly
later
a
few
seconds
up
to
a
minute
or
so,
and
this
is
like
a
price
we
have
to
pay
for
for
fast
data
access.
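A toy version of the aggregation idea: events land in a log table, and a background sync pass folds anything newer than a cursor into a compact aggregate table that reads go against. All names are invented; the point is only the read/sync split and the short lag it implies.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stage_events (id INTEGER PRIMARY KEY, stage TEXT, duration_days REAL);
CREATE TABLE stage_aggregates (stage TEXT PRIMARY KEY, items INTEGER, total_days REAL);
CREATE TABLE sync_cursor (last_event_id INTEGER);
INSERT INTO sync_cursor VALUES (0);
""")

def record_event(stage, duration):
    conn.execute("INSERT INTO stage_events (stage, duration_days) VALUES (?, ?)",
                 (stage, duration))

def sync():
    # Background job: fold events newer than the cursor into the aggregates.
    cursor = conn.execute("SELECT last_event_id FROM sync_cursor").fetchone()[0]
    rows = conn.execute(
        "SELECT id, stage, duration_days FROM stage_events WHERE id > ?",
        (cursor,)).fetchall()
    for event_id, stage, duration in rows:
        conn.execute(
            "INSERT INTO stage_aggregates VALUES (?, 1, ?) "
            "ON CONFLICT(stage) DO UPDATE SET items = items + 1, "
            "total_days = total_days + excluded.total_days", (stage, duration))
        conn.execute("UPDATE sync_cursor SET last_event_id = ?", (event_id,))

record_event("verification", 2.0)
record_event("verification", 6.0)

stale = conn.execute("SELECT COUNT(*) FROM stage_aggregates").fetchone()[0]
print(stale)  # 0 -- until the sync job runs, the aggregate lags behind

sync()  # the "few seconds up to a minute" of lag, compressed into one call
items, total = conn.execute(
    "SELECT items, total_days FROM stage_aggregates WHERE stage = 'verification'"
).fetchone()
print(items, total)  # 2 8.0
```

Reads stay fast because they only touch the small aggregate table; the cost is exactly the staleness window between an event and the next sync pass.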
B
Okay, that last bit you said about the labels: are you talking about if you add additional filtering?
E
No, no. If you have a stage event that depends on labels, right, that might not be reflected right away. So say you have a stage where you say it starts when it goes into workflow verification.
G
The
the
freshness
of
the
data
also
is
dependent
on
the
particular
use
case
that
you're
looking
at
as
well
like
this,
the
scope
of
the
the
data
that
you're
looking
at
so,
for
example,
like
the
historical
view
where
we
are
interested
in
seeing
how
our
stage
time
has
has
changed
over
time.
That
really
doesn't
need
to
be
real
time.
G
That can be a few days late, because we're looking at it over the course of a month or something like that. Whereas the more work-in-progress type view that we're working towards, that one does require it to be sort of more real time. Not necessarily instant; if it's a minute or two...
G
I don't think that's too bad. But if we can find the right balance of providing enough real-time data for that work-in-progress view, whilst balancing it with the historical stuff that we can load over a longer period of time, I think that would be ideal, if that's valuable at all. I'm just going to follow along.
E
Yeah, that makes sense to me, to be honest. Within GitLab we didn't really do these kinds of aggregation jobs; we just started exploring these ideas in order to improve the performance of the application, and I thought for Value Stream Analytics it could also be a way to go. But I don't have the specifics.
F
As a side note, would the aggregation also make sense in conjunction with templates? Like if we had some predefined setup templates for specific common use case value streams?
E
Obviously, this data is not going to be available in fast storage, or we won't be able to access it quickly, because we cannot aggregate it for all groups and all projects; that would be quite a significant load on the database, and it would cost disk space. So for default stuff that the users are not really changing, cannot really change (let's say the open source version, where you have these default stages), we would still rely on the current way of calculating things: looking into tables and doing aggregations.
E
But
if
we
are
talking
about
paid
paying
customers
and
custom
stages,
then
we
can
move
it
into
a
more
improved
approach
of
better
performing
backhand.
G
Another concept which is really useful in value streams is using different item types. There's a distribution of different types of items which come through your value stream: you have bugs, features, security vulnerabilities, typically five or six in total, and these are mutually exclusive things.
G
So
is
there
some
way
to
optimize
around
pulling
from
a
smaller
data
set
once
you've
defined
these
workflow
items
so
say
so,
rather
than
going
from
like
the
entire
pool
of
issues,
once
we
have
issue
types
implemented
and
an
issue
type
is
defined
as
this
and
within
your
value
stream,
you
say:
I'm
only
interested
in
seeing
these
types
of
issue
types.
Would
that
reduce
the
data
set
and
somehow
help
with
performance
at
all.
E
So yeah, there are two options. One: you define an extra flag in your stage definition, so when you set up the stage you say that this particular stage only looks at incidents, or bugs. Then, when we aggregate the data, we only aggregate a really small set of data, which makes everything fast. If you don't do this, what we can do on the UI is maybe an extra filter in the dropdown where you say: this is my value stream...
E
If we want to do that on that special table where we collect this data, we will have to record the issue type there, for example. So this is like a post-filter, we can call it that. This can also speed things up, but it won't reduce the amount of data we are storing; it's just fast access to a slice of the data, something like that.
E
So generally, the answer is yes, it's doable, and if it's not too many types we are talking about, then this is totally doable.
E
One limiter: if it's really a state column, like a state or a type on the issue or the MR tables, that's fine. But as soon as you say to me, "I have the bug label, and I'm only interested in the bug labels," then it's a bit tricky, because then I will have to look into the labels, find the issues with the bug labels, and aggregate those. So that's an extra step.
B
The current items view: Dan and I were talking about that the other day. It probably doesn't make sense to have a date selector for the current items view, because you just want to see what's open in a stage right now.
E
That's a tricky one, because think about a group that is, I don't know, like our group, a few years old, and there is tons of stuff in it from months, years ago. Right? I created an issue two years ago when I joined, and it's still there, and some stage might pick that up. So showing the whole...
E
But if you manually add the filter when you open the page, the first page load will be terribly slow, right, because you have to wait until all the stuff is loaded, and then you filter down by state or type.
E
Yeah, yeah, so that's the other thing. A year ago, one and a half years ago, when this feature, the new backend, was initially merged and went through review, our timings were at least four times less than the current ones, and as the group increased its number of issues and projects, things got slower. So this trend will likely continue.
D
E
Yeah, that might work in some cases, but we also have aggregations like median or average, and that requires the whole data set that matches the filters to be loaded and sorted, and the midpoint taken out of it. So that's a tricky one. Honestly, I haven't looked lately at how the queries are performing. It could be that, you know, somebody thought "okay, I don't need this database index," or changed something deep in the core query logic that we are also using in VSA, and things got slower.
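The point about median is worth making concrete: counts and sums compose across batches, so an average can be maintained incrementally, but a median needs the whole filtered, sorted data set. A short illustration:

```python
from statistics import median

# Stage durations (days) arriving in two batches.
batch_a = [1, 2, 9]
batch_b = [3, 4]

# Counts and sums compose across batches, so averages aggregate cheaply...
total = sum(batch_a) + sum(batch_b)
count = len(batch_a) + len(batch_b)
print(total / count)  # 3.8

# ...but the median of per-batch medians is NOT the real median:
print(median([median(batch_a), median(batch_b)]))  # 2.75
print(median(batch_a + batch_b))                   # 3 -- needs the full sorted set
```

This is why VSA's median stage times cannot simply be read off pre-summarized partial results: the query has to load and sort every row matching the filters first.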
C
So
that
issue
is
the
the
most
recent
link
in
the
chat
here,
and
I
will
also
add
it
to
the
agenda.
By
the
way
I've
been
I've
been
keeping
some
notes
in
our
like
optimize
meetings
doc.
So
we
have
some
notes
there
as
well.
We
have
like
maybe
30
seconds
left
on
this
call.
Is
there
anything
anybody
wanted
to
wrap
up
with
real,
quick.
G
I think VSA has made a huge number of steps forward since I joined the team, so I think we've done a great job up to this point. Congrats, everyone. I really like it. There's a lot to go, but I think we've done a lot of great stuff.
B
Yeah, and I think if we can resolve this issue, where we're not showing what customers would expect, we've got a really solid foundation to build upon, and then we can start iterating with fancy new features.
B
Agreed. And there's just so much attention being paid to anything with the words "value stream" in it at the moment that this is a really key time to make this change, I think. And I wouldn't be surprised if we see our adoption really start to...