From YouTube: Optimize (VSA & DORA4) Roadmap view Discussion - Oct 2021
Description
Roadmap issue: https://gitlab.com/gitlab-org/manage/general-discussion/-/issues/17315/.
Related (shorter) video: https://www.youtube.com/watch?v=dWHBiyH1yNE
B: Hi, I'm Nick. I'm a designer in Manage as well, currently working on the Optimize group and also the Workspaces group.
A: Yeah, so today we wanted to walk back and forth about some of the things that we already have supported in Optimize — namely Value Stream Analytics — and walk through what could potentially be future visions. We'll discuss some things just by talking, and for some we'll show mock-ups. So with that, I'll share my screen and let's discuss what we have supported today. I'm hopefully sharing the right screen. The first thing that I noticed when I go into Analytics...
B: Correct, yeah. And you also have Value Stream as the default, and not many people may know what a value stream is. So you're sort of taken to this analytics page instantly, without too much context about what this value stream is, what it's useful for, and how it connects to GitLab and the rest of the product. So you're presented with a lot of information that could definitely be quite confusing and overwhelming.
B: Oh sorry — but we'd obviously like to make that a little bit more interesting, or easier to onboard and understand contextually. So I agree: we need to somehow connect this page to the rest of the product, specifically the analytics pages as well, to make it feel a little bit more cohesive, and also show how the dots connect between certain types of analytics.
A: This is the one. So basically — and correct me if I'm wrong — the value stream is built on top of labels, and based on the label that the issue (slash MR, but let's start with issues) has, that's going to indicate which stage it's in, correct?
B: Yeah, there are start and stop events. Some of those are based on labels, and some of those are based on specific actions — like when an issue is created or when an issue is closed, that sort of stuff.
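The start/stop-event mechanics described here can be sketched as a small calculation. This is an illustrative model only, not GitLab's actual implementation, and the event names are made up:

```python
from datetime import datetime

def stage_duration(events, start_event, stop_event):
    """Time an item spent in a stage, from its start event to its stop
    event; None if the item never passed through both events."""
    start, stop = events.get(start_event), events.get(stop_event)
    if start is None or stop is None:
        return None
    return stop - start

# Hypothetical timestamps for one issue: created, then scheduled 15h later.
events = {
    "issue_created": datetime(2021, 10, 1, 9, 0),
    "issue_scheduled": datetime(2021, 10, 2, 0, 0),
}
print(stage_duration(events, "issue_created", "issue_scheduled"))  # 15:00:00
```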
A: Okay, so "Issue" here shows me — this should show me the average of the entire stream, correct?
B: No, this one is quite strange: it's from the point where an issue is created to when an issue is scheduled.
A: What I like about this view — while you pull up the docs — is that it lets me understand what my flow is comprised of, which is, you know, Issue, Plan, Code, Test, Review, and Staging. But what's really interesting is that it doesn't show Production, which maybe we need to change as a default, because I think that's the most interesting part. And also we have the part where issues are open but not necessarily closed.
A: Then we have a really nice view of very general metrics — some DORA, some non-DORA. So we have lead time, or cycle time, lead time for changes, the number of new issues in whatever the date range is, the number of commits, and deployment frequency, which is really interesting. Then, depending on where I am in the value stream — right now I'm in Issue — it'll give me a list of issues that are part of the stream, and it can be sorted, which I think is pretty new, so that's helpful. And we have pagination next, in preview. So what I'll see here is the list of the issues.
B: Yep. It also doesn't account for repeated times that something's been within a stage. So if someone put the milestone on automatically and then removed the milestone, it would be counted in that stage, and then you wouldn't see it appear in this stage again if they assigned a milestone again after that.
A: Awesome — that's actually pretty good: from an issue being created, 15 hours until the first commit. I think that's pretty impressive — go GitLab. Okay, then we have coding time: is this until the commit is assigned to a merge request?
A: Yeah, yeah, there you go. We have some tooltips here that kind of give us hints, which is really helpful. Then we have the testing phase, which in theory is the one that usually...
A: ...takes the longest. Review makes sense also: until something goes through code review, and then until it gets approved and goes to staging. So this is pretty neat. This is our stream, and it gives us a lot of data — just looking at it at a glance, you already see that this is probably a problematic area that needs attention.
A: There was a previous video that you shared with me, which I'll add as a link to this video, where you discussed real-time analytics. So let's talk a little bit about real-time analytics. Right now this is not in real time: you can see the stream, but there's no indication that something is kind of going off the rails at this moment. So I want to pick your brain a little bit about how you envision that happening.
A: So maybe it's at the group level, where you see the average time of your value stream — but were you imagining that if a specific issue takes me out of that average, I'm going to get a notification? What was the thinking there?
B: Yeah. So this is using value stream for sort of two different contexts. The context this was initially focused on was to reflect on some of the trends of how long your value stream has been taking, and what you can do to change the process in order to reduce the time it takes for your value stream in total. And then using value stream in real time is more focused on helping to remove blockers and bottlenecks and stuff in real time.
B: So you can sort of continue that flow within your value stream — potentially two different personas are required for that. But yeah, in order to work with this sort of stuff in real time, we'd need new visual indicators and new ways of visualizing the interface to highlight these things that are potentially outliers, or taking longer than the average time, or whatever sort of threshold we want to set there. And also, yeah, potentially alerting through the use of to-dos or email notifications, things like that.
B: So, for instance, if an issue has been in the Plan stage for much longer than the average time — say twice the amount, so if five days is usual and it's been ten days — then a product manager, someone who is maybe subscribed to this particular value stream, will get a notification saying: this issue is taking a while, maybe you can action it.
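The twice-the-average rule from this example could be expressed as a tiny check — a sketch only; the function name and default threshold are assumptions, not a shipped API:

```python
from datetime import timedelta

def needs_attention(time_in_stage, stage_average, threshold=2.0):
    """Flag an item once its time in a stage reaches `threshold` times
    the stage's historical average (twice the average by default)."""
    return time_in_stage >= stage_average * threshold

plan_average = timedelta(days=5)
print(needs_attention(timedelta(days=10), plan_average))  # True: time to notify
print(needs_attention(timedelta(days=7), plan_average))   # False: within bounds
```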
A: Okay, so let's talk about an MVC for a minute. I think the vision is great, because anything we can do to help our users in real time is much better than, you know, two weeks along the way, where there really isn't much to do. So this is a great example: we have Plan here, the average time is 15 hours, and we have one here, for example, that's at seven days, which obviously is a problem. Let's assume that twice as much is what causes the alert — I'm assuming that we want...
A: We could make this configurable at some point, even though we can decide that the default is twice as long. And then visually we would have something showing that this is a problem — maybe it's red, maybe it's at the top, maybe it has some kind of icon or symbol — and then, right, you add it to the to-do list, a notification, or maybe even a Slack message with an integration, something like that.
B: Correct. And that's sort of from the side of the analytics page, but we could also consider potentially embedding these analytics, or these insights, into the actual issues themselves. So it's not just the manager or director who's using this that would see it, but the developer is seeing it as well, or the product manager.
A: Okay, awesome — so that's really neat. We didn't make a lot of progress in real-time analytics.
A: Another thing that I want to talk about — and maybe also talk about the gap — is something I heard from analysts, which is: value stream is great because it gives you observability, but really you want to know about those outliers and also those comparisons. So this is a value stream for a specific project, but it doesn't tell me how I'm doing versus another project — or maybe that's more suitable for the group level; we can think about that. But the idea is, I have some kind of average cadence. It doesn't matter...
A: ...if we're talking about the value stream itself or one of these metrics — and this specific project, you know, didn't meet the average, or didn't meet our goal. We can also talk about average versus goal. Is that something that we have thought about as well?
B: Yeah. So if you think about it, there's the potential that we want to be comparing different projects against each other, or different groups against one another.
B: But a project or a group may also have individual value streams within them that we'd want to be comparing as well. So those are other things that we'd want to compare, and the general types of metrics that you'd want to compare against actually come from the Accelerate book — the DORA 4 metrics — and those are typically good indicators to show how a particular project or group is doing.
B: Yeah, ideally we'd be able to see the projects or groups against each other within a table, at least, and sort of see how the metrics fare against one another, to get a good indication of how these value streams are performing.
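A side-by-side table like the one described might look like this — the project names and figures below are purely illustrative:

```python
def metrics_table(rows):
    """Render projects or groups side by side with a couple of DORA-style
    metrics, so their value streams can be compared at a glance."""
    lines = [f"{'project':<10}{'deploys/day':>12}{'lead time (d)':>15}"]
    for r in rows:
        lines.append(
            f"{r['name']:<10}{r['deploys_per_day']:>12.1f}{r['lead_time_days']:>15.1f}"
        )
    return "\n".join(lines)

# Hypothetical per-project figures, not real GitLab data.
rows = [
    {"name": "backend", "deploys_per_day": 4.2, "lead_time_days": 1.3},
    {"name": "frontend", "deploys_per_day": 1.1, "lead_time_days": 6.8},
]
print(metrics_table(rows))
```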
A: Yeah, absolutely — so we'll talk about DORA 4 also and how to connect that as well. Another kind of very general question that I have — and I'm not sure if I saw documentation about this — but let's say I'm looking at this issue list here and I have a confidential issue. In terms of permissions, will that show up or will that not show up? Does it depend who is logged into the system?
A: Okay. Also, in terms of who creates the value stream — do we have differences in terms of permissions in that sense, or can anyone create their own? Is it personal, or is it per team?
A: So that takes us to the group level — let's talk a little bit about group-level value streams. The group-level value stream looks a little bit different than what we were looking at at the project level. First of all, you can see we have a bit of a different value stream here — my first question is why. Then we have these charts that are very specific to the groups and don't appear at the project level. And that's basically it, I mean, everything else is the same.
A: We have kind of the same kinds of metrics. We can filter — we can filter by project, we can filter by date — that's kind of the same. Then there are these two buttons, which are a little bit different. So can you walk me through this page as well?
B: Sure. So the top two buttons are the different saved value streams that you have at the group level now, and saving value streams effectively allows you to change the different stages that you see in that chevron, or tab, across the top. So right now "Customer Test" is a specific value stream, and if you click the edit button, we can actually dive in and see how they've constructed this particular value stream.
B: No, it doesn't — that's a very good idea. Maybe we should create an issue for that.
A: Okay, so I have a to-do from this meeting. Okay, so we kind of already saw how we can create our own custom value stream, or customize the default ones, and, as you can see here, there are a lot. Is there a limitation on the number of value streams that I can create?
B: Yeah, I'd say it potentially makes sense to do that based on what they're trying to achieve with their value stream as well. That would make sense.
A: How long does it take for me to fix bugs or hotfixes or whatever — and we do have this data in other analytics in the product, so it may make sense, at least in an MVC, to show what we already support, without allowing, you know, fully customizable analytics.
A: So let's talk about these charts — can you kind of walk me through this?
B: Well, this first chart is a bit of a mess, and it has been for a while. Initially it was supposed to be a scatter plot, and then you'd see an average line going through it, and potentially some bands showing the percentiles as well. That sort of visualization would effectively give you the average lead time of your value stream and how different items or issues are falling either within or outside that particular lead time.
B: The performance of this chart was abysmal, so it was shifted in the MVC to sort of account for less data and fewer data points, and that's why it looks like this. In my opinion — there's an issue out there currently — until we can get the performance right, this should be changed to a line chart, potentially with things superimposed on it.
B: So having the lead time and the cycle time both within the same chart would potentially be quite useful, or maybe the lead time for changes over time.
A: Yeah. So right now I see that it takes all stages into account, and it kind of shows me the average, right? So this...
A
Is
a
30-day
interval
and
it
shows
me
the
daily:
this
is
actually
pretty
cool
like
if
we
know
that
the
average
is
a
day,
then
everything
above
it
is
like
green,
great
right,
yeah
yeah
when
we
created
this
chart
was
there
thinking
about
adding
annotations
like
this
was
the
release
day.
This
was
a
code
freeze
and
what
not
to
kind
of
help
explain
what
we're
seeing
in
the
chart.
B: So you could click through to those outliers and understand the context — and that's why value stream in GitLab is actually such an interesting value proposition: you can actually click through and see all the underlying issues and MRs and data that's actually contributing to how your value stream is performing. So not just the quantitative side, but also the qualitative side of comments and all that sort of stuff.
B: But I agree — simple annotations on this chart saying we released on this day, or there was, you know, a crash on this day, or whatever it may have been, would be super useful. And another sort of slight usability fix that would be interesting for this chart: you see that you can select the different stages, to see how those add up from start to completion?
B: We could actually simplify this chart dramatically by removing that dropdown and then taking the chart for the specific stages and putting it within the tab bar above. So when you click on Plan, you just get the chart for Plan; when you click on Code, you just get the chart for Code; and this total overview stage would give you the chart for the entire value stream.
A: Yeah, yeah, that makes sense — great. And this chart is really interesting. It's one that customers really like, especially product managers, and it kind of shows me what the team was working on in a given day.
B: This is a chart which is effectively depicting one of the key flow metrics, flow distribution, and it shows you the different types of work items that are coming through your work stream. So, as you said, if you are focusing on having a certain percentage for the number of bugs that you're trying to fix, as opposed to new features or vulnerabilities or whatever it may be, you can track that over time and ensure that that's what's happening.
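Flow distribution as described boils down to a percentage breakdown by work-item type — a minimal sketch, with made-up item data:

```python
from collections import Counter

def flow_distribution(items):
    """Percentage of work items per type (bug, feature, vulnerability, ...)."""
    counts = Counter(item["type"] for item in items)
    total = sum(counts.values())
    return {t: round(100 * n / total, 1) for t, n in counts.items()}

# Hypothetical mix: 3 bugs, 6 features, 1 vulnerability.
items = [{"type": "bug"}] * 3 + [{"type": "feature"}] * 6 + [{"type": "vulnerability"}]
print(flow_distribution(items))  # {'bug': 30.0, 'feature': 60.0, 'vulnerability': 10.0}
```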
A: Because it lets me toggle between issues and merge requests — silly question: why don't we show both? Why do we need to toggle between them? These both seem really interesting.
B: Yeah, seeing them both superimposed would be really nice. Performance has prevented that in the past, but that's also something that would be super cool to see next to each other, because obviously there's a relationship between issues and merge requests, so seeing them in context would be really cool.
A: Cool. So we talked about project-level and group-level analytics.
A: We mentioned DORA metrics by chance, so — the DORA metrics currently appear under CI/CD analytics. It doesn't really make sense for them to be there, but hey, that's what we have at the moment. So at the moment we support the deployment frequency and lead time metrics only, and assuming that we do want to connect these metrics to value stream, or at least have them linked from one another, from the tile view...
B: Yeah, having some consistency between the scopes would be useful. Whether we handle that by something like: when moving from the value stream page, the MVC could just be to take us to the "last month" tab here — that would be a quick way of going about it. But then we could potentially evaluate how we handle scopes and pass labels and stuff over within filters.
A: Yeah. I'm going to go back to value stream for a minute, because lead time for changes, deployment frequency, and the other two metrics that we want to add — which are mean time to restore and... I don't remember the other one.
A: They all talk specifically about the production environment. When we look at the value stream tiles — when we look at lead time — it says "created to closed," but it doesn't necessarily talk about the production environment specifically; likewise in terms of deployment. So I wonder if we really do need to keep this consistency, because it looks like they're both kind of talking about different things, and I wouldn't want to confuse the user. So maybe it does make sense to have them separate.
B: Yeah, and then there's also the fact that not all customers may have their environment named "production" as well. So being able to have a little bit more flexibility about what specifically the thing is that they're actually going to deploy to — but that's...
A: That we solved: we introduced a new keyword, deployment tier, in the YAML file, so you can call your production environment whatever you want, you can have multiple production environments, and this shows aggregated data. So that problem is solved. But going back to here — I just want to make sure that we have consistency, because this...
A: ...is the total number of deployments to production. So hopefully they're both sitting on the same API.
B: Yeah, we want consistent metrics, and value stream will effectively be the top-level view that you can then use to click into more detailed, specialized views. So it's like a T-shaped model, where value stream tells you what's going on in your value stream, and then you can click into specific analytics — like this CI/CD analytics — to understand in a little bit more detail.
A: Well, just from clicking, it does look okay — like, I didn't check the math, but the deployment frequency was 67-point-something, and you can see that here we have 116 and we have 18, so the number probably is okay. Maybe, just to be nice, we should put the average here on the CI/CD page as well, so they just speak the same language.
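For what it's worth, the numbers read off the screen are consistent with a simple mean over the two periods shown — assuming (this is a guess) that the headline figure is just the average of the per-period totals:

```python
def average_deployments(per_period_totals):
    """Mean deployment count across the sub-periods shown on the chart."""
    return sum(per_period_totals) / len(per_period_totals)

# The two totals visible on the CI/CD analytics page in the walkthrough.
print(average_deployments([116, 18]))  # 67.0, close to the "67-point-something"
```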
B: Exactly — and having other metrics, like just the quantity of deploys as well, just to provide that additional context, so you don't need to try and figure it out from the chart by doing integration or differentiation; having that mapped out for you.
A: Yeah, that makes sense. I also saw that you opened an issue for that, so we should be good to go. So let's talk about DevOps reports — DevOps adoption, a pretty new chart that we added at the group level. It's also kind of standalone, but it talks together with value stream, so we should talk about maybe potentially connecting those. DevOps adoption shows you selected features from different areas in the product; we started with Dev, Sec, and Ops.
A
Obviously,
there's
a
lot
more
that
we
can
add
here
we
talked
about
adding
compliance
frameworks,
adding
issues
adding
review,
apps
and
so
on
and
so
forth.
So
there's
a
ton
of
different,
adding
protect
features,
so
there's
a
lot
of
things
that
are
still
missing,
but
as
an
mdc.
What
this
shows
us
is,
you
know
what
are
the
different
groups
adopting
at
the
group
level,
and
what
we
can
see
here
is
that
different
subgroups
within
the
group
behave
a
little
bit
differently.
A: Some, like GitLab Services, have 100% of all the Dev functionality, but, for example, Growth isn't using any of the Sec features. Now, this could be okay, and it could be maybe not okay. I think that it would be really powerful to kind of connect this — saying, well, GitLab Services adopted quite a bit of the different functionality: how does this affect my value stream?
A: Like, maybe even have the value stream average displayed here, and say: wow, this one is doing really well — what functionality are they using? What can we learn from them? Or something like that. But what have you been hearing or seeing or thinking about?
B: Correct, yes. So there's an interesting relationship between DevOps adoption and value stream insights that we keep on hearing about, and basically the two are very much related: the way your value stream is configured, and the purpose of your value stream, very much determine the types of features that you actually want to be adopting.
B: So not all value streams are the same, and the purposes — the outcomes that the value streams are trying to achieve — are actually widely different. So not everything is maybe going to require Ops features; not everything is going to maybe require security features. So sometimes, when comparing these different groups across here, it's not apples to apples, it's apples to oranges that you're comparing. So having a little bit more context of what the value stream is doing and how it's performing can give a bit more insight as to what's being adopted.
A: Interesting. So you mentioned that some may be adopting features and some may not, and that may be okay. So let's take compliance, for example — even though we don't support it yet. Let's say I have a project that is fully compliant, and it's using compliance frameworks and so on, and there may be a project, or a group of projects, that isn't — and maybe that's okay, because it's internal and you don't have to be compliant. Is there a way that we can kind of add notes here, saying: well, it's okay that they didn't adopt this? Just having someone acknowledge the problem — or the lack of a problem.
B: Yeah. So before we were thinking about annotations, we were actually thinking about the ability to hide different features that may not be relevant for your value stream, and that could maybe give us a better indication of what's been adopted within a particular group. So, for example, if a larger group says "we are right now not too concerned with adding compliance features," then the other subgroups may not have to take that into account, and that will sort of reflect in the percentages that are shown. So, annotations...
A: That makes sense. And it makes sense that — you know, I've been talking to a lot of customers, both in terms of Compliance and in Optimize, because of my role, and a lot of times they decide that they want to set a company policy; for example, they want to make sure that every pipeline runs Secure features. And here, at a glance, you can see that a lot of the projects do not. So assuming that my organization decided...
A: ...could we maybe add something that would either send a notification or maybe even open an incident, so that someone knows that action needs to be taken — similar to when, you know, a vulnerability is found, or something like that? Is that something we thought about?
B: For sure. I mean, just to give a little bit more context: the initial purpose, or job to be done, that we're focusing on with this MVC was helping sort of DevOps champions, or GitLab champions, to justify renewal. So we've really optimized this interface to show what features are currently being used and how they're being used, and then we can start to prioritize the next jobs to be done that we want to help accomplish.
B: So if that happens to be sort of compliance-related stuff, there are plenty of routes that we can go with here, and I think there are a lot of different jobs that we can serve with this interface.
A: Cool. So we talked about what we have today in the product. With the time left, I would like to talk about maybe the future roadmap, and for that I kind of used this issue — but maybe you want to walk through the Figma file; you know it better than I do, because it's yours. So I'll stop sharing my screen if you want to share it.
B: So, with regards to the vision, we wanted to sort of add a little bit more functionality and start to differentiate between those different jobs to be done that I was talking about before. So there's the job to be done which is more about helping to identify outlier work items and unblock those in real time.
B: There's the job to be done which is basically reflecting on your value stream and seeing the trends that have happened over time — over, like, the last few months — to see whether you've been improving and why. And then there's also starting to compare value streams, and sort of seeing what other teams are doing, in order to improve the way that they're performing as a team.
B: So this first view is very much focused on the "deliver" job to be done — helping to unblock stuff in real time — and it's starting to provide a more condensed view, which is giving you an overview of all the items that are currently within your value stream and what stages they're in. It's sort of roughly analogous to a board, but it tries to get everything into more of a condensed view, and then it starts to highlight particular items that are outliers.
B: We can define outliers as stuff that's falling outside a particular set percentile, and then from there we can provide some sort of annotation: this is currently blocked, and this is why. Maybe you can click through it and then, like, comment on the issue or see what's actually going on. So it's a really easy view to see what's happening within your particular value stream, see which items are outliers, and then take action on those particular items as well. And having the ability to visualize these items, or group these items, in different ways — so that tasks-by-type chart that we were talking about before; I'm talking about that flow distribution of...
B
Do
I
want
to
have
more
things
which
are
bugs
do
I
want
to
have
more
things
which
are
features
versus
vulnerabilities,
starting
to
represent
those
different
types
of
issues
that
may
be
in
here
by
categorizing
them?
Maybe
with
an
icon
like
this
is
a
feature
versus
this
is
a.
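The percentile-based definition of an outlier mentioned here can be sketched like this — a simple nearest-rank percentile, not whatever cutoff the product would actually ship:

```python
def percentile(values, pct):
    """Nearest-rank percentile, no interpolation."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[k]

def outliers(durations_by_item, pct=75):
    """Items whose stage duration falls outside the chosen percentile."""
    cutoff = percentile(list(durations_by_item.values()), pct)
    return [item for item, d in durations_by_item.items() if d > cutoff]

# Hypothetical days-in-stage per issue.
durations = {"issue-1": 2, "issue-2": 3, "issue-3": 4, "issue-4": 40}
print(outliers(durations))  # ['issue-4']
```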
A: Two questions. So, one: I really like the view, but I have a question about the representation of the type, because theoretically I can create any type I want — we saw that you can choose your own labels — and so I don't know if that would correlate to all the different types of labels that users customize.
B: Well, yeah. I mean, ideally these types need to be mutually exclusive. Maybe we could wait until issue types are a specific feature within issues, and then we can work off the back of those, and that would make it easier than labels, which are a little bit more loosey-goosey with regards to the type of data that we'll be showing.
A: Yeah. The second thing — it's not really a question, it's more of a comment: when I look at this view, it looks very similar to the Environments Dashboard. I don't know if you've seen that, but it kind of shows that same view where you can see a lot of different boxes, and, at least in the environments view, you can see at a glance if something is red or green, or needs attention or not. I wonder if that's something that we can adopt here.
B: Yeah, for sure. So right now this was sort of stripped of any color, because it was more of a wireframe, but yeah — introducing different visual cues and colors to highlight things that need attention is, I think, a great idea.
B: Sure. So Optimize is the job where you are reflecting and seeing the general trends that are happening for this particular value stream over time, and we can break that down in a number of ways. So we can see the DORA metrics here; we can see the key metrics, which sort of show you how effective your value stream is; and then we can also see the flow metrics as well.
B: So the item-type distribution, and seeing how that's performing over time; the average velocity, or how quickly you're closing down issues per iteration; the lead time; the amount of work in progress as well, which would be your flow load; and then flow efficiency as well. And this would start to take into account how efficient your value stream is — so, how often are you having to pause and wait for someone else to do something?
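Flow efficiency as described reduces to active time over total elapsed time — a one-line sketch with made-up numbers:

```python
def flow_efficiency(active_days, elapsed_days):
    """Flow efficiency: share of elapsed time spent actively working,
    as a percentage. Low values mean lots of waiting on other people."""
    return round(100 * active_days / elapsed_days, 1)

# An item actively worked on for 2 of the 10 days it was open.
print(flow_efficiency(2, 10))  # 20.0
```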
A: Yeah, this is really cool. I do want to note something that I heard from an analyst last week, which is that — again, visually this is awesome, and I see there's an option to see the full report — but when I look at this, I don't know if it's okay or not. Like, I'm looking at the DORA 4 metrics...
A: Maybe this is good, maybe this isn't good. Like, I see there's a triangle with the trend — is this trend versus the average, versus last month, versus another group? Maybe I have, you know, goals in the settings that I want to achieve — and maybe that's a problem. How do you see that, in terms of: when I look at this, how do I know that something needs attention?
B: Yeah — again, visual cues: highlighting when something is an outlier falling outside a percentile; being able to hover over these metrics as well and see different things, like the median or the mean and, yeah, all the other trends that are related to this metric as well. So initially providing sort of visual cues that something's interesting here, and maybe you should look at it, and then allowing users to investigate that further.
B: So, yeah, we talked before about how different value streams will perform differently based on the sort of stuff that they're adopting and what they're actually trying to achieve.
B: But it's really useful to see a comparison of value streams side by side, to see how they're performing and what you could potentially learn from other value streams as well. Why are they doing it that way? How are they so successful? Or maybe there's something that can be improved about them. So this starts to show you...
B
This
starts
to
show
you
some
of
the
key
metrics
for
how
the
value
stream
is
performing.
These
could
be
flow
metrics,
which
is
shown
here,
but
they
could
be
door
metrics,
there's
basically
just
showing
different
metrics
side
by
side
for
each
value
stream
and
then
there's
the
ability
to
click
in
to
see
how
the
different
sort
of
stages
that
they're
leveraging
from
the
from
the
devops
life
cycle
are
also
performing
as
well.
So
if
you
want
to
compare
how
effectively
a
value
stream
is
planning
stuff,
you
could
click
on
the
plan.
A: I think this one is really, really powerful, and exactly what people are looking for. A general question that I have is: is this showing me the value streams of my organization, or is it also teaching me, like, how others are performing? So on gitlab.com...
A: We have the advantage of having all this data. Of course we want to anonymize it, and we don't want to show, you know, the issues linked to a private project or something like that. But maybe, you know, if there's a really good performer, we can at least show the labels that the value stream is built upon, or something like that. Is that something that we also incorporated here?
B: Yes. So if you think about, like, the Accelerate metrics — the DORA metrics — they sort of categorize performers into elite, high, moderate, all that sort of stuff. So you could have that internally to compare, but having that from anonymized organizations and value streams, and comparing to them, would also be really useful, and we could potentially even create a better comparison by looking at the types of things that these value streams are trying to achieve, and the types of things that they're adopting.
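The Accelerate-style tiers mentioned here could be bucketed roughly like this — the cutoffs below are illustrative, not the book's exact definitions:

```python
def performer_tier(deploys_per_day):
    """Bucket a value stream by deployment frequency into
    Accelerate-style tiers (approximate, illustrative cutoffs)."""
    if deploys_per_day >= 1:
        return "elite"   # on-demand, deploying at least daily
    if deploys_per_day >= 1 / 7:
        return "high"    # roughly weekly or better
    if deploys_per_day >= 1 / 30:
        return "medium"  # roughly monthly or better
    return "low"

print(performer_tier(3))       # elite
print(performer_tier(1 / 10))  # medium
```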
A: Awesome. So with the very few minutes that we have left, there's one general question I kind of wanted to ask. We talked about real-time value stream, and we talked about observability, and we talked about comparison. Something that I was thinking about when I had planned the DORA 4 metrics in GitLab was in-product guidance — and we maybe touched upon it now.
A: Where would you think that kind of thing would fit into the value stream metrics? Because we have so much data, so many analytics, and so many features that it's just really hard to keep track.
B: If you think about it, there's a little bit of inefficiency within GitLab, where you need to configure your projects and groups in a particular way — so you're effectively configuring your workflows — and then your team works in these workflows, and then we have some analytics which show you how your teams work in these workflows, and you need to configure these analytics in order to reflect back, to make sure that they're aligning with the configurations that you initially made for your workflows.
B: So it would be great if we were able to bring more value stream thinking into the initial configuration of projects and groups, and visualize that in a certain way — so you're building up your workflows and your value streams in a certain way, and then, through that process, we can automatically configure the analytics that we need to reflect back. Having that loop of configure, do the work, and then reflect on the work — this continuous-improvement-type cycle.
B
It
means
that
we
can
sort
of
reflect
back
and
say:
oh
maybe
you
should
introduce
this
and
we
can
see
that
you've
configured
your
work
stream
in
this
way.
We
think
that
you
could
dramatically
reduce
your
lead
time
by
adopting
this
feature.
Let's
go
back
into
this
configure
value
stream
type
screen
where
we
can
add
that
new
workflow
and
figure
out
whether
that's
going
to
help
you
out.
B
I
think
gitlab
has
a
really
interesting
position
in
the
market
with
with
regards
to
value
stream
analytics
I
mean
we
are
a
value
stream
delivery
platform
and
we
provide
so
much
so
many
features
and
so
many
services
that
it's
really
interesting,
that
we
can
have
analytics
which
are
actually
like
attached
to
this
platform
and
are
embedded
within
the
product
as
well.
B
So
I
think
there's
a
lot
of
room
for
value
stream,
thinking
from
the
analytical
perspective
and
within
these
dashboards,
but
I
think
we
could
also
start
to
spread
a
little
bit
of
these
values
through
insights
in
context
of
the
actual
product
as
well
to
help
not
just
directors
and
managers
who
are
interested
in
value
stream
analytics,
but
also
developers
and
individual
contributors
as
well.
A: Great. I'm really excited to see, you know, the entire Optimize vision come to play.