From YouTube: Pipeline Execution Feb 2022 RICE scoring recap
Description
Recap of the results of the work done by Pipeline Execution to use the RICE framework to score epics/issues for the coming year.
Issue: https://gitlab.com/gitlab-org/ci-cd/pipeline-execution/-/issues/91
Hey team, I just wanted to walk through the results of the RICE scoring. Thank you, everybody, for doing that. First off, you're all amazing. I really appreciate the effort and the thinking that went into this. I know it can take quite a lot of time and really stretches how we think about these things, so I really appreciate your efforts there. I added this comment to the issue on Friday, so hopefully you've had a chance to review it already. A couple of initial thoughts from me.
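For reference, RICE scores each item as (Reach × Impact × Confidence) / Effort. A minimal sketch of that calculation; the sample values below are purely illustrative, not from the team's actual spreadsheet:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE prioritization score: (Reach * Impact * Confidence) / Effort.

    reach: people or events per time period
    impact: relative impact scale (commonly 0.25 to 3)
    confidence: fraction between 0 and 1
    effort: person-months of work
    """
    return (reach * impact * confidence) / effort

# Hypothetical example values, for illustration only:
score = rice_score(reach=500, impact=2, confidence=0.8, effort=4)
print(score)  # 200.0
```

Higher scores rank higher; dividing by effort is what lets a cheap, moderate-impact item outrank an expensive big bet.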
What was surprising: I was really expecting more interaction in some of the issues and epics around merge trains, and it wasn't there.
So my initial hypothesis, which I'm looking to disprove, is that customers just aren't using this, and that's why we're not getting that feedback. I'll be creating an issue around capturing some of the feature usage to make sure we understand how often merge trains are really being used, and that'll help us drive direction in the latter half of the year. What wasn't surprising to me was how much overlap there is, just given how old some of the items in the backlog are.
It's a very mature product, and we've had a lot of great ideas that show up multiple times. That manifests in epics that sound really, really similar, and things got lumped together depending on who was running the backlog at the time. I think we saw that around the secure CI job token and pipeline permissions, where I think there's some ambiguity between those two, as well as all of the epics around pipeline analytics, showing additional data to customers about what's going on in your pipeline and how you can make it more efficient. So one of my duties is going to be to clean those things up and try to get them back down to a single source of truth: directionally, here's where we want to go, and here are some of the first steps we can work on, with iterations after that.
So I took the chance to just rank everything here, taking that summary by rank and copying and pasting just the values so that I could sort things without formulas working and being funky. Hopefully you'll have a chance to review this. I copied roughly the top 10 back into the issue and just left some of my notes there.
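Pasting values and then sorting, rather than sorting live formulas, is just plain data manipulation. A minimal sketch of that ranking step; the epic names and scores here are made up for illustration, not the actual results:

```python
# Hypothetical copied-over values: (epic, RICE score) pairs,
# as they might look after pasting values out of a spreadsheet.
scored = [
    ("Pipeline creation is slow", 180.0),
    ("after_script on canceled pipelines", 240.0),
    ("Pipeline analytics", 95.0),
]

# Sort descending by score; no spreadsheet formulas to recalculate.
ranked = sorted(scored, key=lambda row: row[1], reverse=True)

for name, score in ranked:
    print(f"{score:>6.1f}  {name}")
```

The point of copying values first is that the sort operates on static numbers, so nothing shifts underneath you mid-sort.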
So this does show that pipeline creation being slow is going to be one of the top things we work on in the first half of the year, and an issue that I'd like to work on in the first half of the year is ensuring after_script is called for canceled and timed-out pipelines.
That's a really, really old and really popular issue, and things like that, I think, are going to contribute towards us getting the Continuous Integration category back to a Lovable state from Complete, just getting that experience better for everyone. After that, we're looking at the CI job token and providing pipeline analytics in the back half of the year. That will come into a lot more clarity after going through and cleaning some of that up, focusing in on what our minimum viable change is here.
What is the first thing we can start to deliver to customers to get them that better experience for both of those things they want to do? So I've started to apply those fiscal year 23 labels to epics. You can see, in the back half of the year, some of those that got lumped together that I'll be working on combining, and then there's the front half of the year.
This is stuff where we have a lot more clarity on what we need to do. We've been working for quite some time on the CI minute limits for public projects on gitlab.com; right now we're deep in the weeds of the open source stuff, getting that license type actually working correctly, and picking up some work from Utilization. Pipeline creation is slow.
I called that out as well; I'm working on narrowing down what the acceptance criteria for that is, and then we can dig in. Then there's the gitlab.com CI builds metadata table growth: that's really work that Marius and Grzegorz are doing to ensure we can continue to scale the product.
There's a lot of CI scalability work there. As we look forward into Q2, I'd like to get my arms around the MR mergeability bugs epic. Right now that's just a collection of a lot of bugs that we could work on, that Create can work on, that Runner can work on, so we're going to try to narrow that down to what we can actually move on to make the experience better. There's more partitioning work and more CI scalability work, so I'd expect some of this will carry over from Q1 into Q2, and then we can really tackle that mergeability work and probably pull some of what happens in Q3, the secure CI job token workflows, forward into Q2, so we start working on that earlier and delivering, especially around some of the bot use cases that Nicole and some of the other folks in Secure are trying to drive.
So that's the update I wanted to provide, just verbalizing what I've already dropped into the comment. Feel free to leave a comment on the video, ping me, or add comments to the issue itself or in any of the epics as you start thinking about what this means for us and whether these are the right things to be working on. The next step for me is to create an issue that talks about our performance indicator: what is the right thing for our group to be measuring around product usage, and what's going to drive us forward, so that as we deliver work we can keep measuring whether it's driving that key metric forward for us. I'll tag everyone in that issue and add it to the agenda for the coming week. But thank you again so much.