Description
Today we caught up on how additional information about tests, like failure count and duration, would be helpful for the Engineering Productivity team.
Kyle shared how the EP team is using reference tags (https://docs.gitlab.com/ee/ci/yaml/#reference-tags) to go back to using the standard Code Quality template while keeping the rules they defined for speedy pipelines.
Agenda: https://docs.google.com/document/d/1ijjjvLeIY6a82dS2_8H-G0bhvpaZ5zxPgYpU08EYaHc/edit#heading=h.b4f8595pqvm6
A
I also like to call it Fireside Chats with Kyle. For April 2021, a couple of quick announcements: if you're into the agenda, I'll link it in the video. Roadmap deck changes of note: nothing major this month. Teams are continuing to prep for some iterative improvements in MR widgets. We started with Code Quality this month, so that those high severity items are up at the top instead of buried somewhere in the list of 100 things on the pipeline page, and then we're prepping for those larger issues. We should have Code Quality in the MR diff, at least noting that a file has a new violation in it, in 13.12, and then the full annotation in 14.0, which is pretty exciting. So that's in flight.
A
If Scott and Miranda get it done, we'll be adding the screenshot for the failed test into the modal on both the test summary and the full report, or maybe it's just the full report for the first iteration. So if you have a failed test, you can go see the screenshot that was captured as part of it. I know Scott was looking for some ways to get screenshots, or projects that did that as part of their testing.
C
I was gonna say: did you need help finding those? We capture artifacts of screenshots in our system, like our spec-system jobs, as an example. Do you need help?
D
Yeah. I kind of just pulled back on meetings a little bit, I'll say, so we don't cross paths as much anymore.
A
Cool, and then I'm gonna jump ahead in the agenda. There's a new MR that uses the !reference keyword to override the rules in the Code Quality template. I am not familiar with this new keyword, so I'm not sure what similar issues customers might have; I'm not quite following the use case. Yeah, maybe voice over that for us.
C
Yeah, so in the gitlab project pipeline we had essentially duplicated the Code Quality job definition from the template in our own CI config, because we wanted to modify the rules. With the !reference keyword, we found we no longer needed to duplicate that: we could just reference the rules definition from somewhere else, and everything else would still apply from the template, which was pretty cool. So, the MR...
C
If you look at the diffs of the MR (or at least, I don't know if Drew did, and that's why he did one of these), you can see what I'm talking about a little bit more clearly. I've seen general feedback from customers, through unlabeled issue triage, that adjusting parts of the templates is challenging for them, and I think !reference is actually gonna enable more flexibility than people were probably aware of before this came out.
E
If that's not a helpful description, I would love for you to do that. Way back when rules was being designed, we didn't talk about the additive mergeability of YAML in that way. I remember we kind of learned after the fact that merging key-value pairs in YAML is a really useful feature that you don't have available for array structures in YAML.
C
What was ours here? So the variables and the script were all, as far as I know, 100% equivalent to what is in the template, as well as the reports: everything that's removed here. Really, all we needed was those rules. What they do for our pipeline is say when we want to run Code Quality, because there's some filtering that we do based on the applicable changes.
C
So
as
an
example,
we
don't
need
to
run
code
quality
when
doc
like
product
docs
are
the
only
things
that
are
changed.
That's
the
purpose
of
these
rules
is
we
try
to
focus
running
jobs
as
running
jobs
on
mrs
that
that
that
job
would
apply
to
but
to
work
around
that
for
templates,
and
this
is
the
same
with
secure
security
templates.
Remy
has
a
mr
that's
pretty
much
the
same.
C
We had to duplicate all that content. So with this MR, he removed all that duplicated content, we brought in the include of the template again, and we just use rules, and then we use the !reference keyword, which I don't have open, but I'm happy to find it in the CI keyword reference. It's funny that it's called reference, and...
C
Yeah, so I think it's right here, which allows you to just kind of take a portion of an already defined config. So here we say !reference with the reports quality rules: just that key, right here, and it overrides the rules that are there with these, which are the rules that are applicable for our project.
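As a minimal sketch of the pattern being described here (the hidden job name and the rules themselves are illustrative, not the exact gitlab project config):

```yaml
# Bring the standard Code Quality template back in, unchanged.
include:
  - template: Code-Quality.gitlab-ci.yml

# Hidden job holding only the project-specific rules we care about.
.code-quality-rules:
  rules:
    # Run Code Quality on MRs only when relevant files changed
    # (illustrative filter; e.g. skip docs-only changes).
    - if: '$CI_MERGE_REQUEST_IID'
      changes:
        - "**/*.rb"
        - ".gitlab-ci.yml"

# Override just the rules key; image, script, artifacts, and reports
# still come from the template's code_quality job.
code_quality:
  rules: !reference [.code-quality-rules, rules]
```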
C
You know, I see this being the same if customers are trying to solve similar challenges, like "I only want to run Code Quality when these files change," as an example, and they use rules for that: they can really take our implementation pattern there. But even other things, I believe, should just apply, so if variables or needs is modified, you may be able to use that as well from other job definitions.
C
That is my understanding. Cool. I asked Remy to prepare documentation and then socialize it a little bit more, because I suspect the power of the bang-reference keyword wasn't really understood that much until we came across this. So I'll cross-post that documentation once he prepares it next week.
A
It'd
be
great
because
we,
I
think
we
still
have
a
number
of
open
issues
that
extend
environment
variables
or
variables
rather
so
that
folks
can
do
various
things
like
that.
And
if
this
would
be
a
way
to
solve
those
issues,
we
could
do
a
walkthrough
of
closing
all
of
the
issues
posted
into
them
and
then
posted
unfiltered
of
here's.
How
to
better
use
the
reference
to
change
all
of
those
variables
within
the
docker
image
that,
while
we're
using
code
climate.
C
So yeah, I thought this was really exciting, and I agree that it needs more exploration. But seeing how that keyword could be used, as issues come in that you're looking at, might be a workaround, at least, to what the product, or what a customer, might be looking for from a solution to their problem.
C
Yeah, that is my understanding. It's like a YAML anchor, but: say you have an anchor that defines, like, four different attributes. You can take just this one specific attribute within that YAML config. So if you had needs, rules, and all these other things defined in the YAML anchor, you could always extend that anchor if you needed all of them; but if you only wanted the rules, that's where you could use !reference.
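A small sketch of that difference, with illustrative job names: an anchor merge pulls in the whole mapping, while !reference can take a single key:

```yaml
.defaults: &defaults
  needs: []
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'

# YAML anchor merge: inherits *every* key from .defaults (needs and rules).
anchored-job:
  <<: *defaults
  script: echo "anchored"

# !reference: takes only the rules key, leaving needs untouched.
referenced-job:
  rules: !reference [.defaults, rules]
  script: echo "referenced"
```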
C
Okay, yeah. So I'll say the last three months have been quite challenging outside of work, so I feel that I should know the answer to this question before I ask it, and I want to make sure I vocalize that as well. But is there already a way, or an issue, for retrieving the slowest tests that are run on a project? Where this came up in the Quality staff meeting was: if we're trying to optimize our end-to-end pipelines, which run against staging or live environments...?
C
Right, and I'm sorry that I didn't have something ready. Let me just open a pipeline and kind of talk through the challenge that I'm seeing, and Zeph, maybe here's where you can just say, like, "here's where it is."
C
Let me share again. We have this project, which runs end-to-end tests against staging; there's tests that are added to the test summary, and I can see, like, these are the jobs. If I click in, I can see the total time. So I should say: oh, there they were, it's already there, and I don't know what I was looking at the other day where I missed it. But here I essentially was in a view where I could only see this, and I guess I need to... because, like, here, it's right here, what I'm talking about.
B
Yeah, but it's a pretty tricky API call to get to the suite: we pass in the build IDs that were used to run that suite. So you have to basically get the test report summary, which has the build IDs for each suite, and then make another request for the suites that you want.
B
I
think
there
is
an
old
api
endpoint,
because
when
we
first
initially
created
this
page,
it
was
all
just
one
end
point
right.
I
don't
know
if
that
endpoint
still
exists,
because
we
moved
it
to
this
new
pattern
of
getting
the
summary
first
and
then
each
tweet
individually
right.
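The summary-then-suite pattern described above can be sketched like this; the response shape and the per-suite URL template here are assumptions based on the discussion, not a documented API contract:

```python
# Hypothetical sketch of the two-request pattern: fetch the test report
# summary, then build one follow-up request per suite using its build IDs.
# The URL template and JSON shape are assumptions, not a documented API.

SUITE_URL = "/{project}/-/pipelines/{pipeline_id}/tests/{suite_name}"

def suite_requests(project, pipeline_id, summary):
    """Given a test report summary, build the follow-up request for each
    suite, passing along the build IDs that produced it."""
    requests = []
    for suite in summary["test_suites"]:
        url = SUITE_URL.format(project=project,
                               pipeline_id=pipeline_id,
                               suite_name=suite["name"])
        requests.append({"url": url,
                         "params": {"build_ids[]": suite["build_ids"]}})
    return requests

# Example summary response (shape assumed from the discussion above).
summary = {
    "total": {"time": 3600, "count": 120},
    "test_suites": [
        {"name": "rspec", "build_ids": [101, 102]},
        {"name": "jest", "build_ids": [103]},
    ],
}

reqs = suite_requests("gitlab-org/gitlab", 42, summary)
```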
C
So we received the general feedback that end-to-end pipeline duration can block our deployment for too long. So let's say there's a staging incident that blocks deployment beyond staging, and we, the quality department, would have to take an action to remediate that: it's like a two to three hour wait time after the fix is done for it to get merged, because of the pipeline duration.
C
So
trying
to
figure
out.
Where
do
we
get
the
most
benefit
within
that
two
to
three
hours?
Isn't
something
that
I
saw?
We
had
the
ability
to
get
good
insight
into,
but
I
don't
know
how
I
just
overlooked
the
thing
I
showed
everyone
as
I
was
talking
through
it.
So
it's
kind
of
embarrassing,
but.
C
It's
really,
if
I
let's
say
someone
has
a
day
and
I
want
to
say,
focus
on
the
most
impactful
thing.
I
don't
know
what
the
answer
to
that
question
is.
A
So
it
sounds
like
the
next
step
then
would
be
after
we
give.
You
here
are
the
tests
that
are
most
often
failing,
so
you
can
fix
them
when
we're
getting
to
efficiency.
It
would
be
great
now
more
pipelines
are
green
or
they're
failing
faster,
and
we
have
more
confidence
in
them
now
they're,
just
taking
too
long,
though,
when
they
are
green.
How
do
we
make
them
faster?
To
do
that?
We
might
want
to
say
here
are
the
jobs
that
are
the
slowest
and
within
that?
C
Yeah
yeah,
actually
starting
at
the
job
level,
which
again
I
think
it's
all
there
is-
is
great
that
actually
aligned
to
some
of
the
feedback
I
had
for
the
new
job
dependency
view
where
it
like.
It
shows
the
needs.
Representation
really
well,
and
I
think
the
next
one,
not
not
the
very
next
thing,
but
in
the
future,
showing
how
the
bottleneck
through
your
pipeline
is
and
where,
where
it's
taking
the
longest
was,
where
my
mind
went
to
immediately.
Yeah.
D
So
you
we
could
have
one
suite
that
has
one
test
in
it.
That
takes
a
really
really
long
time
and
it
might
not
ever
bubble
up
because
the
other
sweets
are
just.
C
Bigger. So yeah, I think once we get to the test level, that would definitely help too.
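One way to avoid that blind spot is to rank individual test cases across all suites, rather than ranking suites by total time; a small sketch, with the report data shape assumed:

```python
# Rank individual test cases by duration across all suites, so a single
# slow test in a small suite still surfaces. The data shape is assumed.

def slowest_tests(test_report, limit=3):
    """Return the `limit` slowest test cases across every suite as
    (execution_time, suite_name, case_name) tuples, slowest first."""
    cases = []
    for suite in test_report["test_suites"]:
        for case in suite["test_cases"]:
            cases.append((case["execution_time"], suite["name"], case["name"]))
    cases.sort(reverse=True)
    return cases[:limit]

report = {
    "test_suites": [
        {"name": "big-suite", "test_cases": [
            {"name": f"fast test {i}", "execution_time": 1.0} for i in range(50)
        ]},
        {"name": "tiny-suite", "test_cases": [
            {"name": "one very slow test", "execution_time": 300.0},
        ]},
    ],
}

top = slowest_tests(report)
# The tiny suite's single slow test ranks first even though its suite is small.
```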
On that, Zeph: if we get some spare time, like if it ends earlier, I'd like to hop on a call with you for like 10 minutes, if you have that time available; if not, I'll schedule something. And chat through, like: those jobs that have 50 tests, what stops us from splitting them up into, like, 50 jobs, as an example? It's an extreme example.
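On the splitting question, GitLab CI's parallel keyword is one existing mechanism for fanning a job out; the slicing command below is hypothetical and assumes the test runner can select its share of the tests from the injected index variables:

```yaml
e2e-tests:
  # Runs 10 copies of this job; each copy gets CI_NODE_INDEX (1..10)
  # and CI_NODE_TOTAL (10) as predefined variables.
  parallel: 10
  script:
    # Hypothetical runner flag that picks this node's slice of the tests.
    - ./run-e2e.sh --slice "$CI_NODE_INDEX/$CI_NODE_TOTAL"
```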
A
Cool,
I
don't
see
an
issue.
I
was
just
doing
a
quick
search
that
gets
us
that
data,
but
I
will
get
one
spun
up
for
us
and
we
can
figure
out
what
the
back
end
of
that
might
look
like.
So
at
least
it's
stored
somewhere
and
you
can
get
at
the
data
internally
and
we
can
see
what
is
a
valuable
view
for
us.
That'll
help
inform
the
front
end
work
when
we
get
to
it.
I
think
that
would
be.
A
Yeah, I can't remember if we had a Think Big, like a cross-stage one, about pipeline optimization or not, because it does touch on, you know, for Runner: what kind of runner are you using? Is it the right kind of runner, the right kind of compute underneath the hood? And then within pipelines, like showing you where those bottlenecks are; and for tests, which tests are taking the longest or causing your pipelines to fail most often. So it touches all of the groups within Verify. It might be a really good cross-stage Think Big.
E
Yeah,
I
know
I've
seen
a
bunch
of
chatter
and
pipeline
authoring
that
relates
to
like
a
lot
of
cool
work.
They're
doing
and
nadia's
posting
a
bunch
of
like
mock-ups
of
helping
people
understand
what's
happening.
They
might
have
some
really
good
things
kind
of
cooking
already
about
how
to
like
quickly
get
those
answers
we
should.
We
should
definitely
find
out
if
they
have,
if
they
have
good
ideas,
because
it
sounds
like
they.
C
Do
and
james
go
back
to
your
question
like
that
issue
and
progression
sounded
fine
to
me
from
an
urgency
perspective.
It
seems
the
data
we
need
to
answer.
The
question
that
I
was
asking
is
available.
I
just
didn't
get
to
it
so
trying
to
see
how
we
can
use
the
data
that
is
there.
First
should
probably
be
the
first
thing
for
us,
so
I
wouldn't
prioritize
that
super
high
at
all
until
we
do
some
due
diligence
on
it.
Unless
you
see
lots
of
benefits
to
based
on
your
customer
and
stuff
like
that,.
A
Well,
I
definitely
see
a
potential
for
a
paid
feature
here
where
we
start
to
surface
to
you:
here's
the
jobs
that
fail
most
often
or
the
tests
that
fail
most
often
and
all
of
the
tests
that
are
the
slowest.
So
here's
how
you
spend
less
on
runner
minutes
by
fixing
these
things,
but
it's
added
premium
value.