From YouTube: Monthly Testing Internal Customer Call - June 2020
Description
A review of the epics in active development, a check-in on the recent release of the Identify Failures Fast template, and a discussion of which testing features we should next focus on increasing internal use of through dogfooding.
Links:
- Identify Failures Fast Docs: https://docs.gitlab.com/ee/user/project/merge_requests/fail_fast_testing.html
- Identify Failures Fast Feedback Item: https://gitlab.com/gitlab-org/gitlab/-/issues/222706
- Accessibility testing documentation: https://docs.gitlab.com/ee/user/project/merge_requests/accessibility_testing.html#accessibility-testing
A: This is the June internal customer meeting for the Verify:Testing group. I'm James Heimbuck, the product manager for Testing, and it looks like I'd be the first agenda item. This is more read-me or read-only, but I'm going to vocalize a bit of it. There's some of it in the roadmap deck; instead of going through all of the deck, we're going to continue to just call things out.
A: The team is really focused on delivering an improved experience in the JUnit report, and the slides tie that together. Already out: code coverage data for groups, which provides historic coverage data for all of your projects within a group. That builds on the code coverage graph that just released in 13.1, and then there are further enhancements around the usability of some of the data that comes through in that JUnit report. Those are issues that were reported by both external and internal customers.
A: We're really focused on dogfooding that report and making it usable for all of our internal folks at GitLab, so we're focused on some things that were reported out of the quality team, some of whom are represented here today. So that is what's going on actively today in the group, and then our next efforts are on code quality, again focused on dogfooding. Looking back at the open dogfooding issue for the code quality feature, I pulled together some of the issues that were open there into an epic.
A: Let me know otherwise, but what I wanted to jump in and discuss synchronously today — because we have Ricky and Drew here from the testing team, and probably some other folks I'm just not seeing — is what kind of synchronous feedback, besides what we already see in the feedback issue around the Identify Failures Fast template, the team wants to provide. What's the experience been so far? And I'd like to talk through what our next steps are in that effort of matching, or getting to parity with, the existing internal tooling.
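For reference, per the Identify Failures Fast docs linked above, enabling the template is a one-line include in `.gitlab-ci.yml` (a minimal sketch, assuming a Ruby project using RSpec, which is what the template targets):

```yaml
# Runs a quick RSpec pass against the specs matching the files
# changed in the merge request, before the rest of the pipeline.
include:
  - template: Verify/FailFast.gitlab-ci.yml
```

Running this alongside the full pipeline is what makes it possible to measure how often the fail-fast job catches the failures that would otherwise surface later.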
B: I'm looking towards the team for input on this, to see if there's something that I'm overlooking, but our intent is to use it and at least be able to measure the thing that I think Ricky, Drew, James and I talked about, which is: how often do we catch the failures that happen in the other pipelines? So at least run it alongside and start to have an idea of where we can look to refine it. The other thing that I think will help unlock a lot more value is being able to feed it maps — like a mapping of data. Replicating what we have for FOSS impact should be doable with that, but engineering productivity is also looking towards using either coverage mapping or dynamic test mapping to try to refine the set of unit and integration specs to run in a pipeline. That's more of a Q3 thing, and being able to use the gem, or at least that pattern, would be phenomenal.
C: I guess my initial thought there — because most of what's in the gem so far heavily targets GitLab being able to use this internally — is that for things like overriding configuration, we have an open issue about defining a mapping in the config file, some kind of route-map-style simple YAML. I'm wondering if that seems to everyone else like the most direct way to solve that problem.
C: More generally, I didn't want to build the EE things directly into the gem, because ideally we'll be able to pitch this to CI customers: okay, you could run CI this way. So does anyone have an idea of something more direct to solve that, but also generic enough to sell as a feature? Or is the mapping issue probably the number one way to go for that?
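The route-map-style YAML mapping discussed here was still an open issue at the time of this call, so there was no final format; a hypothetical shape for such a config — regex source patterns paired with test-path patterns, with the paths and syntax purely illustrative — might look like:

```yaml
# Illustrative mapping config for the test file finder gem:
# each entry pairs a source-path regex with a test-path pattern,
# so changed EE files can be matched to their EE specs without
# hard-coding GitLab's layout into the gem itself.
mapping:
  - source: 'ee/app/(.+)\.rb'
    test: 'ee/spec/%s_spec.rb'
  - source: 'app/(.+)\.rb'
    test: 'spec/%s_spec.rb'
```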
B: So I'm not saying that there's a problem with the gem today, because we haven't been able to try it — that's thing number one. But I would call out that if we're looking to replicate what we have today: right now we have a way to run FOSS tests based on the changes in the MR, and we're not able to configure the gem to do the same thing. So that's something to consider, and I think providing that capability in a general sense will help unlock a lot more value for the gem.
B: That's what's used by that CI template, I think. From my perspective, I do want to step back: we need to use the gem to be able to figure out the thing that's going to have the best value, but at the moment a configurable mapping would be the thing that allows us to leverage it more than we're able to right now.
C: So to be clear, there's two different things, right? The CI config template isn't usable because of not being able to partially override selected sections of it — that's one problem. And then the second problem is not being able to provide a config map for the test file finder so that you can allow it to do the EE and FOSS test runs right now.
B: Yes, I agree, and I think both have workarounds. We don't need the mapping to be able to see if the gem is valuable, and we don't need the template to change to be able to test out what the template does — we can just take the content. It's just a matter of prioritizing the work on our side and getting it back in. It seems to me — and I don't think it's on the testing group, to be clear.
C: It covers it. So right now there's an open issue that it doesn't cover request specs in EE, because there's no app/requests directory, right? It's that simple of a heuristic: is there something that exactly corresponds? Okay, we're going to run that. And the configuration — passing in a config to be able to make an educated guess at those paths — I think is the simplest option for extensibility, to specify other kinds of tests and other files.
C: You can extend it to whatever extent you want, but also, if something is a very common configuration pattern across a lot of people's config files, that's something I would think we should roll directly into the gem and handle automatically. So people can do whatever they want, but we should try to make them do as little as possible. That's sort of my strategy for building the automatic detection: anything we see that's really common or standard.
B: In the spirit of setting a due date: it might be a few weeks until we're able to bring that in, so I don't know if we'll have feedback for you next week, but let's re-look at this in two weeks, if that's okay — unless you need it sooner than that, in which case I can look to re-prioritize some other items, but I think the team is pretty full.
A: To the quality folks — our internal team, or our internal customers — what other blockers are you having, or what other features should we look to start dogfooding next? This would be a great opportunity to talk about what's preventing us from dogfooding things like code quality, or accessibility, web performance, usability — are there more of those tools, things like that, in your day-to-day work?
F: A couple of things, like the visual review tool: we don't have anything within quality per se — nobody is sitting down and running these tests manually to be able to go in and look and compare. Possibly there's a way we could work that into our debug process, but then we're just talking about really using the review app to try to look at something specifically.
F: Yeah, so I don't think the visual review tools are going to be something that we have, and from a team perspective we haven't really addressed the issue of accessibility yet, so I think that's going to be something on our roadmap eventually. It has come up before in the past, but it's nothing current that we have right now — unless I'm mistaken; correct me, Joanna — but I don't think that's anything we're looking at short-term for us to try to work with now. Can we integrate that with some of the CI we have?
B: Yeah, so as F kind of alluded to, right now review apps aren't being used at all on the GitLab project because of the security issue, which is what engineering productivity is looking to remediate. The next thing that would hold us back from using them again — it's more of an internal thing — is that there's no seed data, so I can spin up a GitLab instance, but there's not a project to go look at.
B: It would be very limited in the scope of how you could apply those testing tools with the current implementation if it was going live. We have an issue, which has not been prioritized, about being able to provide more seed data — or at least the capability to load an environment with data — and after that, I think we'd want to look at how we can leverage some of these testing components.
A: The accessibility one is a tougher one for me, because it came up in a UX group conversation today — they were asking about readability and accessibility. So potentially the docs project, or a similar one, is somewhere we could dogfood this, especially if we added a readability component to the accessibility project, to make sure that those docs or handbook pages are readable, maybe, in human language.
B: Sorry guys — yeah, my apologies. The other consideration is maybe even nightly pipelines, or something like that, on just GitLab. I don't know what we would do with the information, but we have nightly pipelines around the GitLab project where maybe we can configure it to run against a few of the GitLab.com pages. Again, I don't know how actionable it is or what we would do with it.
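One way to wire that up — a sketch, assuming the accessibility template is already included and keeps its `a11y` job name — would be to restrict the scan to scheduled pipelines, which is what the existing nightlies run as:

```yaml
# Run the accessibility scan only in scheduled pipelines,
# e.g. the existing nightly schedule on the GitLab project.
a11y:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```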
D: From the UX perspective, if we have the data somewhere and we can track the data consistently through time, I think that would be very valuable. It would give us something to look at and say, hey — it's almost like code quality a little bit, or like coverage, for instance.
A: I was going to say, to make this really usable in a pipeline, having the changed-pages feature — which is an open issue — would be really nice, so that you can just scan those. Barring that, just identifying some key pages that we want to say are accessible — maybe some static pages in the signup flow, or just the home page — would be a good first step. And then we do have an open issue for the next iteration, or a future iteration, on accessibility around tracking some sort of score over time.
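Per the accessibility testing docs linked above, scanning a fixed set of key pages is roughly a matter of including the template and setting `a11y_urls` (the URLs here are just examples of the kind of pages mentioned, not a recommended list):

```yaml
variables:
  # Space-separated list of pages to scan; the home page and
  # sign-in page are example "key pages" as discussed.
  a11y_urls: "https://gitlab.com https://gitlab.com/users/sign_in"

include:
  - template: "Verify/Accessibility.gitlab-ci.yml"
```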
A: Right now it spits out an HTML report that identifies individual issues, but we don't necessarily have that kind of letter grade or GPA where you might say, hey, our accessibility is trending up or down — it's just, here's the issues and the URLs that we scanned for you. So those are a couple of things that come to mind. I don't think they prevent us from using it today, at least to start, but they would definitely make it a lot nicer and more actionable internally.
A: So I'll take as a to-do creating the dogfooding issue for accessibility. There is one open for code quality already; I don't know of one open for web performance yet. I believe that we are using that tool internally, and we've made some changes to web performance since I've started, in response to internal customers' open issues.
C: So yeah, if there's no issue for sitemaps not being supported in accessibility, we should open that, because then we could do a lot of scanning really quickly. My concern with that would be that it's a lot of scanning really quickly, and a lot of the handbook pages are very similar, so it might just be worth scanning a few different pages — maybe one page that adheres to each type of template in the handbook — as opposed to scanning all, like, 10,000 pages.
C: So there's an add-on feature that I'd really like to have: I think there's a more intelligent way that we could group some of the violations to help sort of convey severity. Right now, if somebody runs it on a merge request, they can see all these new accessibility issues, but if the thing at the top of the list is "you introduced this new accessibility issue and it occurs on 10,000 pages on your website" — okay, that's pretty severe. So I'd sort of like to start with a sitemap full-site scan and then think about, okay, how can we make—
A: At least the first time, right — once we have changed pages, they're definitely really fast. All right, well, I will get my dogfooding issue open, and I'll poke around with some folks and see what we can do about getting it into the pipeline for the handbook. I think it'd be interesting to at least run it once and see what we find, and walking through the report with some folks would be an interesting exercise — anything to say, hey, how is this actionable, how is this not actionable?
B: And I was just going to add: I think the simplest iteration we can do for the GitLab project is check the sign-in page, check a repo page that's publicly accessible, check an issue and an MR — some of the heavily used features. Regarding accessibility testing, I'm not sure what you've heard from customers, but the company I came from — now it's been a year, wow, it feels like it hasn't been that long — was a large insurance carrier, and they were going through a large effort to meet accessibility requirements for all of their public-facing site, like millions and millions of dollars. So having a tool for this I imagine would be a really good selling point for the group, and being able to use it and provide that feedback will only make it better. Yeah.