From YouTube: UX Showcase: Testing 3 Year Vision Exploration
Description
Juan J. Ramirez shares designs for the vision for testing categories.
Basically, the idea of this is to generate certain ideas and start nurturing the roadmap as we move forward. Talking about those goals more precisely, I think the first goal here is definitely to create, explore, and validate the vision that we have been trying to paint and have been discussing with several customers, and to generate a backlog of initiatives and other things that will help us set up the foundation for that future work.
So that's one of the reasons why this type of work is important. The second goal here is to learn more from our customers and their current needs. The idea is to show them some of these designs and try to understand how the existing testing offering compares to what we're showing them, and also to validate certain gaps in how we compete in the market.
That's another thing we're trying to do here: we're looking at other tools, trying to see how they go to market and how their offerings can compete with our product moving forward. All right.
So before I jump into showing the vision, the first thing that's important to explain here is: what is testing? You hear it a lot, and if you work at GitLab or in this industry, you implicitly know what testing is, but it's more complex than it sounds, because there are many moving pieces. One thing is testing in the context of GitLab: one of the goals here is to provide the best testing platform and enable end-to-end testing for anyone who is using GitLab, and that doesn't just come down to being a platform where you can run tests.
It also has a lot to do with test planning, and with thinking about how you're going to create tests for your projects. That's what's called end-to-end testing: everything from planning through to analyzing your tests as they happen. In terms of what we currently have at GitLab, we are very strong in test execution.
I think that's where we provide our best offering right now, which is the idea that any type of testing framework can be applied on top of any CI/CD project on GitLab. Anyone who wants to run a framework like Jest, Karma, or RSpec: we're capable of executing all those types of tests with very low friction, and we're also capable of exposing the resulting data from those tests in a very elegant way in the UI. So we are strong in that area.
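As a rough illustration of that flow, here is a minimal CI sketch for a Node project; the job name, image tag, and use of the `jest-junit` reporter are illustrative choices, while `artifacts:reports:junit` is GitLab's mechanism for ingesting test results into the UI:

```yaml
# .gitlab-ci.yml — run Jest and let GitLab pick up the results
unit-tests:
  image: node:20
  script:
    - npm ci
    # jest-junit writes a JUnit-style XML report that GitLab can parse
    - npx jest --ci --reporters=default --reporters=jest-junit
  artifacts:
    when: always            # keep the report even when tests fail
    reports:
      junit: junit.xml      # surfaces results in the pipeline and MR UI
```

The same pattern applies to Karma or RSpec, since each can emit a JUnit-format report.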
Test planning is an interesting one, because that one actually doesn't fall under the testing team. It's going to be more of an area of focus for quality management, which is something Nick Brown is working on. That's coming soon, but there's already something of a roadmap taking shape, which also includes a little bit of test design, and that one is a little more vague.
I think there are more intersections between what we are doing in testing and what quality management is doing. But the idea with test design is: how can we help our customers plan those tests, imagine those tests, and create this foundation of what needs to be implemented before actually creating and executing the tests? And finally, result analysis. That's an area where we're currently doing our own thing.
It needs to be way better, and that's an area where we want to focus; it's part of the larger mission that we have for the future. So I'm going to be mostly showing things about result analysis, or testing analytics. It's basically this idea of how we can move forward and create more compelling solutions in result analysis as we keep building things in testing over the next three years and beyond. So there are two lofty goals here.
One is that we want to help directors and managers understand all the data that their testing is generating, and create insights they can use to inform their decisions and to improve the cost and performance of their companies, extrapolated from that testing data. And of course, we also want to help developers understand the areas of concern when it comes to testing.
We want them to know what's failing, why it's failing, and even when nothing is failing, how they should improve what they're doing to make their code base more reliable and achieve more as they keep working, keep releasing, and keep adding new features.
So let's jump into the designs and explore some of the things I have been working on. As I said, this is mostly focused on the result analysis part. So the vision here is... sorry, let me just see. Okay, this works better.
The mission here is that, in the future, anyone who is using testing should be able to understand what's going on with their tests and how that is impacting the overall performance of all of their projects. So the vision is that eventually people are going to come to the group level.
If they have an instance, it would be the instance level; it really doesn't matter for the context of what I'm trying to explain here. The point is that you're going to have broad visibility into all your projects and how they are performing, testing-wise. So you will come to a view like this one, where you're going to see a dashboard showing data like group testing coverage, giving you a data point on how your groups are behaving in terms of test coverage.
What test coverage means is: how much of my code is being tested? We also have the intention of showing them how long the testing jobs are taking in total duration, because that has an impact on cost, and it has an impact on the way they create and craft those tests. So we want to show them that type of data.
One parenthesis that I want to make here: you will see that there are segmented controls to switch between different stats, or stat angles. So you could see the p90, the p95, or the average. The reason is that there are certain things where we want to be opinionated and certain things where we don't want to be opinionated.
We believe that the best way to do that is by allowing customers to simply see how a p90 looks against an average. Of course, from a statistical point of view, a p90 or a p95 is more robust than an average, but we really want to give them all that data and not be opinionated about how they interpret it. There are other areas where we do want to be opinionated, and I'm going to talk about that in a minute.
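To illustrate why the choice of stat matters, here is a small sketch (the duration numbers are made up) showing how a single slow outlier, such as a retried flaky job, skews the average while the p90 stays close to a typical run:

```python
import statistics

def percentile(values, p):
    """Nearest-rank percentile: the smallest value such that at
    least p percent of the data is at or below it."""
    ordered = sorted(values)
    rank = -(-len(ordered) * p // 100)  # ceil(len * p / 100)
    return ordered[max(rank, 1) - 1]

# Hypothetical test-job durations in seconds; one outlier drags the mean up.
durations = [42, 44, 45, 45, 46, 47, 48, 50, 52, 600]

avg = statistics.mean(durations)   # 101.9 — skewed by the outlier
p90 = percentile(durations, 90)    # 52   — close to a typical run
p95 = percentile(durations, 95)    # 600  — exposes the tail
```

Showing p90, p95, and the average side by side lets the customer decide which view fits their question, which is exactly the non-opinionated stance described above.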
But the whole idea is to give them as much flexibility in analyzing the data as we can. As you can see, there are many other things. One interesting thing we want to start leveraging more is the frameworks you can use to measure code quality, and to start analyzing things like the churn of files, code complexity, duplication, the maintainability of your code, and how many trivial check-ins are happening.
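Churn, in this context, is simply how often each file is touched across commits. A minimal sketch of the idea, using made-up commit data (in practice the touched-file lists could be parsed from something like `git log --name-only`):

```python
from collections import Counter

def file_churn(commits):
    """Count how many commits touched each file.

    `commits` is a list of commits, each given as the list of
    file paths that commit modified.
    """
    churn = Counter()
    for touched_files in commits:
        churn.update(set(touched_files))  # count a file once per commit
    return churn

# Made-up history: app.py is touched in every commit (high churn),
# which flags it as a candidate hotspot worth testing well.
history = [
    ["app.py", "utils.py"],
    ["app.py"],
    ["app.py", "tests/test_app.py"],
]
```

High-churn files with low coverage are one natural input to the "hotspot" views discussed later.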
We also have accessibility testing in the solutions that we provide, and we want to surface how accessible the projects are. We also have this concept of web performance and testing the latency of endpoints, and we want to surface that as well. There's another concept that I'm exploring here, which is a group projects breakdown: seeing this data per project, seeing how your test coverage trends through time, whether it's improving or decreasing, and seeing those scores across your different projects.
I said something before about being opinionated, and this is the area where I think we want to become a little more opinionated. A score is a very opinionated perspective on data: if you say that something is an A, that grade has to come from somewhere, and we understand that's opinionated.
We also feel that this is the right way to start driving the attention of our customers to the important things. An A may not represent exactly what they want to see in terms of duplication, for instance, but it's going to allow them to start benchmarking against themselves. If you have a lot of things in your code base that are scoring a C, a B, or a D, then you're going to use that as a baseline to start improving.
There are many other things that we want to explore: basically showing graphs that can be plotted against different projects, seeing that performance across multiple different metrics, and having the ability to see it over windows like 90 days or 120 days, whatever makes sense to our customers, and also showing coverage hotspots.
That's another area where we feel we can add a lot of value. Instead of just saying "your tests are not doing well," we tell them which specific files are the ones that are not performing well and which things are not in the best shape at the moment, and we point that out to the customers. Another exploration I did here is showing them how the overall project is behaving: if I have a hundred percent coverage on this file, that's great, but it's tiny.
It's a tiny amount of coverage set against a large part of the project, which doesn't have ideal coverage. The same goes for other areas we're exploring, like code quality: the same thing, using these graphs that you can plot against other projects to see how one project performs against another. This gives directors the ability to manage their groups and their teams more effectively, and to extract some areas of concern more quickly.
The same goes for... oh, here's one small thing that I wanted to mention. You will find that if we start going down this route, we're going to end up with very, very long displays of data, where people will probably end up scrolling for a long time.
So I'm also thinking about quality-of-life improvements: basically having the ability to collapse the sections. If you don't care about seeing the group's 90-day performance, you can collapse that, and that choice should be remembered.
Maybe for your particular persona you only care about code quality hotspots, which is a very software-engineer persona, the one who cares about this. So you can just collapse all the other metrics. So that's just a parenthesis about how we're thinking, moving forward, about keeping this usable and reasonable. The same goes for other things like endpoint performance: showing data like endpoint latency and time to first byte.
All those metrics are very important for developers, but we also keep that flexibility of plotting against different projects and plotting against different timelines.
And finally, for accessibility, the same: surfacing accessibility errors and accessibility warnings, and creating that foundation of how I'm doing in terms of errors and warnings across all my projects, for all these areas, moving forward. The next iteration of this is basically to start exploring things around cost. Cost is an area of concern for our customers: if you have tests that are taking a long time for no reason, that's money you're wasting on pipelines.
If you have flaky tests that fail a lot of the time, we definitely want to tell that to our customers: hey, you might have a test that is costing you a lot of money; you might want to look into that. And finally, another area that we want to explore is a proactive approach to how we fix these issues.
Right now we're just surfacing these things in the analysis, but we want to make sure that we also provide an easy way for them to fix these things. If we have suggestions for them, they should have the ability to apply those fixes on the go, in MRs, or perhaps at a larger level in the project as they're analyzing it, with some insights on how to fix these issues.
All right, that's pretty much it; that's what I had.

Sure, thank you, Juan. Don't you have any...