From YouTube: Verify:Testing Internal Stakeholder call 03.26.2020
Description
Discussion with GitLab internal stakeholders for Testing group features on March 26, 2020.
A
Sorry, I'll let you start first. I need to copy over some links... okay.
B
We made a lot of progress on storing test results in issues that act as test cases, and on having a historical view of all the runs of a test. I did a demo last time, but we can follow up on the code changes.
A
The idea is that we would get back a lot of pipeline minutes when those things fail. One of the anecdotal bits of evidence that we ran into was an internal developer saying, "I waited 45 minutes for the unit tests to run for the code that I changed, so I just sat there, waiting, waiting, waiting, until I actually got confidence that my unit tests ran."
A
So we think if we pull that forward, we can get back a lot of time, especially in a failure scenario: hey, your brand-new unit test failed, and you don't have to wait 40 minutes to find out. You can get that information in maybe 20 minutes. So we can save a bunch of runner time internally, but also a lot of developer time spent sitting around waiting for a build and unit test results. I wanted to point you at that and see: does that actually resonate?
B
Let me go back and reread a little bit. No, I think I'm supportive of doing this. My comments were along the lines of: when we roll this out, I think we may incur additional CI minutes while things are overlapping a bit, until you can confidently say, "okay, we're testing the right things, let's kill the rest of the pipeline." I am looking forward to seeing what the feedback is, and I don't know how we want to roll this out.
B
We expect a slight bump in CI minute costs this month, and then we need to work on the "okay, let's turn it on or turn it off" mechanism, similar to how we did with merge trains on the handbook project, and see where we get fast feedback. Other than that, I think: let's try it at the unit test level first. I'm reluctant to throw the end-to-end tests in the main repo into the fray, because those have their own quarantine process and all that stuff.
C
Nothing that's really shareable yet. I ran into some bug fixes that we need to make the template work, so I've been working on that. The good news is that it's a really easy work in progress to share, because the easiest thing to do is just going to be some CI configuration. So once I start trying it out myself, I can pop over into Quality with whatever I have and say, "hey, here's what I'm thinking, does this make sense in your workflow?"
A
Yeah, I guess the use case we were thinking of was that this would happen probably only in merge requests first, not necessarily in the master pipeline run; we trust that at that point your unit tests are already passing. This is: as an individual developer, you're testing that change early, and you're seeing your new tests, or tests against files you changed or added, being run first, before the rest of the unit tests run. So if you do have a failure... I mean, it's almost...
A
You can think about it like a linting stage, where you're getting that feedback earlier. It's just a limited-test kind of stage. That's the way I've been thinking about it; the terminology is probably all wrong. But the way I was positioning this, or thinking about the use case, was for an individual developer like Drew: I don't have to wait 45 minutes for the big RSpec tests to run, I get a very small stage to run first.
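The "run the tests for your changed files first" flow described above boils down to a small selection step before the full suite. A minimal sketch follows; the path convention (app/*.rb maps to spec/*_spec.rb) is an assumption for illustration, not the actual GitLab implementation, which the call doesn't specify.

```python
def specs_for_changed_files(changed_files):
    """Map changed application files to the spec files that cover them.

    Assumes the common Ruby convention app/foo/bar.rb -> spec/foo/bar_spec.rb;
    files that are themselves specs are kept as-is, everything else is skipped.
    The resulting small list is what the early "limited test" stage would run.
    """
    selected = []
    for path in changed_files:
        if path.startswith("spec/") and path.endswith("_spec.rb"):
            selected.append(path)
        elif path.startswith("app/") and path.endswith(".rb"):
            rel = path[len("app/"):-len(".rb")]
            selected.append(f"spec/{rel}_spec.rb")
    # De-duplicate while preserving order so the fast stage stays small.
    return list(dict.fromkeys(selected))
```

The full suite would still run afterwards; this only decides what the fast-feedback stage executes.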
C
Yeah, I'd considered even trying to roll this out for merge request pipelines that have "WIP" at the beginning of the title, because to me those pipelines are extremely likely to fail. It'd be interesting to see how granular we can get. I would just have to do it experimentally, but Quality probably has a lot of at least anecdotal information on how specific we can get with the kinds of pipelines where we think we'll see gains. I have to agree.
C
Because the MVC we've been talking about is really just CI config, and just fixing the variable that we want to use to be able to do this, we could potentially dogfood it as soon as we want. You don't have to wait for a merge or a review; it's not what you'd think of as a proper feature release. We're actually thinking a blog post would be a great way to release it.
B
Let me move that thing up, and then I'll run through it there. One second and I will go ahead and share my screen... awesome, thank you. Okay, are we on the same page, do you see my doc, the Google Doc? So we have started experimenting with this. This was an idea we had, and it's also in line with what Verify now includes: test results.
B
We are now using GitLab CI to update our test cases. I'm going to take a step back and have you digest what we use as test cases: it's essentially a project with issues, and with labels telling us what the types of tests are. So these are the end-to-end tests, and in the end-to-end layer there's like a three-layer pyramid: there are the API tests, the end-to-end tests through the browser, which is the GUI, and then the visual tests, which are very minimal right now.
B
But
those
are
the
plan
test
layers
in
the
top
of
the
pyramid
and
we
have
I
got
a
label
for
sweet
as
a
reliable
smoked,
sweet,
sorry,
smoked,
sweet
and
reliable
sweet.
These
are
the
tests,
apparently
block
the
release,
process
or
meirin's
team,
and
then
we
have
status
labels.
That
signifies
is
it
failing
on
staging?
Is
it
passing
on
staging
of
failing
in
production,
passing
in
production
and
all
that
jazz
now
go
into
the
issue?
This
is
essentially
the
list
of
test
cases.
B
It's
a
little
bit
rough
right
now,
because
we
haven't
clean
up
our
naming
convention.
A
number
of
these
are
all
of
these
are
now
actually
updated
and
created
dynamically
through
CI.
So
there's
no
human
intervention
in
any
of
these
at
all
it
just
post
results,
I
think
I'll
click
one
one
goodham
example
here,
I
think
we've
clearly
just
reset
this,
but
you
click
on
on
a
test
case.
Business
automatically
posted
from
this
job
and
this
test
failed
and
we
have
everything
in
one
place.
B
We
have
the
herbs
that
the
herb
message
stack
and
currently
I
know.
Okay,
it's
passing
in
master
I,
think
staging
hasn't.
We
don't
probably
run
this
test
in
staging,
but
it
was
it
was
passing
in
staging.
It
would
have
to
staging
passing
label
here
and
then
in
Reverse.
This
is
like
a
really
heavy
one,
so
this
is
an
example
of
like
it's
been
failing
a
lot
so
I.
When
we
look
at
this,
we
know
exactly
this
test.
It's
filling
the
most
matter
and
stating
it's
fairly
Knightley's
as
well.
B
We want to render: what's the state of staging right now, what's the state of production right now, how many tests are automated, how many tests are planned, and have statuses on the health of these test cases. So, by using what we have and putting it together, we get a single pane of glass on the state of the test cases and where they're being run currently as part of the release process. I'll pause there for questions.
B
No, it's the current iteration. This is the single-pane-of-glass view, so it will live forever, unless you want to clear it out, in which case you just close the issue. The logic is that if there's no issue we can find, we just go and create a new one; since we're using issues for it, that's a workaround that allows us to clean up, reset, and enable it again as needed.
B
We plan to integrate the JUnit reporter into here. I believe we're building on top of the test results JSON from the JUnit report right now for this, and if we can have cross links or deep links, we can remove the stack trace here and link to the JUnit record instead. This is the really rough first iteration of it, but yes, we want to deduplicate things: if the information is already there, we're just going to have a cross link to it, so you can go and click through.
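Since this is said to build on the test results JSON derived from the JUnit report, that reduction step might be sketched as follows. A simplified JUnit XML shape is assumed; real reports have more attributes and nesting.

```python
import xml.etree.ElementTree as ET

def junit_to_results(junit_xml):
    """Reduce a JUnit XML report to per-test result records of the kind
    the test-case issues are populated from (name, status, failure message).

    A minimal sketch: only the <testcase> name and an optional <failure>
    child are inspected.
    """
    results = []
    root = ET.fromstring(junit_xml)
    for case in root.iter("testcase"):
        failure = case.find("failure")
        results.append({
            "name": case.get("name"),
            "status": "failed" if failure is not None else "passed",
            "message": failure.get("message") if failure is not None else None,
        })
    return results
```

Each record could then be posted as a note on the matching test-case issue, with a deep link back to the JUnit report replacing the inline stack trace.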
B
One
of
the
reasons
that
I'm
we
lean
on
just
putting
it
here.
Company
here
is
because
sometimes
we
clean
the
history
and
then
some
engineers
sometimes
want
to
go
back
with
three
months,
see
how
come
often
as
it's
been
failing,
yeah
we
at
least
have
something,
and
then
we
turn
this
their
turn
this
off.
A
So you have a cross-section of both the test history and the environment that it was running in. Okay. And you mentioned three months: what are generally the parameters that you're looking back within for those test cases, like what is the most actionable time window that you might have?
B
It's a very heavy, intensive process right now. We have an issue that investigates the failure, and then we have like four main types of failure. There's a stale test, which means people just didn't update the tests but the feature got updated. There's a flaky test, which we are staunchly looking at every day: hey, add fault tolerance to the tests, intelligent waits. There are test environments, where we need help from the developers and also infrastructure: hey, it's probably some rate limiting. And I forget the fourth one. But there's a triage process before things can move to a high-value proposition, right? Okay, so there's some human interaction there which we can't automate right now, but I'm happy to let you all know, so anything here that can be made into a feature, by all means: it's a vast and open field. James?
D
As for the window... I don't know. It's probably more when we're investigating flaky tests than anything else that we're trying to see that kind of history. I don't think, Mek, we've gone back further; I mean, we lose artifacts and things like that in the pipeline over time, but I don't think we need more than a three-month window for looking at flaky tests, unless you want something longer.
B
The longer we go back, the less fidelity we need. So I think if you're looking at six months or longer, you're right, I don't think we need the level of detail of stack traces, but we probably want to know: out of a hundred tests, how many of them were stale, how many of them failed. If it's summarized, that's good enough; the fidelity can be lowered as we travel back in time. Yeah.
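The "lower fidelity as we travel back in time" idea above can be sketched as a summarization step: full records with stack traces are kept for the recent window, and older failures collapse into counts per failure type. The record fields below are assumptions for illustration.

```python
from collections import Counter

def summarize_old_failures(failures, cutoff_day):
    """Keep full failure records (stack traces included) for runs at or
    after `cutoff_day`; collapse anything older into per-type counts.

    `failures` is a list of dicts with 'day', 'type', and 'stack_trace'
    keys; the schema is hypothetical.
    """
    recent = [f for f in failures if f["day"] >= cutoff_day]
    older = Counter(f["type"] for f in failures if f["day"] < cutoff_day)
    return {"recent": recent, "older_counts": dict(older)}
```

A periodic cleanup job could apply this to the per-test history, matching the "hundred tests, how many stale, how many failed" summary described in the call.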
A
The fidelity and the metadata, or extra data: like you said, the stack trace might go away. It's interesting to think about. Then, if you're looking at test history over time and you see it's passing, passing, passing, then failing, you could label that failure with why it failed that time. I hadn't thought about that nuance; I was just thinking, I just need to know if it's passing or failing, and then, based on "it passes three times, then fails, passes three times, then fails," we would log that.
B
I am going to consult my encyclopedia here, which is my email, while I try to pull something up, and feel free to let me know if I'm totally off. But I found this issue; I'll cross-link it here in the documentation as well. There are types of failure, and we used to use... we use these labels now. So there are the failure-scope labels: there's flaky tests, test environments...
A
I was thinking it through... I'm still always just thinking about the developer use case: I have a work-in-progress MR, and I want to see the history of that test. Do I look at history for my current branch? Do I look at history for my target branch? Where am I getting history from? Thinking about it that way, rather than just one history of this test running everywhere, because I wouldn't think that that'd be useful.
D
It
can
be
useful,
I
mean
because
if
you
look
at
some
of
the
reasons
we
have
some
of
the
failures,
flaky
tests
and
broken
tests,
I'll
be
taking
a
little
more
obvious,
broken
to
us.
We
just
have
a
bug
in
the
test
code
itself
and
that
shouldn't
matter
which
environment
it's
run
in
its
gonna
fail.
Flaky
test
means
maybe
we're
not
taking
into
account
some
sort
of
timing
issues
or
something
else
with
them.
D
The
test
kind
of
still
a
bug
in
the
test,
in
my
opinion,
but
I,
think
I,
wouldn't
care
where
it
was
running
I
would
want
to
know
if
it
was
failing
in
those
environments,
yeah
again,
stale
tests
same
thing
and
then
our
other
one
failure,
your
test
environment,
just
that's
the
one
that
pinpoints
hey
our
or
flakiness
here.
Our
failures
are
due
to
some
sort
of
instability
in
the
environment.
B
You mean if we were to apply it to other work streams, like feature branches and things like that? I think it might be too much noise to use all of it, because these labels make sense when you have an on-call process behind them, like what we do now to help the Delivery team: everybody in Quality is on call looking at pipelines every weekday.
B
So
it
makes
sense
to
have
this
if
we
would
do
apply
this
to
like
a
feature
branch
where
only
one
person
or
two
person
I
won't
working
on
it,
I
think
having
something
baked
into
the
CI
feature
where
hey
this,
that
trace
is
in
the
app
it's
probably
a
bug.
They
start
trace
earth
out
in
the
test.
Maybe
it's
like
a
test
so
just
have
that
that
20/80
line.
Where
is
it
a
bug
or
is
it
sorry?
Is
it
a?
Is
it
a
issue
with
the
test
or
is
an
issue
with
the
code
itself?
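That 80/20 heuristic (stack trace pointing into application code suggests a real bug, stack trace pointing into test code suggests a test issue) might be sketched like this; the path prefixes are assumptions for illustration.

```python
def classify_failure(stack_trace_paths):
    """Classify a failure by where the top frames of its stack trace point.

    `stack_trace_paths` is an ordered list of frame locations, innermost
    first. The spec/test vs app/lib prefixes are hypothetical conventions.
    """
    for path in stack_trace_paths:
        if path.startswith(("spec/", "test/")):
            return "likely test issue"
        if path.startswith(("app/", "lib/")):
            return "likely application bug"
    return "unknown"
```

It's deliberately rough: a first hint to the one or two developers on a feature branch, not a replacement for the on-call triage process described earlier.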
A
I was thinking, if you are on a feature branch, you probably want to see the test results of the last ten test runs, or some number of test runs, on your target branch: when it's run there, how is it behaving? This was really helpful, and I'm going to pick on Alex a little bit and see if he has any perspective on this from your side of the house.
B
Before I pass to Alex, before I forget: something that was really useful when I used TeamCity is that the CI reported only the differences in test results from the last run. That was really useful on feature branches, because that's what the developers care about: "I pushed a change, what was different from the last time?" And because it's still fresh in their memory, that's the most actionable information for them. Alex, thank you.
E
So I don't really have a lot of thoughts on this myself. I do very much like the idea of only showing the differences from the last test run, because, especially if it's a really, really long test that runs forever and gives you a bunch of output, sometimes you just don't care about that anymore, especially deprecation notices.
A
Verify:Testing roadmap. I see a thumbs up, thanks, Drew. The vision hasn't changed, and the themes are the same as last time. All of this comes from the CI/CD Direction page: multi-platform support, speedy and reliable pipelines, and doing powerful things easily are the prevailing themes across the stage. So I'm going to jump right into the roadmap, the epics that are in active development. We're still working on the introduction of accessibility testing, moving from planned to minimal. This will be our group's first jump into the new UX scorecard for maturity process.
A
So
we're
going
to
be
going
through
that
after
we
introduced
the
next
feature
and
the
accessibility
M
our
widget
and
actually
I'm
tentatively
cautious,
that
we'll
be
able
to
move
beyond
minimal
into
actually
viable.
With
that
change,
being
able
to
solve
the
job
to
be
done
of
a
developer
wants
to
see
how
the
changes
they've
made
has
impacted
the
accessibility
of
their
of
their
map.
We
think
that
we'll
be
able
to
solve
that
job
two
again
and
that'll
actually
make
that
a
viable
product
category
for
us.
So
that's
pretty
exciting.
Beyond
that.
A
Bringing
unit
testing
up
to
more
of
a
complete
by
finding
failures
faster
is
another
interesting
project
that
we're
working
on
visual
review
tools,
making
those
usability
or,
if
you
use
faster
and
more
effective.
Some
of
the
issues
in
there
include
things
like
being
able
to
add
a
screenshot
on
to
your
mr
comment,
so
that,
if
you
are
leaving
comments
on
the
Mr
through
the
review
app
and
the
visual
review
tools,
you're
not
also
having
to
take
your
own
screenshot
go
back
in
and
find
your
comment
edit.
It
and
add
it.
A
It's
just
one:
nice
simple
workflow
and
we're
talking
with
the
design
team
or
the
team
on
the
design
category
about
how
do
we
make
the
design,
tab
and
interaction
with
designs?
That
experience
looked
a
lot
more
like
or
be
similar
to
the
visual
review
tool
so
that
users
who
are
using
both
don't
have
a
disjointed
experience.
You're
able
to
have
the
same
type
of
experience,
leaving
comments
on
both?
How
do
we
then
start
to
incorporate
some
of
those
design
artifacts
over
individual
reviews?
We
think
there's
some
opportunity
there
so
we're
in
some
initial
discussions
about.
A
That's an epic that I'm grooming now: taking some of the old issues out, putting in some new issues based on new customer feedback that we're getting around code coverage, where there are problems that customers run into with the lack of feature set in GitLab today, or that they're just trying to solve with code coverage data that you present to developers, or to quality folks, as a change is making its way through the pipeline. And I didn't get a chance to update the slide.
A
But
a
new
epic
that
I
created
yesterday
is
around
how
we're
going
to
enhance
our
new
j-unit
report.
As
part
of
that
there's
a
sub
epic
in
there
of
how
do
we
improve
performance
for
a
gen
unit,
parsing,
which
the
team
is
actively
working
on
right
now,
trying
to
make
that
feature
a
lot
more
performant.
So
when
you're
loading,
sixteen
thousand
or
a
hundred
and
sixty
thousand
tests
or
whatever
it
is,
that
run
in
your
gate,
lab
dot,
org
project
that
page
responds
a
lot
quicker
than
thirty
seconds
after
you
get
to
it.
A
We mentioned it last month, and then in 12.9 we had the full code quality report go out, which is a fun enhancement: you're not just seeing the changes in code quality in the MR widget, you're now able to see the full report for your branch in the pipeline view. And then, for our next six releases, you see a change here from the last time we looked at this slide: usability testing, accessibility testing, we've already talked through most of those, plus the iteration on the screenshot. Those categories have stayed the same.
A
Quality
is
so
the
same
category.
We've
done
further
combination,
unit,
testing
and
system
testing
into
one
category
and
then
called
out
the
that
also
includes
coverage.
There's
a
lot
of
issues
that
straddled
between
code
quality
and
unit
testing
and
that
were
coverage
issues
and
things
would
fall
one
place
or
the
other
we're
starting
to
move
all
of
those
into
the
code,
testing
and
coverage
category,
and
so
just
calling
out
more
specifically
that
test
coverage
lives
in
there
and
features
won't
live
in
there.
A
So
questions
on
what
I've
already
covered
up
recent
releases
or
anything
here
that
you
want
to
dig
into
look
at
the
issue
itself,
alright,
so
digging
a
little
bit
further
in
the
accessibility
testing.
We
validated
a
number
of
those
solutions,
we're
still
working
on.
We
have
a
good
set
of
work
and,
like
I,
said
movie,
looking
to
move
that
category
maturity
wise
beyond
planned
into
either
minimal
or
complete
with
our
April
release.
So
we're
going
through
that
UX
scorecard
process
for
the
first
time,
which
is
fun
in
code
quality,
there's
iterations.
A
We think the use case there is someone like a quality manager or a team lead who is going to be looking at "how do we improve quality in this project?" Let me go through and filter and find maybe some of our top files that have quality issues, or our top quality problems, and focus in on a set of work that circles around that, so that we can find some low-hanging fruit to go in and make a big impact with not very much effort, where right now it's just a real scattershot approach.
A
Then, Code Testing and Coverage: like I said, we have some performance issues in the current method of parsing out that JUnit data on the pipeline page; there's an epic that is working on that, and after that, introducing, as we've talked about, the graphing of the code coverage. I say here we'll be stepping back from unit tests for a few milestones, which isn't 100% correct: we do have that follow-on epic around the JUnit report to make it again a little bit more functional, the same sort of things, just making the report a little more interactive.
A
We
think
that's
a
great
value
add
on
top
of
just
displaying
the
report
being
able
to
better
navigate
through
the
tests
from
the
jobs
better
C
than
history
as
you're.
Looking
at
that
jumping
into
history,
we
think
is
a
great
value
and
seeing
that
demo
today,
mech
of
what
you
all
have
put
together,
but
for
some
ideas
for
me,
or
maybe
we
can
just
incorporate
that
in
there,
so
that
you
can
see
history
of
a
test
run
over
time.
A
From that view on the pipeline page, that could be pretty cool. Performance is still on hold, and for usability, the biggest hurdle we have is just getting customers using the tool. We're fixing bugs, but just generally, adoption is lower than we'd like, so we hope that adding those screenshot capabilities into the tool, and doing some blog posts and some general advocacy for this category, is going to help broaden awareness and usage of this tool that we've built.
A
I kind of conflated it with accessibility: you can use your review apps and scan against those. That's really a great use case of "what did I just deploy and how has it changed?" But you could also, if you have your own staging environments set up and you're deploying there, run the scan there as well.
B
One suggestion: we are having these discussions monthly, which is great. We could aim to make it a bit more lightweight, and maybe just note in the doc what was different from the last time in the deck; then I can do some homework, read it beforehand, and give you comments. That's just a suggestion, but I'm happy to see where the team is; it's staffing up, and we're making progress.
A
Will do. For next time I'll call out where there are changes in the deck before the meeting, so you can do some homework there. Great, thank you. Anything else while we have folks here, questions for Testing, or other problems that we've not talked about that have come up since we met last?