From YouTube: CI/CD UX Team Design Review | 17 December 2019
Description
Design Review session of CI/CD UX Team.
- Rayana Verissimo | 00:26
Merge Request Pipelines - User Interviews Insights
- Juan J. Ramirez | 24:00
Accessibility Testing - Viewing Reports
B
I'm going to briefly share the findings from the merge request pipelines usability testing. I already gave the same presentation yesterday to the CD team, so I'm just going to touch on some of the highlights to leave more time for Juan's topic. For those who are not familiar with this research: we first wanted to identify why people didn't know what "detached" meant in their pipelines journey.
B
You're probably familiar with this; you were the one that worked on it initially. We got a lot of, let's say, constructive feedback internally that people had no clue what "detached" meant, and then once we released merge trains the confusion grew, because now you have pipelines for merge requests and pipelines for merged results. People had no idea how they worked, when they were activated, or what that meant.
B
So we interviewed a total of six participants from all around the globe, which was super interesting: two internal, so GitLab team members, and four external GitLab users, all engineers, front-end and back-end. Most of them were not familiar with the concept of a detached pipeline. That was super interesting, because I showed them the UI first, you know, the pipelines view with the detached label, and asked: what do you understand by "detached"?
B
A few guessed it was about detached git HEADs, which is not true; "detached" is a git term, but it doesn't have anything to do with how we use it in our product. Two participants were somewhat familiar with it, so they were like "yeah, I kind of know what it is", but their answers were not super sure, and in general people didn't know why we chose the word "detached". Even once they actually understood what a pipeline from a merge request means and how it works, they said: okay, I still don't know why the word "detached" was used. So that really highlights the problem, and now we have to look into a solution and find a way to rename this type of pipeline so that people can easily understand it in the merge request view. If you're not familiar: on gitlab.com, when you go to CI/CD > Pipelines, you always see the detached ones there. So this is what it means.
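The behavior being discussed can be sketched in CI configuration. As a hedged illustration (job name and script are placeholders, using the `only: merge_requests` syntax available at the time), a job opts into merge request pipelines like this, and the resulting pipeline is the one the UI labels "detached" because it runs on the merge request ref rather than on the branch:

```yaml
# Illustrative .gitlab-ci.yml fragment; job name and script are placeholders.
test:
  stage: test
  script:
    - echo "run the test suite here"
  only:
    - merge_requests   # makes this job run in a merge request ("detached") pipeline
```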
B
Ci
file
and
also
another
user
mentioned
that
you
organize
that
the
CI
file
would,
you
know,
validate
this
if
my
pipelines
footmen,
if
my
configuration
is
correct,
yes
or
no,
so
that
was
super
interesting
because
well,
maybe
I'm
not
gonna,
find
this
in
the
CI
file,
because
I
imagine
it's
gigantic
for
us
yeah!
It's
exactly
this,
so
they
wanted
to
validate
because
also
under
merging
quest
view,
the
widgets,
if
it
is
not
correctly
configured,
there
is
an
error
and
then
the
interface
also
doesn't
tell
people
where
to
go,
to
fix
this
and,
of
course
documentation.
B
It was really interesting to see, let me open this one, how people were struggling to find the correct information. When I asked users "how would you go about finding information about this topic?" or "where would you go to learn how to set up pipelines for merge requests?", they went to Google, and Google gave them three different pages, each with a piece of information about configuration, and the meaning of the detached pipeline was covered in only one line. So people were really like: I have no idea what this means, I have no idea how to configure it, I had to go to so many different places to configure this thing. And there are no clues in the user interface that would help guide them through this process.
B
So even though they are familiar with the product, they really didn't have much of an idea of how this could be valuable to their workflows. That really opens a good opportunity for us to improve the docs and see how we can make the workflow a bit clearer to users from the documentation point of view. And of course there was some general feedback that the sorting on the pipelines page can be improved. So if I go back here, oops, where was it, if I go back to the pipelines page:
B
Some
people
seem
like
yeah
I,
don't
care
about
all
it
attached,
I
only
care
about
my
latest.
So,
okay,
you
know
really
sort
it's
not
filter
and
that
if
some
pipelines
are
related-
or
you
know
from
burst
frames,
for
example
that
this
could
be
improved,
so
definitely
have
to
look
into
that
as
well
and
open
a
couple
of
issues
yeah.
So
this
was
very
very
interesting.
People
were
super
confused
and
I.
Think
there
was
a
you
know.
B
It was nice that we started this investigation, because in the beginning we were really going for the quick solution: let's just improve the tooltip on the detached pipeline. Now we know it's not just about the label. It's not about people not understanding what the label says; they don't understand what it means, and we couldn't really improve that just by showing them how they can use it in their workflow. So that's a lot of opportunity for us in the upcoming milestones, and I think that's about it.
C
Something you mentioned in there, and it seems to come up in like every conversation I have these days, is that idea that there are things you configure in the YAML file, and it seems like half of the world, probably 75% of the world, wants to do it there, but then there's the other 25% that really wants to do it in the UI. And we have this wall between these two that we've yet to chip away at, or even consider breaking down. So I'm just curious:
C
Like
for
it,
we
now,
it
seems
like
we
either
have
to
like
put
it
in
the
UI
or
put
it
make
it
available
as
an
option
in
the
yellow
file
and
there's
really
no
way
to
do
both
right
now.
You
either
have
to
do
it
one
way
or
the
other,
because
we're
not
we're
not
yet
editing
the
yellow
file
for
them
or
providing
templates
to
edit
it
for
them.
So
is
anybody
else
bumping
up
against
that
where
you
have
to
make
that
decision,
where
some
users
want
it?
D
I would say that in some recent research I did, there was one user who initially used GitLab CI but then moved to GitHub, because they just couldn't do the things they wanted to do in GitLab CI: either it was too complicated, or GitHub made it much easier for them to do the same thing. I know there has been an issue open for the longest time, and I believe it's also a moonshot.
C
Yeah, and that's kind of what's driving it. That issue has kind of popped back up again: starting down that path of either chipping down that wall or, probably more likely it sounds like, having some sort of a templating system that at least lets you pick things and add them to the CI YAML file. I don't know.
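The templating idea already has a partial precedent in the `include` keyword, which pulls predefined job definitions into a project's configuration. A minimal sketch (the template name is illustrative of the mechanism, not a specific recommendation):

```yaml
# A template picker in the UI could, in principle, write a line like this
# into .gitlab-ci.yml instead of asking the user to author the jobs by hand.
include:
  - template: Code-Quality.gitlab-ci.yml
```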
E
Right, and you'll see what I'm talking about, specifically with accessibility testing: it may not be super straightforward and clear for customers that they need a certain artifact for things to succeed, and there's nothing right now in GitLab that tells you, okay, why you cannot add this type of job if you don't have this other type of job before it. So basically it's up to customers to go and figure it out: the pipeline fails, and at a certain point they realize, oh wait,
E
this needs this artifact to work. So I mean, I think the YAML file is very usable, but it's just YAML, right? You can do whatever you want there, and that doesn't mean that the YAML file you're writing is going to create a successful pipeline. So I think if we can increase the chances of writing good, successful YAML files from the beginning, that's a big win for us.
E
You know, I think the question would be: where are people writing their GitLab CI YAML files? Are they doing it inside GitLab? Are they doing it in Visual Studio and then pasting it in? I think that changes things a little bit, because it's not the same when you have all the helpers of the GUI; there you can do a lot of things, but if you are in Visual Studio, I mean, what can we do?
B
And that was one of the points, I remember now: one of the participants said that the learning curve for his team on writing YAML was very steep. He said: no one on my team was familiar with YAML, so we got errors all the time and then we had no idea why things were not working, because you
B
would expect that, okay, I'd like it to lint what I'm writing while I'm writing it. That would be super awesome. And also, about focusing on different personas, I think there's also the question: we want to automate everything, right, but do we need to automate the CI file? For example, I had a conversation with Jackie, our PM, yesterday, and she said that some customers in the US, and even in England, in the regulated industries (is that correct? so government agencies, banks, etc.) might not want to automate.
B
They really have to go through more manual processes. So how can we ensure that it works both ways: that for people who have to write their CI by hand we provide some checking and some more guidance in the UI, but also that the automation works correctly for people who just want the configuration files to magically appear? That's a challenge, especially as we keep adding more complexity, and some of this configuration is done in the setup, right?
B
Any other comments? I have, well, not really a comment, but I would like to ask: what do you folks think is the best way to showcase these findings? Right now I'm just talking about it and showing some pages. Do you think it's clearer to follow when there's, for example, a slide deck, when we're showing something more visual? I'm asking this also for when we have to do these presentations to the engineering team or to the engineering managers. I'm really curious.
F
I was thinking about that. I know our culture is not a presentation culture, like "make a deck, make pretty pictures to show people". I would assume, and I'm not the user here because I'm not a dev, but I would assume that they would want the story. Tell me why this is important; tell me why I should make time to figure out how to solve this issue.
F
If you need to show me a picture, maybe a video clip of somebody telling you a great quote, that YAML is hard to learn and we spent weeks trying to figure it out, that might be good. Knit it together in a way that takes them from "okay, we did this because we wanted to find this out" to:
F
let me tell you a story about what we found out. And do it in about ten minutes, not any longer than that, because of people's attention spans. Then have some calls to action for them, like "we learned this, what do you think?" or "what would your solution be? How would you want to solve this?"
F
How would you, the developer, think about solving it? So you have a conversation with them as well and get them drawn into the presentation. You set the stage, give them the context of what you learned, and then have that conversation: okay, now what are we going to do with that? I don't know if it needs to be a presentation, but that's up to you; you folks know them much better than I do. But visuals are always good. Humans are very visual creatures.
F
Video is good, we're also auditory creatures, and having somebody tell you, having that user say the quote and having a video of them doing it, is a lot more powerful than you telling them what you've learned. They can actually see: oh, that was shitty, that was not good. They can see that.
F
So that would be my recommendation. And then I would also see if you can either get time in a meeting that they already have or, better, schedule a separate chat with them about this, because the meetings they already have come with an agenda and are usually packed, and I don't know if you'd get through it at all. So ideally you'd take some dedicated time.
F
And then bring in the design and dev managers ahead of time. Tell them what you want to do, give them a little preview of your presentation, so they're not hit with it cold either. That way they can start thinking about it, and maybe they'll have some questions for their team, to get them also engaged in the presentation.
B
That's awesome. I would really love to experiment with more, you know, dynamic presentations, because that's a little bit how I felt yesterday: okay, cool, that was interesting, but not everyone was there on the call, and it's difficult to find time slots where everyone is available other than, you know, the meetings we already have with the team. But I'd love to.
C
I think linking back to the insights repository is a big deal, because I think a lot of people don't even know it exists. The more we can say "here's the issue to fix this, and here's where it came from, it came out of this one", the more it's going to help people close that loop and understand that this is the magical place where you can go to learn stuff, which is kind of the point of it. So I think we should do more of that.
A
In my experience at my past company, what we did for task-oriented research was post the tasks or the questions we asked, and then place some kind of bold findings and something more visual; if it was, say, a percentage or a correlation, we would use a pie chart. I know that sounds awful, but it attracts a lot of attention. People really like to read
A
Statistics
I
feel
like
that
would
work,
okay,
especially
where
there
is
a
comparison
like,
for
example,
failure
versus
success
or
I,
know,
automation
versus
manual,
or
something
like
that.
Those
kind
of
look
big,
bold
metrics
that
are
easier
to
see
and
judge,
are
always
useful
and
I
agree
with
Mike
it's
nice
to
lean
back
details
to
be
able
to
find
out
a
bit
more
yeah.
B
I think it would also paint a picture of what we're seeing once we have improvements implemented in the product. If we then ask the same questions to users, we can see how they finish the test or complete the task, and then we can show those two clips to people: this was the first experience, and this is the second one. Sometimes we don't really have, I don't know, something we can already measure, but here we can measure.
E
We'd like an accessibility check as part of the continuous integration process. We already did some problem validation on this, which helped us, and there were a lot of interesting findings and things that we discovered there, so we believe there's a lot of value here. But it's still a tricky thing: it's the way we want to integrate it.
E
It's
like
you
get
up
snake
strawberry
MVC,
but
the
me2
NBC
has
its
own
I
will
say,
caveat,
say
oscillator,
but
like
yeah,
it
has
like
some
things
that
we
need
to
think
about.
So
I'll
show
that
in
a
bit
but
anyways,
that's
basically
what
it
is
so,
basically
the
days
that
you
have
like
your
typical
CI,
CD
pipelining
in
theory.
In
theory,
this
should
run
on
the
tests
right,
like
it's
part
of
our
test.
E
suite, and you should put it inside your test stage. But as you're going to see in a bit, it's not that straightforward. So there are several existing frameworks, but there are two very big ones: one is Pa11y, and the other one is Lighthouse, which is the accessibility framework built into the Chrome browser.
E
What these two frameworks allow customers to do is load a website, or a specific web page, and then basically run a whole accessibility scan on that website, and highlight the things that are wrong, the things that are not following the standard in terms of accessibility. So what these frameworks do
E
is run tests against a web app; if you have a large web app, you need to run them against multiple web pages, and then they basically compile that into a report for you. One of the things you're going to see, which is kind of weird, is that there are caveats about this; it's not really that straightforward. One big caveat is that for you to actually do this type of testing, you need to run the framework on top of an existing, running app.
E
So
the
only
way
that-
and
that
means
that
these
this
approach
for
context
about
what
this
means
is
that
this
needs
to
review
up.
You
know,
so
it's
not
something
that
you
can
run
on
existing
code.
Perhaps
there
are
some
frameworks
that
actually
do
accessibility
test
him
by
looking
at
the
code,
but
it's
not
really
what's
like
these
things
are
not
designed
to
do
that.
They
are
designed
to
look
at
the
final
rendered
app
and
then
determine
the
issues
on
the
rendered
app.
E
So if you think about it in the context of GitLab, this is something that needs a review app, or at the very least we need a way for the customer to tell us where to look, which URL to check and run the accessibility scan against. That caveat basically makes this not a test-stage job:
E
it makes it a deploy job, because it really needs to happen after all the other stuff has happened. That's what I was trying to say before about the YAML and the configuration. A customer who is super new to this might think "oh, this is a testing framework, this goes in the test stage", and doing that will ultimately fail the pipeline, because there's no artifact to consume: the artifact to consume is an actual URL of an existing website.
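The deploy-stage placement can be sketched roughly as follows. This is an assumption-laden illustration, not the actual feature: the image, the `REVIEW_APP_URL` variable, and the report path are all invented for the example.

```yaml
# Hypothetical accessibility job: it belongs in the deploy stage because
# Pa11y scans a live URL (the review app), not the source code.
a11y:
  stage: deploy
  image: node:12                       # assumed image with npm available
  variables:
    REVIEW_APP_URL: "https://review.example.com"   # assumed review app URL
  script:
    - npm install -g pa11y
    - pa11y "$REVIEW_APP_URL" --reporter json > a11y-report.json || true
  artifacts:
    paths:
      - a11y-report.json               # the report artifact discussed above
```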
E
So that really creates confusion. The other caveat is that failure in accessibility is a very gray concept. We all agree that accessibility is a very important theme, that we should make our websites screen-reader friendly, that we should make sure that people with different disabilities can access the content that we have. That's true for GitLab, but also for anyone who's building an app. But determining the threshold to say "this is good or bad",
E
"this is going to fail a pipeline or not", is very subjective, and every app, every company, every customer has different needs when it comes to this. There are going to be people with very stringent needs and demands in this area, and there are people who just want to meet the bare minimum, because they know that they only need to meet the bare minimum. If you are designing an app for blind people, you probably want to be at a hundred percent of whatever subjective threshold this defines for you.
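One way the threshold question could be handled is to push it down into each customer's configuration. Pa11y's CLI, for instance, has a `--threshold` option that tolerates up to a given number of errors before exiting non-zero; a hedged sketch (the job name and URL variable are assumptions):

```yaml
# Sketch: each project picks its own pass/fail bar. A strict project sets
# the threshold to 0; a lenient one allows some errors before failing.
a11y:
  stage: deploy
  script:
    - pa11y --threshold 10 "$REVIEW_APP_URL"   # REVIEW_APP_URL is assumed
```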
E
If you are designing a random website, let's say a news website, you might just want to make sure the basics are accessible in terms of a screen reader. So, all right, that's kind of a brief introduction; I'm just going to show some of the things that I have been working on. Let me stop the...
E
So let's start with the baseline: we assume that Pa11y is integrated into your pipeline, and we assume that this is something that's already happening in an existing pipeline. So now the question is, now that you've set that up in a deploy block, and let's forget for a minute about the fact that this is something that needs a review app,
E
let's assume the people setting this up already know that from reading the documentation or something. Now the challenge is: how do we show this to the customer, and what would the first MVC for this be? So this is an MR, and there are different ways that we can show this. One of the things I was exploring is the idea of doing basically the same thing that we do with security scanning.
E
There's another caveat about this: initially we thought the dashboard was pure HTML, but it actually requires a database. That's another reason why it needs an app, because for you to show this, you also need to set up all this other stuff in your CI. So you would see this dashboard, where you see the errors and the warnings and the notices; that would be one way of doing the MVC.
E
However, this might not be the best way to do it, because I don't think it explains a lot of the complexities that are below the configuration. One of those complexities is the fact that this lives on top of our review app; this is not something that is checking the code. If you see all these widgets here, they are basically checking the code.
E
So what I thought is a possibility is that we could change the expanded widget piece: it would basically go and parse the dashboard, and then from the dashboard display only the errors, and it would tell you the DOM node and the principle that it's violating. So that's cool; that could work. But again, I have concerns about the fact that this is getting mixed in here. So the other option that I was exploring
E
Is
the
idea
of
not
putting
that
in
this
level
of
mr
widgets
that
basically,
all
them
are
related
to
code
changes
and
things
are
happening
on
the
actual
and
compiled
code,
but
actually
putting
it
as
part
of
the
context
of
we
already
have
a
review
up.
So
when
you
have
a
review
up
as
part
of
a
pipeline,
we
add
this
level
to
that
widget
and
that,
if
you
have
seen
it
like,
it
doesn't
have
this
thing.
E
that says "view accessibility report", but it does have "view app", and it has the review, and it has this thing that says "passed", as well as, perhaps, the status of your pipeline tied to the environment. So what's the drawback of this approach? You will have to dig more to see whether it failed; you wouldn't know right away if there are accessibility issues. You might know that there should be, because of the warnings.
E
But again, this is where I come back to the fact that we cannot fail the pipeline. I mean, we could fail the pipeline, but in an MVC world we're just shipping the minimum functionality; we don't have thresholds to fail the pipeline. So at this point everything is a warning, and this is something that I want to find out more about, maybe do some research around, but I suspect the warnings might just be ignored. I don't know, this is perhaps my persona; I sometimes ignore warnings, but yeah, I don't know.
E
Here the app scan is what failed, and then you have to go and basically dig deeper into that job; when you open that job you'll see all the stuff, and what you have to do is basically browse the artifact, which is the report, or download it, and you'll find the issues. If you download it, you're probably not going to get this dashboard, because again it's backed by a database, but you get another version of it; sometimes it's just a JSON file with all the errors.
E
So
these
ones
less
discoverable,
but
it's
better
in
terms
of
showing
that
these
leaves
on
top
of
our
review
up
and
then,
while
you
seen
it's
a
report
of
the
app
as
it's
rendered,
not
a
report
of
the
un--
compiled
code,
you
know
and
I,
don't
I
think
that's
I
mean
I,
think
that's
interesting
and
an
important
distinction,
because
not
everyone
who's
gonna
be
doing
these
checking
these
accessibility.
Things
has
the
context
of
how
these
things
are
set
up.
E
So
I
think
it's
important
to
tell
them
how
things
are
being
tested
right,
I
think
the
other
thing
that
I
explored
it's
basically
the
idea
of
okay.
Maybe
we
keep
this
level
right,
but
instead,
when
you
expand
things-
and
this
is
super
experimental
I-
don't
think
we
have
done
this
yet
anywhere,
indeed
lab,
but
because
we
we
already
have
a
URL
to
show
this,
which
basically
iframe
is
within
the
widget
and
show
it
there
in
context
right.
So
you
will
just
need
to
scroll
through
these
and
see
whatever
you
need
to
do.
E
So that's pretty much it; those are the things I'm exploring. There's another part of this, which is: how are we going to introduce the concept of accessibility testing, and how are we going to tell customers "hey, you need a review app; this lives on top of a review app"? How are we going to prevent people from getting confused into thinking this is a testing job, when they need to set it up at the deploy stage? There are so many things we're exploring, but yeah, that's pretty much it for now.
A
So, you know, I kind of want to see it closer and in more detail, and I like this. You kind of suggested the two levels: first show the general things, and then, if you want, you can go deep-dive. One comment I would like to make: I see that you're using "a11y" to specify that this is an accessibility issue, in the hover over those icons. I would suggest
A
maybe finding some other way to make it clearer that it means accessibility and that that's the name of the tool, because for people who are not really aware of what "a11y" means, maybe a more recognizable way would be better: maybe an accessibility icon, or just writing it out as "accessibility". I don't know. Okay.
B
Yeah, and my comments were, firstly, about the proposal where you have the button under the pipelines view, right, in the widget. It's also about placement: we sometimes have jump buttons there, and it's nice that you're showing this, because I'm working on some issues this milestone, and I know I have some things assigned to me for the upcoming milestone that will add items there. So just a heads-up that this is not always static, so we need to see how these call-to-action buttons are placed.
B
Where
is
it?
It's
a
warning
message.
It's
related
to
my
previous
point
in
this
in
this
call
so
warning
message:
if
by
Quantrill,
my
request
is
figured
but
pipeline
from
bridge
result
is
not
or
something
about
that,
so
that
no,
the
application
wouldn't
hear
it
lab
CI
file
and
then
say
hey
in
the
in
those
warning.
Like
security
warning
things,
hey
you're
missing
a
line
here.
No,
then
you
play
with
expense,
but
also
under
are
people
actually
relying
to
that
section.
B
For
this
type
of
you
know
warning
message,
because
it's
all
the
way
down
in
the
in
the
in
emerge
request
view,
and
sometimes
we
have
so
many.
So
my
comment
was
also
about
that.
How
can
we
measure
you
know
if
people
are
running
on
this
section
in
emerge
request
view
for
warning
messages
in
general,
they
aware
that
they
have
to
go
there.
You
know:
is
this
the
the
best
place
so
that
we
can
perhaps
standardize
and
find
I?
Don't
know
the
right
location.
E
There are many things that I feel are important enough for this level in particular; maybe changes to code quality, I mean, that's kind of one metric that you already know about when you're working on your code, maybe you're paying attention to that. But these are the types of things that you are completely unaware of until the app is rendered and tested; you wouldn't know about them until you go and download and look at the reports. You know? And yeah, I
E
Think
that,
of
course,
if
I'm
gonna
be
advocate
for
customers,
we're
very
into
testing,
I
will
say.
Yes,
we
probably
wanna
want
I
mean
we
all
would
basically,
oh
no
I,
like
a
large
portion
of
these
testing
things
like
good
quality
and
all
that
stuff.
So
I
would
say
that
even
like
a
tab
that
is
colle
like
testing,
you
know-
and
it's
basically
just
dedicated
to
testing.
Of
course,
I'm
not
trying
to
be
like
you're
like
I,
don't
hate.
You
saying
like.
Oh
just
give
me
a
tab
for
my
group.
E
You
know
I'm
just
saying,
because
I
believe
that
there's
kind
of
like
value
you
seen
that
from
in
a
separate
context,
you
know
because
many
of
the
things
that
you're
seeing
here
are
like
requests
to
merge
the
pipeline.
It's
this
pipeline
with
these
from
these
MRR
is
running.
You
know
and
here's
the
review
up
and
here
are
the
eligible
approvers.
You
know,
and
then
you
see
testing
things
you
know.
So
maybe
we
completely
broke
down
down
like
down
the
clerk
the
level
and
then
we
will
have
a
person.
F
That's what I was going to say: one thing we could do is come up with a couple of different designs. You don't have to design the whole process in multiple ways; you could just take a page and say, "for this particular page, I wanted to explore three different ways to present the information". You could do it like that, and then put it in front of some people and get their thoughts on which one they prefer, why they prefer it, and how they would use it.
E
Yeah, I think one of my main concerns is actually about the YAML setup, the file, etc. I don't think we can address any of those things in the MVC of this, but I just wanted to call out a concern that I have been thinking about. This basically goes back to Mike's previous comment on the YAML portion, and I think, as a stage, we should think more about it.
B
Also, a heads-up that maybe we can draw on the expertise from, I think, the team that is responsible for review apps and works with environments; they know a lot, and they are, in a way, the owners. They can also make sure that other designers are involved, because maybe they are working on some things (I'm talking about the interface right now) that might influence our decisions in the future. Yeah, yeah.
C
One small thing to consider here: I noticed you said a database was required. We did just introduce the ability to make a review app, or any environment, auto-stop, because we had seen that if you don't do that, you get thousands of these things and sooner or later it becomes unmanageable. So hopefully, moving forward, people will put in that functionality, so these review apps will...
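The auto-stop behavior mentioned here is configured per environment. A hedged sketch using the `auto_stop_in` keyword (the deploy script and URL are placeholders):

```yaml
# Sketch: the review app environment stops itself after a period of
# inactivity, so stale merge requests don't keep environments alive.
review:
  stage: deploy
  script:
    - ./deploy-review-app.sh           # placeholder deploy step
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.example.com
    auto_stop_in: 1 week
```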
E
Great call-out, because as far as I know, what we don't want is for these to live forever, especially if they need the database. But that's another thing, that's me being ignorant: sometimes I open super old MRs and they still have a review app that works, you know, and that doesn't seem wise; that's consuming resources, ours or, I don't know, the customer's.
E
So yeah, I think that's a great suggestion, because perhaps, as part of the MVC of this, when you're writing the YAML specification for this type of testing, you should also provide a way to kill the app in the context of accessibility testing. I don't know, because you're saying the auto-stop exists, but not for all the apps, right? Or does it only exist as a measure that's going to stop all the apps from now on, like an actual kill switch? Yeah.
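Explicit teardown, as suggested here, is usually expressed with a stop job tied to the environment via `on_stop`. A hedged sketch (job names and scripts are placeholders):

```yaml
# Sketch: `on_stop` links the review environment to a teardown job, so the
# app can be stopped once testing, accessibility scan included, is done.
review:
  stage: deploy
  script:
    - ./deploy-review-app.sh           # placeholder
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    on_stop: stop_review

stop_review:
  stage: deploy
  script:
    - ./teardown-review-app.sh         # placeholder
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  when: manual
```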
E
Because, you know, there's another interesting thing: the accessibility report happens against an existing app, but once it's done, it's done. The app can get killed; you don't need to re-test it, you know? So even if the review app is gone, the accessibility report...
E
Yeah
but
the
JSON
file
exists
right,
like
the
JSON
file
that
supposedly
loads,
it's
complicated
me
basically
being
the
poly,
which
is
the
framework
we
want
to
use.
You
need
like
its
own
app
as
well,
which
I
feel
this
gonna.
It's
gonna
make
this
hard,
but
I
don't
know,
I
need
to
sync
with
the
and
what
I
was
gonna.
Ask
you
the
person
who,
like
the
development,
that
it's
working
on
these,
how
it's
gonna
work,
because
I
think
that's
gonna
impact
some
of
the
design
decisions
that
I
make
yeah.
A
I think it's a good call to double-check anyway. I think we have to leave for the next session, the UX meeting that's happening in one minute. Thanks so much for presenting. If anyone has any comments, please add them to the notes so the presenters can follow up on them. Thanks so much for another great session.