From YouTube: UX Key Review - October 20, 2021
A
Hi everybody, welcome to the October 20th, 2021 UX Key Review. I want to give everyone a couple of quick reminders. First of all, please don't mention any customers in this call, and also a reminder that we don't present in this call. So with that said, I'm going to pause and wait for some questions.
A
That's okay. I know exactly what you're talking about, so I'll clarify the question; if I get it wrong, tell me if you're talking about something different. What I think you're talking about is how we get our legacy components that are in the product migrated over to our single source of truth, the Pajamas components. It's been an ongoing project for a very, very long time. We made a bunch of headway a little over a year ago, where we migrated about, I think, 800-ish components. Tori can correct me if that's wrong.
B
Prediction: our availability has been better; do you expect our SUS score to increase?
A
There are a lot of things going on. Performance did impact us, and we have done a lot of really good performance work. Now, whether or not that's reflected in the score at this point, we can't be sure. It does take a while for people to notice that sort of change, go "okay, this is better now," and stop commenting on it. The other thing is that once we resolve performance problems, that doesn't mean there aren't other problems that they'll start to talk about instead. Adam's team is still working through the Q3 SUS data.
B
Thanks for that context, appreciate it.
C
There's a world in which Sid's absolutely right: performance and reliability negatively impact SUS. Or performance and reliability get better, so that component of SUS goes up, but SUS overall goes down, because there are many components to it. What's the relative sizing of something like learnability, which is another thing that you've highlighted, versus something like performance? Do you have a sense of the relative weighting of these components?
A
Let me see if Adam does, based on verbatims. What's interesting, before I turn it over to Adam, is that SUS doesn't specifically ask about performance in its questions. It's just that performance was enough of a consideration for people to bring it up separately in our verbatims, where we basically ask, "Is there anything else you want to tell us?" As for learnability: there are two SUS questions that are specifically focused on learnability, and that is where we see scores that are lower than we would expect, based on how the other questions were rated by users.
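For context on the weighting question raised above: SUS is an equal-weight instrument, so each of the ten items contributes the same amount to the 0-100 score. A minimal sketch of the standard scoring, plus the two-item learnability subscale; note that which two items count as "learnability" (items 4 and 10, per Lewis and Sauro's published factor analysis) is an assumption on my part and is not stated in the call:

```python
def sus_score(responses):
    """Standard System Usability Scale score (0-100).

    `responses` is a list of ten answers on a 1-5 agreement scale,
    in questionnaire order. Odd-numbered items are positively worded
    (contribution: response - 1); even-numbered items are negatively
    worded (contribution: 5 - response). The summed contributions
    (0-40) are multiplied by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly ten items")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5


def learnability_subscore(responses):
    """Learnability subscale, rescaled to 0-100.

    Assumes items 4 and 10 (both negatively worded) form the
    learnability factor; that split comes from Lewis & Sauro's
    analysis, not from this meeting.
    """
    return ((5 - responses[3]) + (5 - responses[9])) * 12.5


# A neutral respondent (all 3s) lands at the midpoint of both scales.
neutral = [3] * 10
print(sus_score(neutral))              # 50.0
print(learnability_subscore(neutral))  # 50.0
```

Because every item carries the same 2.5-point weight, two consistently low learnability items can depress the overall score by at most 20 points even if the other eight items are perfect.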
D
For this past quarter, not yet. But Kirsty, you're absolutely right with everything you said. And Eric, those two learnability questions that Kirsty referenced were consistently lower compared to the other eight questions.
A
Yeah, I have asked for a KR in collaboration, a shared KR that focuses on learnability in Q4. I don't yet know whether or not we'll be able to do that; I should know by end of day today. But in Q3, one of the KRs for UX was to do heuristic evaluations of 10 different parts of the product from a learnability perspective and to come back with specific recommendations for how we can improve learnability in those areas. So the request was: great, now that we know, let's burn down some percentage of those.
B
It's a bit technical, but it came up during a conversation. We seem to be using a different way to look at repository diffs versus MR diffs. It's causing, or it might cause, a duplication of effort. It's more of a front-end thing, but it's like, wow, that seems really inefficient to me. Not sure there's anybody here who knows; I'm discussing a front-end thing in a UX call, but whatever.
A
You're always welcome to ask. I will say I have no idea. Is there anyone else on this call who knows anything about this?
B
There's a good... can you put it on some agenda, Eric? Either ours or the scaling meeting, or I don't know where. But one of the main complaints I hear about GitLab is that our MR view is bad, and now I'm learning that the MR view is different than the diff view.