From YouTube: UX Group Conversation - September 21, 2021
C: Hi Christy, my name is Jessica Brown. I'm one of the managers on the revenue team, and this is my first time here. I haven't seen the slides yet, but I wanted to come to the meeting to familiarize myself with this particular group. So if you could give a little background on the group, that would be awesome.

A: Sure, welcome, Jessica.
A: Thank you for asking your question. User experience at GitLab is inclusive of product design, user research, and technical writing. Within that product design bucket, we also have our Foundations team, which focuses on our design system: a single source of truth for the components that we use in our product.
A: Our responsibility is to create exceptional experiences for our end users: to really understand the problems they have and what they need, and to help solve those problems in the best ways possible, while iterating and making the smallest yet still valuable changes we can. We partner very closely with our product managers, and we are part of the engineering department.
D: But what should the mix be, when we're trying to put together a broad survey that reflects our customer base, in terms of new versus experienced users?
E: There was a key review not too long ago where Sid asked a question that made us think about this a little more, and we started reaching out to the data team to find out what our percentages of experienced and new users are. We have those numbers, and it's not 50/50, which is what we've been sampling for SUS. The reason we've been doing that up to this point is to meet a lower margin of error, which is fairly common for SUS across industries.
E: So this quarter we're going to experiment a little and try to alter those experienced-versus-new percentages for SUS to match our own user base. I hope that answers your question.
F: Yeah, it's kind of nice to put this after Anu's question, because it's related. I know that one of the main findings from the UX research was that learnability was having a big impact on SUS and on the general view of the UX. I noticed that one of the OKRs is around improving learnability, and I'm wondering: what are the primary ways you're measuring that learnability and whether it's improving? Are you just using SUS, or are there other ways you're measuring that in terms of what the designers are working on?
A: Yeah, and I'll say it's not just designers; we have folks working on this across the entire UX department. Learnability is a key theme for us. We certainly go beyond SUS: SUS tells us what's happening, but it doesn't tell us why it's happening. So once we understand what's happening, we dig into the why. Actually, I'll have each of my leaders address this. We'll start with Adam talking about the user research we're doing on learnability.
A: Then I'd like to move to Valerie and have her talk about the KR we have for product designers around learnability. Then we'll transition to Susan, who can talk a little bit about learnability in docs, and then finally we'll transition over to Tori, although I think that will actually lead into Kyana's question right after that. So maybe, Tori, we'll save you and let you respond to Kyana instead. All right, Adam.
E: Thanks. I lost track of the order of people there. First I'll say that one of the ways we're measuring learnability is within SUS, the System Usability Scale; thank you, Bronwyn, for asking that question. It's a scale with 10 questions, and question 10 is a big one for how we measure learnability. After I'm done talking I'll post a link to all the questions, so there's a little bit more context there.
E: This quarter we're learning a little bit more about how our users learn GitLab, and we're approaching it from two different directions. One: we're talking to people who are GitLab users who have been using it for at least a year, and we're interviewing them to find out how their learning experience went. Thinking back to when you started learning: what kinds of hurdles, what kinds of challenges, what helped?
E: I'm handling that one personally, and it's just fascinating to be alongside people as they're learning and to check in with them every couple of weeks. At the end of that we'll find out a little bit more about what's effective for people in terms of learning approaches, and why, and what's not that effective, and also how they learn in the context of still having to do their jobs. So there are lots of interesting things we'll find out there. I'll hand it over; I think Valerie's next. Yeah, okay, great.
G: You know, brand new: not necessarily brand new to GitLab, but brand new to using a particular feature or feature set within GitLab. That's the angle we took from a product design standpoint: getting somebody who didn't really know how to do something in the tool, seeing what their experience was like, having them document it, and then having them work together with the designer who actually works in that particular area on coming up with recommendations for how to improve the experience.
A: Thank you, Valerie. Before we move to Susan, who will be next, Anu has, I think, a follow-on comment slash question.
D: Yeah, first of all, thanks, Valerie, I love the scorecards. I watch them, and I've sometimes found myself squirming when I see some of the usability issues that we ship the product with. So I appreciate you doing that, because it brings the issues front and center for everybody and motivates people to make the change.
D: I had a question on the learnability signal. We know it's there, but have we amplified the learnability signal through our sampling bias of a 50/50 new-versus-experienced mix, and if so, by how much? To ask it another way, the real question I have is: are we missing some high signal from our experienced users because we are so overweight on new users?
A: I'll start and then I'll turn it over to Adam. What I'll say is that learnability is not just a problem for our new users; it's a problem for our experienced users too, for a few different reasons. One: even when someone has been using GitLab for a long time, they aren't necessarily using every single feature. So as we introduce new features, or as they explore new features, they are a new user there, and they talk about learnability problems in those areas.
A: Learnability also impacts experienced users because they have to onboard new users as those users join their company, and we hear about that; they make comments about it in the verbatims in SUS. The last thing I'll mention is that the cohort mix has not seemed to have a big impact on whether or not learnability is a key theme for us, because we've had different mixes there. But Adam, please follow on.
E: Yeah, you're spot on, Christy. If we go just one click in, and I'll share a link to this, but for question 10, Anu, for Q2, which we just wrapped up, we were able to look at those two cohorts, and it was 57.8 for new users and 62.6 for experienced users.
E: So, as I mentioned earlier, and very closely related to your earlier question, we will be looking to offset that 50/50 balance.
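To illustrate why the sample mix matters, here's a minimal sketch of re-weighting per-cohort SUS means to a population mix. The 57.8 and 62.6 cohort means come from the numbers mentioned above; the 30/70 population split is a made-up placeholder, not GitLab's actual ratio:

```python
def weighted_mean(cohort_means, weights):
    """Combine per-cohort mean scores using cohort weights
    (weights are assumed to sum to 1)."""
    return sum(cohort_means[c] * weights[c] for c in cohort_means)

means = {"new": 57.8, "experienced": 62.6}

# A 50/50 sample mix gives one aggregate...
print(round(weighted_mean(means, {"new": 0.5, "experienced": 0.5}), 2))  # -> 60.2

# ...while weighting to a hypothetical 30/70 population mix shifts
# the aggregate toward the experienced-user score.
print(round(weighted_mean(means, {"new": 0.3, "experienced": 0.7}), 2))  # -> 61.16
```

Oversampling the lower-scoring cohort, as a 50/50 mix would do here, pulls the aggregate down relative to the population-weighted figure, which is the amplification Anu is asking about.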
H: For the docs site, we focus on facets of learnability. We look at readability, and we look at findability and the organization of topics. At a very atomic level, we test for the readability of our docs: there are ways to test how readable a page is, and we've run those tests. The most obvious way of measuring readability is: how long are your sentences?
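That sentence-length check can be sketched in a few lines. This is a toy version with naive punctuation-based sentence splitting, not the actual tooling the docs team uses:

```python
import re

def avg_sentence_length(text):
    """Average words per sentence: the simplest readability
    signal. Sentence splitting here is naive (on '.', '!', '?'),
    so abbreviations like 'e.g.' will be miscounted."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    total_words = sum(len(s.split()) for s in sentences)
    return total_words / len(sentences)

print(avg_sentence_length("Short sentences help. They are easier to scan."))  # -> 4.0
```

Richer readability formulas (Flesch reading ease, for example) build on this same words-per-sentence ratio, which is why sentence length is the obvious first measurement.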
H: Findability has to do with whether a person recognizes that the information they're looking at is the information they're looking for. We talk about organizing information into concept, task, reference, and troubleshooting chunks, and we're working on that. When you chunk information that way, it makes it more findable, more readable, more understandable, and, I would assume, more learnable, because of all of those different aspects.
A: Yes, and documentation is a key way that users learn about our product, so that's why we're so focused on making these improvements. Okay, Kyana, take us home on this concept with your question.
J: Yeah, thanks, Kyana. So far we've gathered examples across the product in order to map them to the use cases that we currently have defined in Pajamas, and to determine if there are any use cases that we don't already have documented there.
J
So
from
there
we're
looking
at
how
the
design
may
change
based
off
those
use
cases
right
now,
pajamas
the
region
is
just
one
design.
It
has
an
illustration
some
text
and
some
action
buttons.
J
One
of
the
main
use
cases
for
empty
states
is
for
when
a
feature
hasn't
been
configured
yet,
and
we
one
of
the
opportunities
that
we
see
is
being
able
to
provide
more
in-app
guidance
during
those
situations.
So,
for
example,
we
recently
changed.
I
think
it's
the
pipelines
page
to
be
from
our
normal,
empty
state
to
showcasing
like
adding
a
ci
cd
template
so
that
you
can
use
that
feature.
K: Yeah, sure. The other example you added was something we were pinged on: an issue around epics that are created with no issues or related epics attached to them yet. I think there's an opportunity for us to be a little bit more creative and go beyond, like you mentioned, an illustration, some text, and a button that links to documentation.
K
So
for
that
example,
we
can
potentially
suggest
related
issues
or
epics
that
match
keywords
to
that
epic,
so
that,
instead
of
having
to
do
this
manually,
finding
issues
adding
them,
we
can
do
this
intelligently.
So
those
are
just
a
couple
examples
where
we're
trying
to
think
beyond
our
static,
informational,
empty
states
and
the
those
are
a
couple
things
that
have
come
up
so
far
during
that
kr.
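The keyword-matching idea above could look something like the following. This is purely illustrative, not GitLab's implementation; a real version would use search infrastructure rather than bag-of-words overlap, and the stopword list here is a minimal stand-in:

```python
def suggest_related(epic_title, candidate_titles, top_n=3):
    """Rank candidate issue/epic titles by keyword overlap with
    an epic's title, highest overlap first; drop non-matches."""
    stopwords = {"the", "a", "an", "to", "of", "in", "for", "and"}

    def keywords(text):
        return {w for w in text.lower().split() if w not in stopwords}

    target = keywords(epic_title)
    scored = [(len(target & keywords(t)), t) for t in candidate_titles]
    ranked = sorted(scored, key=lambda pair: -pair[0])
    return [title for score, title in ranked if score > 0][:top_n]

print(suggest_related(
    "Improve pipeline performance",
    ["Pipeline performance regression",
     "Update docs",
     "Speed up pipeline startup"],
))  # -> ['Pipeline performance regression', 'Speed up pipeline startup']
```

Even this naive sketch shows the shape of the feature: an empty epic surfaces a ranked list of likely-related items instead of a static illustration and button.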
D: Yes, thank you. One of the product principles we have is "you're not the customer," and this encourages PMs to go seek primary feedback even when they have conviction about what the customer may or may not want. As we're doing a lot of these learnability initiatives, I see designers from other groups looking at the product area of another designer, and that's great, because it gives a different perspective.
D: At the same time, we need to make sure to keep the user persona front and center when assessing whether something is difficult, or whether this sort order is better than that sort order, and things like that. As a PM or a designer, we may think something is difficult, but it may not be for that persona, because this is their daily work. So how do we handle that?
A: Yeah, that's a great question. As a user experience department, obviously we always want to think user-first, and we always try to remember that we are not our end users. So yes, we are doing heuristic reviews; that is one initiative among many to understand learnability.
A: You heard Adam talk about the direct interactions we're having with users in research, because that's a really important perspective for us. For the heuristic reviews, we rely on our expertise as UX practitioners to take a first pass: if I'm looking at this job, does it, on its face, make sense? It can be a really good way to do two things. One is to uncover low-hanging fruit.
A
We've
got
low
hanging
fruit,
usability
problems
in
our
product
that
you
don't
have
to
be
an
expert
in
one
of
our
personas
to
be
able
to
look
at
it
and
just
go
just
from
a
general
usability
perspective.
This
doesn't
make
sense
and
so
we're
trying
to
uncover
some
of
those
things.
The
other
thing
that
happens
during
a
heuristic
review
is
you
see,
opportunities
for
further
user
research.
Where
you
go-
and
this
may
not
make
sense-
it's
not
making
sense
to
me-
maybe
it
does
make
sense
to
the
persona
we're
targeting.
A
Let's
go
dig
in
part
of
that
digging
in
could
be
talking
to
the
pm.
Who
may
have
a
better
understanding.
It
could
be
talking
to
internal
personas,
it
could
be
going
out
and
actually
conducting
additional
user
research
valerie.
Do
you
want
to
add
to
that.
G
Yeah,
I
think
that's
great.
We
do
a
ton
of
usability
testing
as
well,
so
this
is
likely
not
the
first
time
that
we're
also
looking
at
the
experience.
If
you
look
at
the
entire
product
development
blow,
we
have
many
opportunities
for
validation
and
so
kind
of
to
christy's
point.
I
think
we're
we're
doing
a
really
good
job
at
making
sure
that
we
are
getting
the
user
voice
in
in
perspective
in
anything
that
we
do
and
then
we
always
work
on
a
level
of
confidence.
G
So
that's
something
that
we
take
into
consideration
too.
So
it
could
be.
I
don't
know
we'll
find
out
towards
the
end
of
this
quarter,
but
it
could
be
that
some
of
the
information
that
we're
getting
from
the
heuristics
and
it
stands
out
to
us-
and
we
do
say
we
have
more
questions
about
this
or
we
do
need
to
get
validation
from
our
actual
users
and
then
we
will.
That
will
be
the
next
step
for
that.
So
most
of
the
recommendations
that
will
come
out
of
this
there
will
be
that
question
of
is
this.
G
Is
this
really?
You
know?
How
confident
are
we
about
this
particular
thing?
Should
we
be
moving
forward
with
it
and
it
will
likely
trigger
some
additional
research
with
the
actual
users?
We
also
do
category
maturity.
Scorecards,
which
is
another
form
of
validation,
has
a
different
purpose,
but
there's
all
of
these
ways
that
we're
introducing
the
user
to
make
sure
that
you
know
we
have
a
high
level
of
confidence
in
whichever
direction
that
we're
moving
forward.
H: My question was actually answered, and I'm reading more about it now. Cool.
A
I'll
verbalize
it
in
case
anyone
is
watching
this
on
the
internet,
so
what
blair
asked
was?
Have
we
considered
other
frameworks
like
ux,
lite
and
pedro?
I
think
you
and
adam
both
answered
that
so
pedro.
You
can
start.
B
Yeah
thanks
yeah.
Basically,
we
already
use
the
ux
slides
a
variation
of
it
when
doing
the
category
maturity
scorecards.
B
It's
one
of
the
questions
that
we
use
there
if
you're
not
familiar
with
those
types
of
scorecards,
take
a
look
because
they're
different
from
our
ux
scorecards
that
we
talked
before
and
and
yeah
that's
one
of
the
uses
that
we
do
for
the
ux
lite
adam
adds
some
context
about
the.
Why
and
how
that
relates
to
sus
adam
go
ahead.
E
Yeah
thanks
for
bringing
up
the
category
maturity
scorecard.
I
I
forgot
about
that.
So
yes,
for
ux
light,
it
is
faster.
It's
way
faster.
E
We
don't
really
have
a
huge
problem
with
that
being
a
big
issue,
because
we
are
able
to
get
a
pretty
large
n
for
our
surveys,
but
I
think
the
big
thing
around
why
suss
and
not
ux
light
at
the
moment,
is
because
of
the
level
of
granularity
that
sus
gives
us
with
those
10
questions
like
we're
able
to
really
go
in
to
help
diagnose
like
what's
bringing
us
down
which
of
these
questions
and
a
great
example
of
that
is
what
we're
finding
with
learnability
at
the
moment.