From YouTube: [REC] Key Meeting - UX (Public Stream)
A
Oh, and yes, we're not presenting; that's correct. The first three items are just informational, so we'll go straight to item number four, which is Stella's.
C
Do we have... yes, there's Stella. Sorry, I'm here. This is kind of like a combination FYI; I just wanted to get your opinion. We just updated our three-year strategy to move this metric from, I think, 80 down to 77, and I see your target is 75, which is still a stretch goal from where we are at the moment. But I want to kind of get your thoughts on whether or not 77 should be that target.
A
Yeah, so let me start by saying: yeah, it probably should, so I'll go ahead and make that change. The reason why it was 75 before is that we were trying to do a shorter-term goal, but I think the consistency is important, so I will make that change.
C
Awesome, sounds good. And then the next one, number five, is also me. SaaS usability improvement is a CEO objective for this coming quarter, and I've seen a number of related issues. I think we're still fleshing out some of what this means, but what needs to happen in order to reach 72.5 by the end of the quarter?
A
I'm just gonna be real honest and say it's not gonna happen. I would not want to falsely set your expectations.
A
One reason is that in Q4 of fiscal year 21 we were at 70.8. Moving two-plus points, or I think that's two points, sorry, is just a really big move in SUS. Every point is a big move in SUS. The other reason is that we just got our Q1 FY22 SUS score in, and it has gone down. We are still analyzing the reasons why, but it was our most significant drop yet. So, Adam, let me let you talk about that for a moment and then I'll finish up.
D
Yeah, sure. So, yes, it was our most significant drop yet. Catherine from the research team is looking into a lot of the verbatims that respondents leave when they score.
D
So we're trying to uncover the reasons as to why, and Christy gave one suggestion: that maybe it was related to some of the technical issues that were experienced in this last quarter. We're looking into that, and we're looking into any possible correlation to pricing changes that were made. So that's where we're at. We should have some more details in the next week or so, I would imagine.
A
Yeah, and thank you for that, Adam. And can you go ahead and share what the Q1 number was?
A
Yeah, which is super disappointing. We do see performance come up frequently in verbatims in the SUS; folks really consider performance to be a key part of usability. And I know we've worked on it, but we did have some performance issues during this last quarter. So that's why we want to dig in and see if maybe that was the reason why. But the real answer is: we're not sure yet, but we're working hard to figure it out. What questions do y'all have? Because this is a big deal. This is a big one.
E
Yeah, and I'll add also that this is a great reminder that SUS is not just about user experience. It's about the entire business experience that a user or a customer has with us. So if, for example, a Starter customer is not happy with our EoA, they may give us a bad SUS score. If a Starter customer moves to Premium but felt forced to go to Premium, they could give us a bad SUS score.
A
Yes, our emotional roller coaster. So we just got that number yesterday, which is why we haven't talked yet about whether the 72.5 number was the appropriate goal to set for our shared OKRs this quarter. We could look at it in a couple of ways. One is we could lower it and say, well, hey, we're at 68.8.
A
I think Adam said that maybe we need to go ahead and lower it to 70, because 70 is a stretch goal. It may also be an aberration: it could be based on things that happened during Q1 that we have worked hard to resolve, and it may bounce back up in the next quarter.
F
Can you articulate what you said during the e-group offsite in terms of how to think about the goal setting? I thought that was really insightful and intuitive. If you remember what you said at that meeting, I think it'd be good to discuss here. Regarding the SUS score, you said you had a different idea of how to set the target, based on best-in-class benchmarking that you did.
E
Yeah, so I was looking left and right to see some sort of correlation between revenue growth and SUS, and I found one from Bentley University; I'll link it here. It showed that Dropbox had a SUS of nearly 77, and when they had it, they were experiencing growth of 120% year over year. And then there's a whole set of others below that, including Microsoft Word and things like that, which had a lower SUS.
E
So there is a weak correlation, but it's positive: a 0.54 R-squared between revenue growth and SUS. So it's a good metric to have. And then the second thing was: when we look at numbers like 80 or 77, we are comparing ourselves to thermometers and Gmail, and those are not the kind of products we build.
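[For context: the kind of correlation check described here takes only a few lines of Python. This is a minimal sketch with made-up (SUS, growth) pairs standing in for the Bentley University data; statistics.correlation requires Python 3.10+.]

from statistics import correlation  # Pearson's r; Python 3.10+

# Hypothetical benchmark points: (SUS score, % year-over-year revenue growth).
# Illustrative stand-ins, not the actual Bentley University dataset.
benchmarks = [(77, 120), (74, 65), (70, 40), (66, 25), (62, 10)]

sus = [s for s, _ in benchmarks]
growth = [g for _, g in benchmarks]

r = correlation(sus, growth)  # Pearson correlation coefficient
print(f"r = {r:.2f}, R^2 = {r**2:.2f}")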
A
Yeah, just to add a little more color to that: 68 is kind of the industry-accepted average SUS score. That being said, software tends to be higher than that; by exactly how much, we don't know, but it's more in the low 70s. So for us to shoot for 77 feels appropriate, because we don't want to be average. We want to be great, and great starts happening in the upper 70s.
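[For reference, the SUS numbers in this discussion come from the standard ten-item questionnaire. A minimal sketch of the usual scoring arithmetic, with one made-up respondent:]

def sus_score(answers):
    """Score one respondent: ten answers, each on a 1-5 agreement scale.

    Odd-numbered items are positively worded and contribute (answer - 1);
    even-numbered items are negatively worded and contribute (5 - answer).
    The sum of contributions (0-40) is scaled by 2.5 onto a 0-100 scale.
    """
    if len(answers) != 10 or not all(1 <= a <= 5 for a in answers):
        raise ValueError("SUS needs ten answers between 1 and 5")
    total = sum((a - 1) if i % 2 == 0 else (5 - a)
                for i, a in enumerate(answers))
    return total * 2.5

# A team-level SUS score is the mean of individual scores.
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # -> 80.0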
B
So, Christy, what we're doing this upcoming quarter is something kind of different from the past, which is we're kind of zooming in: we're going to focus on merge requests, because that's the most important unit of work and arguably the most important experience, other than login and settings and things like that. Are we confirmed that we feel like that's the right thing to do here? Or are we going to do it even if it has a negative impact on SUS, just because it's important for other reasons? Or would there be a strategy change?
A
That's a great question, Eric. I do think it's the right thing to focus on. I don't think there's any scenario in which we work on improving the performance and usability of our merge requests and it has a negative overall or long-term impact on SUS; it's critical to our product. Now, I should say: will that work immediately make the SUS score go up? Maybe not! This is a trailing indicator. It takes a while for changes like this to have an impact on the SUS score.
A
I see zero downside to focusing on something that is so critical to our product experience. We have done this to an extent, in that we haven't done it as an OKR, but, pardon me, the settings and navigation work that we've been doing is also a targeted effort based on SUS feedback. So we've got that; that's been going on as part of stage group work. In addition to that, we focused on broader themes of visibility of system status and system performance in Q1. Does that answer your question?

B
It does, yeah, thanks. So, next question; separate question.
One idea to get more insight into this: you know, we're looking at time-series data, right? We're asking the same set of questions about the holistic experience over time, and we see a trend in SUS. We could go back and ask some of these same people the meta-question about the trend and say: do you feel like GitLab's user experience has actually declined over the past...?
A
That's an interesting idea. Adam, you are our research expert, so you'll give a better answer than me.
D
Yeah, I love that idea, and I would actually throw in there also, Eric: if they say yes, ask why. Is there something, maybe a perceived performance downgrade or anything like that, or maybe a specific experience that they saw change?
B
There's this Google app, it's called Insights or something like that, but they basically just ask you questions every once in a while. They give you, like, nickels into your Play account when you answer them, and they always ask this type of question, and it's phrased as, like: improved, stayed the same, gotten worse, or, like, other, and then with a free-form thing.
B
So maybe we leave the door open to someone saying "oh no, it's gotten better," rather than saying "has it gone down, yes or no?", because then we're sort of pre-loading the answer to that. So that might be the way we ask it. It's like: would you say GitLab's user experience has improved, stayed the same, or decreased over two years, or something like that, and then the...
F
Are we concerned about a user-mix change here? If you ask the same users, and I think this goes to Eric's point, but if you ask the same users we had eight quarters ago and float forward, I'm not sure you would see it get worse. I think what we're seeing is, potentially, you know, we get people who haven't been on core for a long, long time; maybe we're getting people who are just new to the product.
D
...that over time, so there really isn't a huge standout amongst those. Some scores are a little bit higher; for example, mature users are a little bit higher than that 68.2 number, but new users are actually lower than that number, too.
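[The cohort cut described here is easy to express as a query. A hedged sketch with hypothetical survey rows; the column names and the 180-day maturity threshold are assumptions for illustration:]

import pandas as pd

responses = pd.DataFrame({
    "sus_score":   [80.0, 62.5, 70.0, 55.0, 75.0, 67.5],
    "days_active": [400,  30,   900,  90,   250,  45],
})

# Split respondents into "new" vs "mature" users and compare mean SUS.
responses["cohort"] = responses["days_active"].map(
    lambda d: "mature" if d >= 180 else "new")
print(responses.groupby("cohort")["sus_score"].agg(["mean", "count"]))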
A
So something interesting is: we know that onboarding is difficult for GitLab, and we hear that reflected in the verbatims as well. So the verbatims confirm what we see in that data.
G
Yeah, so thanks a lot. And, you know, regarding MR usability improvements: consider also looking at the following epic and the notes in order to find usability and performance problems, because I think there's quite a list there of things we know are kind of broken about the MR process.
G
Under five: when users raise usability concerns, engineers sometimes don't think about it as a performance concern. Here is an example, and it's not about the specific person responding, but it's our Chief Revenue Officer saying: hey, I did this and I had to wait 25 seconds. And the response was: so, 25 seconds doesn't seem unreasonable; anyway, maybe we can do some performance optimization sometime. Without any commitment. And probably a better response would have been: pretty annoying to wait 25 seconds without any indication that anything is happening.
G
We should give some indication, things like that. So I think there's something to be done there as well, and I'm not sure whether it should be in the OKRs, but consider thinking about that. And Eric, also for you: a bit more user empathy, especially around our highly trafficked parts of the application. This is not every part; this is just, like, the most crucial feature in GitLab.
A
This is a really good point and, interestingly, Eric and I discussed this in our one-on-one earlier today. This is an interesting take on it, Sid. I agree that it may not be an OKR, but that doesn't mean it's something we can't address. Let me take this feedback and think about lightweight ways we can help do this within the organization.
H
Yeah, we treat UX debt differently than the industry standard, I would say; the definition we use is different. We specifically apply the term UX debt whenever we've deliberately made a decision to deviate from an MVC in a way that had a negative impact, or what we believe would be a negative impact, on the user experience. And we define it in that way so that we can track it, to see, you know: are there areas for improvement in our process? Are we making our MVCs too large?
C
I think that makes sense, what you said. I think there's probably also some metric, which probably doesn't exist at the moment, around: are we committing things that have negative implications, where we should change that rate? Is there a way to measure something like that?
H
We would say that 25 seconds is way too long, and so we would not release it until we felt confident that it wouldn't be that long of a wait time. Maybe we were able to even reduce it down to 10 seconds, and we say that 10 seconds is okay; it's not great, but it's better. Or that we at least give an indication of how long they're waiting, or that they are waiting.
H
Anything would be a little bit better in that experience. So, yeah, I think we just need to kind of push for that more and more, be more advocates for improving the experience, and be very detailed in what we define as an MVC.
I
Okay, let me vocalize point five. Christie, happy to partner with you. We already have a performance bug refinement bi-weekly; happy to add more emphasis here on the key areas. We also have timings clearly called out for APIs, and a number of issues in flight are directly related to code review, which I wouldn't be surprised is used by the editor or in MR views. So, happy to work with you here.
A
That
sounds
great.
There
are
industry
standards
on
this
based
on
user
research
and
nielsen
norman
is
kind
of
the
industry
standard
around
they
are
they
go.
Do
research
that
tells
us
what
some
of
these
metrics
should
be
and
what
they
say
is
yeah.
One
second
feels
instantaneous.
It's
similar,
it's
seamless
between
one
and
ten
seconds
people
are
willing
to
hang
in
there.
Then
I
love
it,
but
hey,
that's!
Okay,
anything
more
than
10
seconds
and
people
go
hey.
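[Those thresholds translate directly into UI feedback rules. A minimal sketch; the function and return values are hypothetical names, and the 1-second and 10-second cutoffs are the Nielsen Norman response-time limits cited above:]

def feedback_for(expected_seconds):
    """Pick a loading-feedback style for an operation's expected latency."""
    if expected_seconds <= 1:
        return "none"          # feels instantaneous; no indicator needed
    if expected_seconds <= 10:
        return "spinner"       # users will wait, but show something happening
    return "progress bar"      # long wait: show progress and, ideally, a cancel

for t in (0.3, 4, 25):
    print(f"{t}s -> {feedback_for(t)}")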
E
What we have said is that that's not where we should focus on usability to begin with. So we have to decide if we want to be focusing on usability for those tools as well, because when a CRO then comes and says, for a tool that has 30 monthly active users, that the performance is low, and we all spend so many minutes discussing it, we have to remember: hey, is this a good use of all of our time?
G
Yeah, I think that's valid. This example is about the Static Site Editor, and that's not highly trafficked, so that wasn't a good example of something we should focus on. We should focus on the popular parts. Really good point, thanks for making it; I missed that.
I
Yes, it might be moot; just wondering if this captures sentiment post the recent incidents we had, or not. Do we know if there's any correlation?
E
So we can start to get better benchmarks of what SUS scores are for new tools in today's world, because I think comparing ourselves to Microsoft Office, for example, and saying that it's at 75 so we should be at 75 is fine. But Microsoft Office has taken 20 years to get to 75, and we don't want to take 20 years to get to 75. So we have to be able to look at the best of breed today and then figure that out.
F
Yeah, and I think, reading this answer, I'm actually quite concerned about the SUS score methodology. I was reading the handbook, taking a look to see how we actually collect sample sizes for our self-managed customers, right? Because we're 90% self-managed, I know, at least from a paying-customer perspective. So maybe I'm just missing the mark here.
F
Maybe the goal of this is to be broader, to free, non-paying, and core users and all different types of licenses. But if we can get an ample sample size for usability via email address, and this is in the same realm, why can't we get an NPS score from our self-managed customers? Do we feel like we have a good sample, right? My question was not that originally, but reading your answer, that's kind of what was there. I was more like...
F
Can we apply whatever we did? If we got this great sample for SUS, let's use it for NPS too; that'd be awesome. That was kind of where I was going initially, but then I read your response. So, just asking your reactions to the sampling, and whether you're concerned at all by that.
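[Since NPS comes up here alongside SUS: the NPS arithmetic itself is simple once a sample exists. A minimal sketch with a hypothetical batch of 0-10 ratings:]

def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100.0 * (promoters - detractors) / len(ratings)

print(nps([10, 9, 8, 7, 6, 3, 9]))  # -> about 14.3 on the -100..100 scale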
D
Yeah, so, I think it was about six months ago, we were part of this big process around: how do we reach out to more of our customers? And the short answer, Craig, is we don't have the ability to reach out to our self-managed customers, and that makes it incredibly hard. The only way we do is through First Look, which has a total of 3,000 users in it, and we don't even know how many of those are self-managed. So, with SaaS...
D
It's actually advantageous for us to sample that audience, because we know so much about them. We're able to really dial in on usage, some of those criteria that I rattled off earlier, like: oh, we know this individual has been using it for less than 180 days, for example. Also, people in SaaS are seeing our fixes a lot earlier than self-managed, who might be two, three-plus releases back, so we're able to get a faster indicator of what our changes are...
D
...actually, how they're impacting things. So that might be a longer answer, but I'm not that concerned about the fact that we're not sampling self-managed to the same extent. We actually do sample them every other quarter.
D
This is a quarter in which we actually did sample them, but again, we have to use First Look. Through NPS, through that process, we are starting to kind of explore in-app messaging, to see if we're able to sample that way for self-managed.
F
Yeah, I see your point. I just think our SaaS product is gonna skew SMB, maybe a little mid-market. You know, I think that's where it'll skew, so you're going to get that type of user. And maybe for this product it doesn't matter; maybe, from a usability perspective, the SMB user is just the same as an enterprise user. But maybe not, right? I just want to flag that, based on that discussion, so just put some thought into it.
E
Yeah, and I'll say, I think we'd all love to have access to more users, so we can slice and dice across various segments, both for NPS and for SUS. Another alternative to consider is Gainsight, which we now have access to, and Gainsight has an NPS tool; we should be able to run NPS through Gainsight, and most of the customers in Gainsight are large customers that are self-managed.