From YouTube: UX Group Conversation (Public Livestream)
B
Hi everybody, I'm Christie Lenneville, our UX director. Welcome to the UX group conversation. If you look in the agenda, our slides are posted, and if you have questions, please add them to the agenda and then also help take notes about the responses. Let me look and see if we have some questions. We do, so Sid has some questions for us. Let's look at slide 9. Sid wanted us to elaborate on slide 9, and this is actually one that Katherine Parra added. So Katherine, I'm going to turn it over to you.
C
Can you give me a second to read that? All right, on to slide 9. Okay, so just to give an overview of what was going on in that study: basically, we had three proposals for changing the way labels are displayed on boards, mainly with the goal of helping people reduce the noise and see where issues are at once. What we were investigating was whether that information communicates something about the teams that I'm planning for, or something about the cycle time, whereas there are other users who don't need that information at all, say they're just coming in to triage issues or groom a backlog, and they're more concerned with seeing a large volume of issues at once versus the deep details about them. So basically, what we landed on is that we'll move forward with the proposal to toggle all labels on or off as an MVC.
C
But what might be more beneficial is actually granularity: the ability to create a label set based on some property. So say we have labels related to things like personas, or our teams, that don't need to be seen by everyone at once; you could potentially turn those off based on what you actually want to see. But that's a future step. That's the main overview; hopefully that makes sense.
D
Thanks for that. I have two things I wonder about. One of them is whether we still ask the user if they want to add the default labels. I always thought that was a question asked at the wrong time. When you first go to a new board, the question used to be, "Hey, do you want to add the default set of labels?" I think we should just add them and not confront the user with a question they don't know the answer to at that time, and make sure that GitLab works out of the box instead of people having to make all kinds of decisions, and ending up making a decision that they're really not empowered to make. We should be making those decisions and then allow them to change them. But I don't know: is it still a question, or were we able to get rid of that?
A
Yes, I can answer that. I've been working on that with Gabe Weaver, who's our PM for project management, because you've asked about it pretty recently, and there's some complication in doing it. Just to give everyone else context: if you make a new board right now, there's an open column, there's a closed column, and there's a prompt in the middle that says, "Hey, do you want to add these other columns?" which is, I think, To Do, or backlog. Basically, if we just made those columns there by default, we would have to ensure that those labels existed on every project that ever made a board. And if users don't want those labels in their label sets, then every time we do that, we're forcing them to essentially go and clean up their label sets, every single time. We're also forcing a few extra clicks every time.
D
No specific suggestions, so I'll move on to slide 15, the UX of Monitor. I've seen what we're doing, but I think I'm missing the bigger picture in Monitor. In Monitor, I think you want to go from alerts to metrics to logs to tracing. I see that as the job to be done there, and I don't see us aiming for that use case.
F
Slide 4 shows some user experience scores, which always make me think about elementary school and getting grades, and I always liked to get high grades. I know we just got started with that evaluation; I'd just like to get some perspective on what the overall goal is and whether we're thinking about it from that perspective.
B
That's a great question. Yeah, we didn't love the grades that came out either. We do have a defined grading rubric in the handbook for what each of those grades means, so you can get a better sense of what we mean when we score something as a C. But the plan is to push those grades up. The point was to have a baseline score that calls out specifically where we have found pain points. We also created issues based on recommendations for improving those scores, and we're working closely with product management to prioritize those changes. Slide 4 in particular is talking about documentation improvements to those jobs to be done. But let's talk about the experience baselines first, and then I'll tie it back to the documentation.
B
On slide 3, you'll see we have five of those experience baselines that have been prioritized by product management for improvement, so we're pretty close. So, tying that back to documentation: we know that we need to improve our documentation, and the documentation team is really focused on that as well. The question becomes: we have a lot of documentation, so where do you start? Well, when we worked with product management, they identified these jobs to be done as the most impactful parts of their particular stages.
B
So it makes sense that we want to start by focusing on the things that are the most used and the most important. Our documentation team is proactively going back to those workflows in the documentation and making improvements. Our goal was to hit ten of those during this quarter. I'm not sure if we'll hit all ten, but they're making some really good progress on that. Also, documentation improvements were part of the experience baselines that the design team drove, so the baselines really tried to cover the end-to-end experience, including how documentation ties into it. When the technical writing team focuses on that, they will look at those specific issues, but they'll actually go beyond them to look at the docs for the entire workflow and see where they think improvements are needed. Does that help?
D
I love that analogy, and that it's in our face now. I think that's a great job by the UX team, and I think the numbers are fair; the way to fix them is to work with product and engineering to make it better. So I love it: I love the process, I love the videos, and I think it's really what we were missing, and this is a great way to drive it.
F
Yeah, it really wasn't about calling anyone out as much as emphasizing the fact that we're just getting started in this area. Whenever you start measuring something for the first time, your results are probably going to vary from your expectations, and you just take that and start working from it right then and there, to say: okay, how are we going to improve on that? So that's kind of my thought process as well. Yeah.
B
I'm
glad
that
this
has
been
such
a
useful
exercise,
I
want
to
say,
I
am
so
proud
of
my
team
for
one
coming
up
with
the
approach
and
then
the
way
that
they
executed
on
it
I
think
you
know
it's
been.
It's
been
really
useful
for
us
to
as
a
team,
but
I'm
glad
that
it's
been
useful
across
the
board.
So
thank
you,
UX
Department,
for
all
of
that
air
Blackmon.
B
Catherine is working on that now, so she's got it set up and is getting it ready to send out. We'll send it out by September 30th. I only know that because she very proactively updated me on it this morning. Then we'll have the score by the end of the quarter, so by the end of October you'll know what it is. And I'm realizing now, as we're talking about this, that I did not do a good job on slide number six, when I talked about 56% of all phases being complete.
A
No worries. Just to refresh my own memory, this will be our third data point for a SUS score, so we'll now start to see a trend. Okay, cool. Did we do anything where we would expect that score to trend back up? I know there was a small dip that was within the error range from our first data point to our second. Do we expect it to go back up, or stay flat?
B
The short answer is I legitimately don't know, but I'll tell you what we think will impact it over the long term. I think it's two things. One is that we're putting more of a focus on launching things with a better experience right out of the gate, so as we do more of that, it will help. The other thing we've got to do ties back to why we're doing this experience baseline effort: going back and cleaning up all of the experiences that are already there, all of that UX debt. So we do expect the experience baselines to have an impact on that. Now, realistically, that's going to take a little bit of time. We have five experience baselines scheduled for improvements this quarter, but they have to get implemented, people have to get in there and use them, and they have to see the difference.
B
I'm also not sure how massive the changes will have to be to really affect that overall score. SUS is a long-term metric that we want to see change over time, and it could take quarters for us to really start to see the type of change we want. That doesn't mean it's not valuable, though; we need to be looking at it long term. We've also got individual research efforts that help us gauge the usability of more specific things in the short term, and we'll continue to do that.
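As a reference point for the SUS discussion above: the SUS mentioned here is the standard System Usability Scale, ten five-point Likert items folded into a single 0 to 100 score. A minimal sketch of the standard scoring rule (the all-neutral respondent below is hypothetical, not real survey data):

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten Likert
    responses, each an integer from 1 (strongly disagree) to 5
    (strongly agree)."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    total = 0
    for i, r in enumerate(responses):
        # Odd-numbered items (index 0, 2, ...) are positively worded:
        # they contribute (response - 1). Even-numbered items are
        # negatively worded: they contribute (5 - response).
        total += (r - 1) if i % 2 == 0 else (5 - r)
    # Scale the 0-40 raw sum to the familiar 0-100 range.
    return total * 2.5

# A hypothetical all-neutral respondent lands at the midpoint.
print(sus_score([3] * 10))  # 50.0
```

This is why a single quarter's dip or bump is hard to read: each respondent's score is a coarse 2.5-point-granular aggregate, so the trend only becomes meaningful across several data points.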
G
Actually, I have a question related to the SUS that I didn't write down, but I just wondered: since it is fairly easy to run a SUS survey, would it be helpful for us to do an internal SUS and compare that with the external SUS, to see how much our own experience with GitLab differs from the rest of the community?
B
I think I've got you now. So I think what you're saying is: okay, we've got this overall SUS score, but we have also done our own internal measurements, in our category maturity model, about where we think we are on smaller parts of the product. How do we validate that the way we've scored those things aligns with how our users think about them? Am I getting you right?
G
Let's
imagine
that
if
we
did
an
internal
survey,
we
would
get
a
very
high
score,
for
example,
because
it
levers
loved
to
use
git
lab
and
they
think
they're,
it's
mostly
flawless,
for
example,
and
the
external
survey
would
reveal
that
we
actually
have
a
lower
score.
Maybe
that's
a
good
way
to
surface
that
we
actually
yeah.
We
when
we
make
UX
decisions
and
product
decisions,
we're
very
closed
in
to
our
own
use
cases
and
to
our
own
perception
of
usability.
C
Yeah, I think it's an interesting question. I won't go into the survey questions in case we actually do run it, but I think the responses you get, even internally, might surprise you. So it might just be an interesting exercise anyway. We have a lot of new GitLabbers who are very, very fresh to GitLab as a product, so they might not have the same love and loyalty as a longer-term user. It could be interesting just to see how that goes. Yeah.
H
Sure. The way we constructed the OKR: firstly, we're going to be rerunning the general experience baseline reviews to see, overall, whether we moved the needle as a team for that specific job to be done, so documentation plus UX and everyone else. Specifically in terms of the documentation pieces, we've set a number of specific goals for each job to be done: for example, evaluating SEO around whether you can easily find the how-to.
H
Can
you
easily
find
the
use
case
in
the
documentation
and
how
to
complete
it,
so
we're
sort
of
comparing
it
before
and
after
and
making
the
changes
to
get
to
that
after
so,
you
know
before
I
couldn't
find
how
do
I
do
X
and
get
lab
in
the
docs
now
I
can
search,
Google
or
I
can
search
our
own
Doc's
and
I
can
find
it
so
individual
documentation
problems
as
well.
So
there
was
a
gap
in
the
docs
where
we
didn't
even
cover
how
to
do
X,
and
now
we
have
it.
H
I don't think we're doing documentation testing in this way right now. It's something that Christy and I have spoken about a bit, and we can consider how to build it in as part of actually gauging how we're addressing the jobs to be done. I think it goes back to the broader question of how it fits into the overall experience baseline process.
H
I
think
there's
a
separate
question
as
well
of
how
we
test
the
docs
like
do
we
want
to
include
like
right
now
we
make
it
easy
to
create
an
issue
or
a
comment
on
a
doc,
but
we've
thought
about:
should
we
add
some
sort
of
rating
a
lot
of
doc
sets
have
to
make
it
really
easy
to
say.
Yes,
this
was
helpful.
If
not
what
was
missing
so
you
know
in
previous
companies,
I've
done
that
kind
of
thing.
H
We've
been
able
to
get
kind
of
numbers,
grouping
Doc's
as
a
set
and
looking
at
how
many
negative
comments
or
how
many
problems
are
cropping
up
as
a
whole.
So
we're
thinking
about
that
as
well.
I
think
we
have
an
issue
around.
How
do
we
group
Doc's
in
order
to
you
know,
rather
than
looking
at
one
doc
at
a
time,
look
at
a
set
of
docks,
for
example,
all
the
docks
that
pertain
to
one
stage
or
group?
How
are
their
number
is
trending
in
terms
of
feedback
we're
getting
back
in.
I
Just a quick follow-on to the question around the docs. Mike, I know that you had done a little bit of review on things like support tickets and whether there were docs, MRs, or issues related to them. I'm wondering if any work is being done around those, like looking at those jobs and, around the OKR, looking at whether we're getting more or fewer support tickets asking those specific questions.
H
That's something we could be doing, and it's kind of the way I've been thinking about it. Plus, periodically going back and sharing good examples of support-generated MRs; I think that's also a good idea. I've been thinking about the best way of identifying those. Maybe that's something I could talk to support leadership about, to find a way of calling them out.
I
Yeah, thanks. I hate to say I'm less involved with the metrics now than I had been, but I'm definitely always happy to bring in ideas or sit in.
A
Not really a question, more of a comment: just saying nice job, and reiterating Sid's point on loving the experience baselines. As a shadow this week, there have been at least two times externally where Sid has highlighted that work, both in a talk and to a customer, using it as an example of what makes us different, defensible, and differentiated. I just wanted to share that as a cool anecdote I've experienced as part of the shadow program, and to say good job. Thanks.