From YouTube: Progressive Delivery UX/PM meeting 2020-07-07
Description
Meeting to improve alignment between Product Management and UX for Progressive Delivery
A: All right, so first up is more of a... let me first write it out, by the way. I found out yesterday, while working through the participants of the A/B testing survey, that there was an error in the survey, the worst kind. If you got through the questions and answered "oh, I don't work with GitLab", the survey just cut you off. That question is informational to us, whether you use GitLab or not; it shouldn't have been a blocker to entering the research. Those people couldn't give their contact information, so all their responses were rendered useless.

A: So out of the 48 or 43 participants, we only got about six people, because those were the ones that used GitLab at all; only for those did we get some contact details. The good news is that there are some really interesting ones among them, but not enough. I need at least one more engineer and one more designer, so we need to send it out again. It's a real bummer, I would say.
A: By the way, this survey has already been fixed. I posted an update, I believe in the issue, which I also sent to you; I was messaging you and Rupert. So the survey is already fixed, with some trial and error: I had to click through it about six times to figure out whether I had now done it right, yes or no.
A: But I mean, it does make me wonder, and I've said this before: the user experience of Qualtrics is atrocious. It is not easy to see if you've made a mistake; you need to walk through the survey like a dozen times. Is there a good reason that we're using Qualtrics and not some other application for this? Because I cannot believe that there's not a better application for us than Qualtrics.
B: Qualtrics is really just like Office, like Microsoft Office: it's very, very powerful, but it's not intuitive to use. And the skip logic, the sheer fact that I can show somebody a question based on the answers they gave to another question, even a forced-ranking question: that's a Qualtrics thing. You can't do that in Forms, you can't do that in SurveyMonkey.

B: I think Typeform is another option, but they don't allow you to do that either. So it's really super powerful, it's just not intuitive. I agree, the UX is horrific, and it's not intuitive to go through and choose what you need to choose for options. That's why I always encourage people: if you don't need all those starter questions, just take them out. If it's merely an informative question, don't put it in there.
B: If you don't need to know whether they use GitLab or not, don't ask them. If you don't need to know how many people they are working with, don't ask them. Make the survey as simple as possible for yourself, to try to reduce these... they're not errors, it's just not intuitive. The survey flow sucks; I hate that tool. I once had a survey get corrupted on whatever server node it was sitting on.

B: I had to reach out to Qualtrics help via chat, and they actually had to remake it on another node, because I couldn't get the survey flow to work: it wouldn't add things, it wouldn't move things around in the survey flow. So I hear you. I've been dealing with it on my end since before this job; it has always been this way in Qualtrics, and I'd say they have never changed.
B: It's the flexibility and the power of the logic within the survey creation. For instance, in the survey Ian put together for his jobs to be done, he had people rank the jobs to be done that they felt most matched them or were most important to them, one to five. You can actually use a setting in Qualtrics to only ask a set of questions based on whatever option they put in slot 1 or slot 5, or wherever, on that ranking question. That's just an example.
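The ranking-based display logic described above can be sketched outside of Qualtrics as well. Here is a minimal Python illustration, with invented job names and follow-up questions, of showing follow-ups only for whatever a respondent placed in slot 1:

```python
# Minimal sketch (NOT Qualtrics' actual API; names are invented) of display
# logic driven by a forced-ranking question: follow-up questions are shown
# only for whichever job-to-be-done the respondent ranked first.

FOLLOW_UPS = {
    "deploy safely": ["How do you roll back a bad deploy?"],
    "monitor releases": ["Which monitoring tools do you use?"],
    "manage feature flags": ["How many flags are live at once?"],
}

def questions_to_show(ranking):
    """ranking: list of jobs-to-be-done ordered from slot 1 downward."""
    if not ranking:
        return []
    top_choice = ranking[0]
    # Unranked or unknown jobs get no follow-up questions.
    return FOLLOW_UPS.get(top_choice, [])

print(questions_to_show(["monitor releases", "deploy safely"]))
# ['Which monitoring tools do you use?']
```

In Qualtrics itself this is configured through display logic on the question rather than code; the sketch only mirrors the behavior being described.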
A: I mean, we can do it with Qualtrics, but it makes me wonder: if the UX were better, I would easily have spotted that. Now it was just, oh, did I fix it or didn't I? And then eventually, by trial and error, you see: oh, I think I fixed it. I hope this is good.
B: I have spent two hours on somebody's survey before, trying to fix it because it had become so complicated. I've actually gone into the survey flow, deleted everything out of it, and just started over again, because it's so difficult to understand. So you're not alone, Dmitri, you're not alone. And I haven't found any foolproof tutorials or anything that will get me around what I'm doing wrong, because usually it's that boolean logic, and I just don't think like a coder.
A: Anyway, on to the next one: solution validation. This is perhaps the more important one. I was wondering, Lori, how you were thinking about what the other stages are doing in terms of solution validation. How many times do they set it up? How do they set it up, generally? What is the usual amount, and do you have any tips or expectations?

A: I want to do more solution validation in Progressive Delivery, and it's not just because I want to do them; it's more because I want to make it a default part of our flow of working. I was discussing earlier today with Nadia that there's perhaps potential to put more quantitative data tracking inside of our product, so that, as part of a feature, we could say we expect it to reach this amount of usage.
B: When I use the term validation, I refer to either some sort of unmoderated study, where we put something in front of people, ask some questions and they tell us the answers, or a qualitative study to validate that the thing we've designed, or the thing that we propose, is the thing that will allow them to achieve their task with the lowest amount of effort and struggle.

B: So while you can look at the usage data and see, hey, this new thing that we launched is getting used by 50% of our people, it still doesn't tell you whether they're having trouble with it. It just says that they're using it. So it's helpful to know how much is being consumed, but you'd still want to go back, maybe, and find out.
B: I don't really get to sit in on anybody else's meetings or find out what they're up to; I wish I could. But what I'd like to tell them, if I could talk to them, is: if you're unsure about the thing that you've just created, if you have any doubts at all, run some validation testing on it, and run it prior to it getting put out in the world, please. And those tests, whatever they are, don't always have to be qualitative sit-down sessions.
A: Thanks for that. Like you say, you're not sitting in on those meetings, but you're joining these kinds of UX/PM meetings across multiple stage groups, right?
B: Talking about Juan and them: they are doing this, they're validating when they have questions around their designs. So if Hayana has uncertainty around how something should be presented, or what the workflow should be around some feature for release management, she will dive in and do some solution validation to understand whether what she's designed will work for users. Jackie will do it too; she'll take it on as well from a PM perspective.
B: It seems to differ. Juan talks to a lot of internal people, because his stuff and testing is largely internal; they were in a dogfooding stage for a while before, I think, their category just moved in maturity. So he was talking to a lot of GitLab people, and it was easy for him to find people. For Runner, I think he talked to some internal people.

B: Then I think he talks to some external people too. Darren, the PM, is trying to find some external people; they were going to use some TAMs to try to find people to talk to about Runner. Hayana does it for release management.
B: Yeah, and she has her own style, but she's asking the questions in unbiased ways, so that makes me happy. I think sometimes Jackie might work some of Hayana's questions into the conversations she's having with customers. She's constantly talking to people; I don't know where she finds them, but she's constantly talking.
B: Let's see, who else... James. James will do some problem validation and testing, and he will try to find some customers, but I feel like I did the last external problem validation for them, when we tried to find external customers who were doing accessibility testing. He has a project he's just starting to work on to try to reach outside, though. Now that their category has gotten more mature, they're going to have to go outside; they can't just rely on inside people anymore.
B: You know the situation, right? So there's no set cadence that I can find. What I think the biggest issue is, and what I've told you guys since October, is that you're doing my job, and in order to do somebody else's job, you cannot do your own job at 100%. So they're struggling to work in time for design, time for validation, time for understanding the problem, both from the PM perspective and for any sort of problem validation that needs to happen that the PM should be doing.

B: That, coupled with all the other stuff the designer has to do... I don't think any one designer is doing a solution validation every milestone. I feel like it's every other milestone, maybe every two and a half, but it also depends on what they're working on, how quickly, and how much confidence they have in what they're working on.

B: If it came straight out of an issue that had lots of great comments, and it was very clear about what people need this thing to do, they tend to have better confidence around it. I do encourage them to go and get user feedback, but, like you say, recruiting does take a long time. I mean, I'm still trying to recruit for my own stuff, and I'm just asking about merge requests.
A: Yeah, so that brings up a question. If I have to recruit or interview people, and I understand this, it is necessary, it makes the barrier so much higher when I just want to do some quick validation, like, hey, is this a good idea, yes or no? What I've noticed, and I've also had some conversations with, for example, people from Create, is that they often pull in some devs.

A: I have to come up with a really, really good reason to say that I need external participants, because it will make the time frame in which I can do my solution validation, or problem validation for that matter, so much longer.

A: And one more thing: if people see my work and say, oh, Dean is on a good track, he's doing like five solution validations a month, man, this guy is on fire, and then they see someone in another stage group and say, this guy is only doing one solution validation... but that other one is doing it with external participants, while I do it only with internal ones. It's like I'm using a cheat code.
B: Well, you know my advice on internal versus external. Part of it is: are the internal people truly the people who should be using this feature, and are they the only people who should be using it? Meaning, do we, as internal people, embody what an enterprise organization might be? Are there any differences there? Sometimes there aren't: a DevOps engineer is a DevOps engineer, sometimes, depending on what you need feedback for.

B: Sometimes there are differences. With Growth it's hard, I know, because they can't even recruit people who don't use our stuff; it's not easy for them to find people, and that's probably why they're using internal people, and they're probably in the dogfooding stage too. So that's another good reason to use internal people: if you're trying to dogfood a solution. But there are times when you have to use external people.
B: Take release managers: I don't even know if we have any internally, or how many we have. But if you're trying to create a persona for release managers, you're going to need to go find external participants for that study, because you need the variation there, the variety. Whereas if you've got a small feature that you just want to make sure you've implemented correctly, and the people who would be using that feature are the same as some internal participants, that's a different situation.
A: For A/B testing it is very important to have external participants as well. I think I'm taking it quite far: for A/B testing I've done seven internal interviews, and I'm now doing probably six external ones to get a better perspective of what is being used out there. But that was also because internal interviewing is so easy, and there's a wide variety of personas using A/B testing. Every research effort is kind of different.
B: Well, and that's the other thing you have to keep in mind: the rigor that you're using with the testing. If you are only worried about getting it done faster, you're going to sacrifice the rigor every time. And I don't use that word meaning it has to be academic research, where we do all these horrible longitudinal studies that go on for months; I use that word meaning that if you are only going 60% of the way, to me you're wasting your time.

B: You need to get closer to 80 to 90% of whatever it is that you need to get confirmed, and if that means working in some external participants and taking more time to do it, you really should do that, because 60% is too close to 50% for me, and for me that's not enough rigor. On top of that, to be honest with you guys, and I'm sure you've heard me say this, the company has to understand.
B: What I'd say to other people is that it does depend. If it's something small, or just an addition to something else, that could be a different decision. But if it's a brand-new feature, where we're trying to get market share, trying to corner the market, to get people away from another product...

B: I would say, if what I'm hearing is true about people complaining that they like us, but it's hard to use, or it's not intuitive, or the errors don't make sense, and all these thousand tiny little paper cuts, then you have to stop and slow that down. You need to slow it down and go deeper in the things that you're doing, to make sure that you're not making silly choices or silly mistakes.
A: I feel that sometimes it is my designer ethics versus what you're being rewarded for from the company's perspective, and that sometimes makes it very hard to choose. Am I going to put in the effort to recruit external participants? Can I not just do a solution validation within three days, be done with it, and move on to the next thing? Because we solution-validated it to some extent, which is better than nothing, right?
B: If it's something smaller, if it's a tweak, if it's the addition of a smaller feature to an existing thing, and you feel like it's okay to move fast and break things, and you can get the confidence you need while still moving fast, then do it. I don't want to be the person to tell you that you have to stop your process and slow down.
B: They don't have time to do it either, because they're running too fast too. So I don't know. I'm sorry it's taking so long for you guys to get good external participants; I can't even get them to send my survey out to First Look. I mean, I was relying on Rupert finding people on LinkedIn to fill out my survey, and I'm like, this is not working, please send it to First Look. So I hear you.
A: Let me see, the last question is on the usage data. I understand it's not validation, but my motivation for it is not just purely validating that a solution is valid. It's actually that Engineering and UX don't always align on what an MVC is and when a feature flag should be turned on and off.

A: If we see that the feature is in there, and I can prove that when we decided to ship a feature in a way that UX agreed upon, it actually gets a relatively higher amount of usage, versus "oh, we're just going to ship this because we want to get done with this merge request" for whatever reason, then I can make a case to say: hey, there is an improvement in our actual usage data, we're providing more value to our customers.
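The kind of argument sketched above, that usage went up after a UX-agreed version shipped behind a feature flag, could be illustrated with a toy calculation. Everything here (event names, user counts) is invented for illustration, not GitLab's actual tracking:

```python
# Hypothetical sketch: compare a feature's adoption before and after its
# feature flag is enabled, to argue (only) that usage increased.

def adoption_rate(events, feature, active_users):
    """Fraction of active users who triggered the feature's event at least once."""
    users = {e["user"] for e in events if e["event"] == feature}
    return len(users) / active_users

# Invented event logs for a 10-user instance.
before = [{"user": "u1", "event": "deploy_board_view"}]
after = [
    {"user": "u1", "event": "deploy_board_view"},
    {"user": "u2", "event": "deploy_board_view"},
    {"user": "u3", "event": "other_feature"},
]

print(adoption_rate(before, "deploy_board_view", 10))  # 0.1
print(adoption_rate(after, "deploy_board_view", 10))   # 0.2
```

As the next speaker points out, a rise from 0.1 to 0.2 shows more usage, not necessarily more value.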
B: And that's where I don't know if you can say it's value. You can say they're using it more, but without, maybe, an issue that you open where you ask people to give feedback now that the feature flag's been turned on, you don't know that just because they're using it, it provides value. All you know is that they're using it.
B: Yeah, you can definitely say they're using it, that they're using it more than they were before; that is very valid. And you can extrapolate from the fact that they're using it that they're going to it and interacting with it more, but that's kind of where it stops.
B: You can't really say it's providing value. All you can really say is that they're using it more than they did before; now let's go find out if they like it. I wish we could put up a little survey or something. We did this at Rackspace. We had trouble with anybody who was doing on-prem stuff, but on the dot-com side we could put up a little survey that said, hey, do you like this: happy face, sad face, whatever. And if they picked a face, you could ask them really quickly:

B: would you like to provide more information about that? And so we could get a couple of little verbatims and a bit of passive data collection on how it's going with this new thing we just turned on for you. We used Qualtrics for that, actually, and it worked okay on the dot-com part, but we couldn't do it on-prem, because obviously we can't. But that's another idea that we could look into.
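The happy-face/sad-face micro-survey described above amounts to passively collecting a tally plus optional free-text verbatims. A minimal sketch of that aggregation, with invented response data:

```python
# Hedged sketch (invented field names and responses) of aggregating an
# in-product micro-survey: tally happy/sad answers for a newly flag-enabled
# feature and keep any free-text verbatims for follow-up.

from collections import Counter

def summarize(responses):
    """responses: list of {"face": "happy"|"sad", "verbatim": str|None}."""
    tally = Counter(r["face"] for r in responses)
    verbatims = [r["verbatim"] for r in responses if r.get("verbatim")]
    return tally, verbatims

tally, verbatims = summarize([
    {"face": "happy", "verbatim": None},
    {"face": "sad", "verbatim": "The rollback button is hard to find."},
    {"face": "happy", "verbatim": "Love the new deploy view."},
])
print(tally["happy"], tally["sad"], len(verbatims))  # 2 1 2
```

The tally answers "are they using it and do they like it?" at a glance, while the verbatims give the qualitative follow-up the speakers say raw usage numbers can't.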