All right, welcome to the UX showcase. I'm [inaudible], and I'm going to present for the Progressive Delivery stage group under Release. I'm going to talk a little bit about A/B testing and our problem validation of that new part of our product, as you will. So first off, a few general things.
Where is Release within the DevOps lifecycle? It's right there in between Package and Configure, after Verify, and we're kind of like the gateway towards the next stage groups, such as Configure and Monitor. So we make those possible, and they extend upon our feature set, as you will.
Then we go a little bit into who is part of the Progressive Delivery stage. In every part of my presentation you can find links in the bottom corner, so you can quickly navigate to them. On this page you can see who is part of the stage group, all awesome people to work with, and you can also see our product categories. So please dive a little bit deeper into that if you're interested.
Now on to the next: I have a little bit more general news. Progressive Delivery also has a new UX-specific handbook page. It includes things like quick links to other pages that are relevant for the Progressive Delivery stage group: the UX roadmap, product planning, target personas, jobs to be done, product- and department-level OKRs, but also stage group OKRs, which you can see on the right side. This is a new thing we're going to try out in quarter 3, and I think it's pretty exciting.
Then you have rituals and meetings; they're all listed there, plus recordings. This is, of course, a work in progress, but I think it's pretty nice that you can immediately dive deep into the actual things that are going on in the stage group: performance indicators that are more at the UX department level, but also, coming up, our stage-group-specific performance indicators (those are a work in progress at the moment), and some info on UX debt triaging. Please check it out.
The link is in the slides. Then, on to the main topic of today's presentation, which is A/B testing, and a little bit on the why and the what of it. A/B testing is an important part of progressive delivery, and in order to explain a little bit about that, I have to briefly explain the difference between progressive delivery and continuous delivery.
Progressive delivery kind of expands beyond continuous delivery and makes it less risky to deploy. Deployment is still automated, but, for example, you can create feature flags which you can turn off immediately if some functionality introduces risk in your deployment.
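The kill-switch behavior described here can be sketched roughly like this. It is a minimal illustration only, not GitLab's actual feature-flag API; all names (`FLAGS`, `new_checkout`, and so on) are made up.

```python
# Minimal sketch of a feature flag as a kill switch: risky new code ships
# behind a flag and can be disabled instantly, without a redeploy.
# All names here are illustrative, not GitLab's actual API.

FLAGS = {"new_checkout": True}  # in practice fetched from a flag service


def is_enabled(flag: str) -> bool:
    """Unknown flags default to off, so new code paths stay dark."""
    return FLAGS.get(flag, False)


def legacy_checkout(cart: list) -> str:
    return f"legacy:{len(cart)}"


def new_checkout(cart: list) -> str:
    return f"new:{len(cart)}"


def checkout(cart: list) -> str:
    # The flag decides the code path; flipping it to False is the
    # "turn off immediately" behavior described above.
    if is_enabled("new_checkout"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

Flipping the flag to `False` reroutes every call to the legacy path without redeploying anything.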
Apart from that, progressive delivery expands on deployment strategies such as feature flags and A/B testing, but also blue-green deployments, and additionally goes into detail on post-deployment monitoring. As I also detailed in my last showcase, it transfers control of deployments to new personas, and that's especially true for A/B testing, which I'm talking about today.
A/B testing is geared at new personas, such as designers, product managers and anyone else that might be influenced by, or is a stakeholder in, those A/B tests. Again, links are in the bottom corner, so please feel free to check those out. A brief view of the business goal: we want to develop an A/B testing solution as an expansion of our feature flags, to further our offering towards progressive delivery.
This is going to bring new personas to the Progressive Delivery stage group, which is going to bring more value, and also add to performance indicators such as SMAU and GMAU for us internally at GitLab. The job to be done here is: when we have multiple solutions, I want to be able to test them and monitor key data variables, so I can find out the preferred method of moving forward.
A very simple example of this: you have a signup page, and one version of the signup page is a little bit different from the other. It is not really clear which one is better, and you want to validate each direction and see which one works best.
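The signup-page example can be sketched as a minimal experiment: split users deterministically between the two variants, then compare conversion rates per variant. This is an illustration only; the function names and the 50/50 split are assumptions, not the solution being validated.

```python
# Minimal A/B test sketch: deterministic variant assignment plus a
# per-variant conversion-rate readout. Illustrative only.
import hashlib
from collections import Counter


def assign_variant(user_id: str, experiment: str = "signup_page") -> str:
    """Stable 50/50 split: the same user always sees the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"


def conversion_rates(events):
    """events: iterable of (user_id, converted) pairs."""
    exposures, conversions = Counter(), Counter()
    for user_id, converted in events:
        variant = assign_variant(user_id)
        exposures[variant] += 1
        if converted:
            conversions[variant] += 1
    return {v: conversions[v] / exposures[v] for v in exposures}
```

Hashing on the user ID (rather than randomizing per request) keeps each user's experience consistent for the duration of the experiment.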
Of course, there should be a human aspect to the decision, but this at least gives you an indication of which direction is objectively better, or at least helps you make that decision. So on to the next, which goes into the opportunity canvas.
I just want to highlight a few points. In the bottom corner you see a little bit of a legend, where light blue is problem-focused, dark blue is business-case-focused, and green is a little bit solution-focused. It gives perspective as to how big the market is: currently it's at 48.5 million dollars, and by 2025 this A/B testing market will be around 10.8 billion dollars, and we're going to target at least a two percent market share by that time.
This is going to result in seat expansion; it's going to result in increased IACV, as this will be a paid feature; and, of course, in stickiness if combined with feature functionality from follow-up or downstream stages, such as Monitor, Plan and testing.
I know our internal growth teams are super happy we're working on this, and, as can be seen in the pain points, we want to bring all those personas and all that information together into a single place inside GitLab, and not let it be a disorganized effort anymore, for our customers as well as ourselves. Of course, the growth team can be a testament to that and say how much value this will bring to us.
I don't want to go into too much further detail on this; I invite everyone to read the opportunity canvas. There's also a recording of our latest meeting on that; links are in the bottom corner. Then, on to the research that we're currently doing: the problem validation is finding out what we need for our offering inside of our product. We have a brief definition of done here.
After the interviewing, we're currently in the phase of Dovetail tagging (Dovetail is the product we use for our research), and we're doing the insight creation, along with issue creation, which is kind of a secondary step there. The next step, which I'll go into in a little more detail later in the presentation, is the ideation sessions, which I want to guide partly with whiteboarding sessions; then some follow-up interviewing if need be, mostly internal; and basically doing it as holistically as possible, including the right stakeholders at the right time, and then eventually following into planning breakdown. Then on to the next: some interview learnings. I've interviewed a lot.
There were 13 participants. That is because there were multiple personas to interview, and I did part of it internally and part of it externally. So here you can see there were multiple personas across product management, design and engineering. This was nice, though I do say it was a little overkill, but that is after the fact. I would say: make sure the external recruitment form is tested.
We had a little bit of an error there, where some of the people who actually responded to our form online didn't give their contact details, rendering their responses useless. So that's a thing to keep in mind.
Once again, it's good to ask about company details, to make sure the company actually aligns with the target audience we're developing for. I had one participant who didn't align, and whose research results were so different from what we are targeting that I think it could have been prevented if I had checked the company beforehand. But that is at least a learning for next time. On the upside, recruitment and interviewing internally go super fast. I was really surprised by this, but it's nice to highlight.
On the other hand, I think this is a nice tool to keep available to validate your discussion guide: interview at least one person internally and see if you have the right granularity and specifics in your questions. I put a link here to a Slack thread in the UX research Slack channel.
So if you have any feedback regarding this, please leave it in that Slack thread; the link is in my slides here. As the last point here: for this research specifically, the internal interviewing seemed to yield the most valuable results. This is because the research participants could tell about both their previous experience (so, before GitLab) and their in-GitLab experience with A/B testing, about their needs and wants. And with that, I would say, looking back at it:
Perhaps internal interviewing would have been enough, at least initially, with a secondary wave of research later on in the process. Going on to the interviewing highlights: here are some highlights I want to call out. I would say read them all, but some things here are kind of nice.
For example, "product management takes the final decision," but right below that is "based on a higher objective; we made a proposal to the head of marketing; they would approve, but it would be a team call to push it forward."
So there's some insight to be gotten from there, and you can ask: all right, what is the right way that decision should be taken? Should it be an approval process, yes or no? So there's a lot of material for us to go through and base our insights on. Then on to the next, which I think is the more exciting one: some early insights. Here you can see who is involved so far with A/B tests. You can see the product manager is an important persona.
Of course the engineer, too. We talked a lot with people from e-commerce sites, but of course our own marketing department is marketing-related and growth is marketing-related, so it is not strange to see those here. But especially the designer/researcher persona is there, and the data analyst is super important; and then we have the engineering manager, who can be seen as part of engineering. To highlight a few of the early insights:
I think it is interesting to see that each experiment needs its own single source of truth, bringing together all that communication, evidence and decisions, and also, most importantly, the postmortem that documents the decision, the reason why, and what happened after, which I think is super important the longer you go on with A/B testing.
Of course, this one is kind of nice: data presentations are often related to business metrics. We have had multiple examples where people want results to be reported or derived back to IACV, in order to say: all right, is this experiment beneficial to our product or company or not? Instead of, say: we want to have so many clicks. That doesn't mean too much; you need to drive it back to business metrics, as you will. There are a lot of other interesting insights here, and these are just the first few.
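As a rough illustration of turning raw experiment counts into a readout that can be tied back to a business metric: the two-proportion z-test below is a generic statistical technique (not something from the interviews), and the counts are invented.

```python
# Compare two variants on conversion rate rather than raw clicks, with a
# two-proportion z-test as a simple significance check. Counts are invented.
import math


def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference in conversion rates between variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se


z = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
# |z| > 1.96 corresponds roughly to significance at the 5% level
```

A readout like "variant B converts 1.5 points better, and the difference is significant" maps much more directly to a business number than a raw click count does.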
There's got to be a lot more, but I think, with the information we currently have, we can get to at least a viable state of A/B testing, which is nice. So, to conclude: there's a clear business opportunity, and the growth team would welcome it.
We expand our own and our customers' ability to do solution validation, and to do it more easily. I think this is going to be super welcome.
We can get to viable, and it increases, of course, the value of progressive delivery considerably by bringing in new personas, and it is a great opportunity to follow the product development workflow from start to finish from my own perspective. That is it, apart from the next steps, of course, which I talked about a little bit before: we're going to do some whiteboarding.
I want to include the right stakeholders at the right time; we're going to do some story mapping; and I foresee then a "think big" ideation in Figma.
It's going to be a main, single source of design: a file that is going to fuel multiple implementation epics and issues. And, of course, roadmapping along with product management, making sure that the right issues are scheduled at the right time. Other than that, there's an additional effort of making jobs to be done central for Progressive Delivery, with a focus on continuous delivery, not progressive delivery, and it is related to one of our OKRs.