From YouTube: Metrics-Driven Development - GitHub Universe 2015
Description
Lynn Root's talk will tell the story of how her team at Spotify chose which metrics are important for them to focus on, what technologies they have used and are using, and how they’ve iterated over feedback loops to fine-tune what metrics they care about.
About GitHub Universe:
Great software is more than code. GitHub Universe serves as a showcase for how people work together to solve the hard problems of developing software.
For more information on GitHub Universe, check the website:
http://githubuniverse.com
My name is Lynn Root; I'm from Spotify. I am a backend engineer based in San Francisco, and I've been there for about two years. I'm also the vice chair of the board of directors of the Python Software Foundation. The PSF is a nonprofit organization behind the Python programming language: it holds the intellectual property rights, promotes Python, and helps foster the well-known community behind it. I also founded the San Francisco chapter of PyLadies and do a lot of global work for PyLadies, which is a mentorship group for women and friends, so for everyone in the Python community.
All right, so for this talk I will first give a quick intro to what Spotify is (I'm sure a lot of people know about Spotify already) and how we use data. Then I'll go into how we use metrics, how my team and I came about implementing metrics, and essentially what we learned along the way, to sort of appreciate the bigger picture. So you can sit back a little bit; you don't have to take notes. I have everything posted online, and I have a little write-up as well. What I basically want you to take away is this: metrics and tracking are super fun, but should you track everything?
We as developers have a tendency to want to know everything: how many visitors are on a website, how many referrals came in, how folks use our services, if our servers are even up and functional. We have a lot of tools at our disposal, like New Relic, Graphite, Google Analytics, Sentry, PagerDuty, whatever. We even track ourselves: all the steps and exercise and breathing, how fast our hair grows, I don't know. Whatever we can track, we track it, right? Maybe just to feel better about ourselves. I don't know if you do anything about it, but if you measure everything, it's easy to get lost in why you're measuring it.
It's easy to lose the meaning of all that data. So to start, some background information so we're all on the same page about Spotify and how we use data. Spotify is a streaming music service, available in nearly 60 countries. We beta launched in 2007, spread across larger European countries in 2008, and finally came to the U.S. in 2011.
We have over 20 million paid subscribers and 75 million monthly active users, and we have over 30 million unique songs, not including compilation albums and such, and we add about 20,000 each day. We also pay about seventy to eighty percent of our income to rights holders, totaling about three billion dollars so far. I work in a very small office in San Francisco with about five other developers, and I feel obliged to say we are hiring for two mobile devs.
So as you can imagine, at Spotify data is quite important, and these numbers that you see here are about a month old and they quickly grow. We track user-generated data like signups, logins, activity within the application itself, even tweets: the good, the bad, and the embarrassing ones.
We also track server-generated data, including requests to various services, response times, and response status codes, among a million other things, in each squad within Spotify. And we own what we collect, along with the what, when, and how of consuming such data.
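(As a minimal sketch of what recording that kind of server-generated data can look like: the talk doesn't name a specific client, so the statsd setup, host, and metric names below are assumptions, not Spotify's actual stack.)

```python
import random
import time

from statsd import StatsClient  # pip install statsd

# Hypothetical prefix and metric names, purely for illustration.
statsd = StatsClient(host="localhost", port=8125, prefix="squad.facebook_login")

def do_login(user_id):
    """Stand-in for the real service logic."""
    time.sleep(random.uniform(0.01, 0.05))
    return 200 if random.random() > 0.05 else 503

def handle_login(user_id):
    start = time.time()
    status = do_login(user_id)
    elapsed_ms = (time.time() - start) * 1000

    statsd.incr("requests")                     # request count per service
    statsd.timing("response_time", elapsed_ms)  # response time
    statsd.incr("status.%d" % status)           # response status code distribution
    return status
```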
Yet with all this setup, with all this technology, I'm very embarrassed to say that my team, my squad in San Francisco, did a lot of development in the dark. We were not tracking anything; we didn't know what was successful when we did feature integrations. We had no clue if the back-end services that we were supposedly maintaining were actually up and running.
So this is a story of self-discovery: how to become a better, more effective team, and we did this by capitalizing on understanding our own data. Not everyone is a data scientist or statistician or whatever, but everyone can read graphs and understand why it matters when the majority of our users can't log in. At Spotify we've been very public about how we use agile in our software development process.
If you search for Spotify and agile on YouTube, you get a couple of really good videos describing our process; I highly recommend checking them out. But one key aspect of agile development is to iterate: iterate over our product, iterate over ourselves, trying to find what works best for us, for the company, for the squad, and everything in between. And so late last year we started something new.
What it was was monthly challenges, set up to figure out the team's current condition, comparing it to the desired condition of where we want to be and how we want to deliver our product, feature, service, whatever. The following explanation might sound a little bit project-manager-ish, but I found it very useful when thinking about implementing metrics for our team's back-end services. So the main goal was to find our target condition as a squad.
Where do we want to be? It's certainly difficult to establish a goal without context, without an understanding of where we are now. So to figure out our baseline, we all sat down to answer a few questions as a group; let me break it down here. The first question was: what do we want to deliver, or what do we deliver? That's a seemingly easy question, right? My squad and I initially struggled to answer this right away, and it certainly didn't roll off our tongues. So I looked at our past.
The list of integration projects that we've delivered and the services that we currently maintain includes the Uber and Spotify integration, Last.fm, and the SoundHound integration, but the most critical one is certainly our Facebook integration; it's the reason why we're here in San Francisco. With Facebook we have login, new user registration, and publishing to Facebook, with over half our user base connected to Facebook in that way. So the next question for us is: for whom do we produce said product or service, and who actually defines our work at Spotify?
There isn't really any micromanagement; there's a lot of trust, actually. But our leadership team defines the direction that the squad takes, so they're certainly one of our customers. With the many integrations that we've done, we do have a lot of external partners. Thankfully, as the tech squad we are a bit shielded from the partners and direct communications, but that makes our business development team one of our partners or customers, and indirectly the partners themselves. But then, who depends on us? Who actually uses our work, our product or service?
So yes, the majority of users do log in with Facebook, and it's safe to say it's a pretty integral system to the Spotify platform, so we certainly can't mess it up when Facebook makes breaking changes to their login protocol or their APIs, which they have been known to do unannounced in the past. But there are also other teams within the company that plug into systems that we run for social aspects, like sharing to Facebook from within the client itself.
So moving on, the next question is about expectations: what do our customers actually expect from us? When trying to answer this question, it occurred to us that we never really asked our customers what their expectations are, and so we did. We wanted to know what was important to them about what we deliver. Was it on-time delivery? Being predictable versus being productive? Did they expect solutions to problems that they didn't know existed?
What were their expectations on quality, usability, or other measurables? What were their expectations for how the squad works: did they want weekly updates on progress or problems, etc.? And we couldn't ask all of our customers, right? 70, 75 million users would be a bit much, and expectations could be different for different customers. So internal teams expected the Facebook service to just be reliable and scalable.
Business development wanted us to be very clear about what we could feasibly implement, and it's safe to assume that our users want to log in or sign up via Facebook if they choose to, and for it to just work. So the last question, about those expectations: how do we know we've met them? And this sort of stopped us dead in our tracks.
We naturally wanted to implement a technical solution, and so we instituted what are called feedback loops. It's a very generic term, not just for tech, that we can use to understand how and what feedback is given, and for our squad the main feedback loop we wanted was metrics.
We wanted all those snazzy-looking dashboards with eye-candy graphs and visuals, built with the latest technologies that I'm sure will be obsolete tomorrow or whatever. But in all seriousness, we wanted an immediate visual representation of what was going on. But what did we want to see, and what questions did we want to answer? So, in line with the idiom of throwing spaghetti at the wall to see what sticks, the squad brainstormed for a while, trying to come up with any question that we'd like to see answered. Some of the ideas included sign-up and auth flow abandonment; Facebook-connected users as a percentage of total users, and that trend over time; the percentage of users that signed up through Facebook per day, week, whatever; and any Facebook-related errors.
We were also interested in daily active users for partner features, registration and subscription rates by partner, and Web API usage by partner, and then a squad-focused Twitter feed, like for Uber and Spotify, so we could see what's being complained about that we might not see in our logs. We also kept track, or keep track, of our outstanding JIRA issues and request counts by internal requesting service or team.
So we grouped these similar metrics into buckets: we had usage, we had system health, we had business performance, and each of these buckets became a dashboard of its own. We also created a few processes around the questions and metrics that I mentioned earlier.
So, for example, we wanted to know if we were successful with an integration: we wanted to know if we had X amount of users within the first two weeks. And, this is true, this sort of goal can only be judged based on historical user acquisition numbers, so we definitely had some work to do beforehand. But this will also feed into our retrospectives, especially once a project is complete. Oh, we're gonna change mics.
Okay, all right, hopefully this works. Thank you. Alright, so moving on, we also had a few post-integration questions for business development folks to ask external partners on behalf of the squad itself. These questions include understanding our responsiveness, how our developer tools are, and whether their company goals were met. We may think that an integration was super successful, but they might have some insight that we do not.
So we've only been caring about metrics since the beginning of the year, embarrassingly, but this is certainly just the beginning for us, and it's allowed us some time to iterate and take a hard look at what we track and why. You can track everything that moves, but will you just get inundated? Really, you can count every leaf of every tree and the forest itself, but that gets kind of noisy, right? So it goes back to understanding your customers' expectations, and it essentially boils down to business value.
How can you maintain and improve upon the business value of your service and product? How does counting every single Facebook-connected user actually help better ourselves? So when implementing these various metrics, I came across some questions that I found really help in seeing the forest for the trees. When creating a new metric: how do your metrics actually map to business goals? For instance, will we lose money, or how much money will we lose, if the Facebook sign-up service isn't up?
How would you prioritize different goals? What is more important? Does it mean that you're going to neglect others, or allot time by priority? Is this brand new shiny integration project more important to pay attention to than the other ones that we have going on? That's fine if it is, but how are we going to prioritize our time? And then, how can we create dashboards that are actually actionable? What is the goal and, more importantly, how can we drive towards that goal?
Are we just going to say, "Oh look, our Facebook sign-up service is down," and then go to lunch? We actually have to do something, right? So when representing metrics: how do we correctly measure what we care about? We have all these tools set up to help us create gauges, meters, histograms, timers, whatever, but what's the best representation for this question or metric? We might actually have to break out our college stats book for this kind of thing.
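(To make that distinction concrete, here's a minimal, hand-rolled sketch, not tied to any particular library, and with invented metric names, of matching a representation to the question you're asking.)

```python
import time
from collections import Counter

class MetricCounter:
    """Monotonically increasing count: answers "how many?", e.g. sign-ups today."""
    def __init__(self):
        self.value = 0
    def incr(self, n=1):
        self.value += n

class Gauge:
    """Point-in-time value: answers "how many right now?", e.g. connected users."""
    def __init__(self):
        self.value = 0
    def set(self, value):
        self.value = value

class Timer:
    """Collects durations so you can look at percentiles, e.g. response times."""
    def __init__(self):
        self.samples_ms = []
    def time(self, fn, *args):
        start = time.time()
        result = fn(*args)
        self.samples_ms.append((time.time() - start) * 1000)
        return result
    def percentile(self, p):
        ordered = sorted(self.samples_ms)
        return ordered[int(len(ordered) * p / 100)] if ordered else None

signups = MetricCounter()   # counter
connected_users = Gauge()   # gauge
login_latency = Timer()     # timer / histogram-style
status_codes = Counter()    # distribution of response statuses
```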
But when actually consuming them: how often do you check in on your metrics dashboards? They're never looked at, right?
It's a common problem that they become sort of background noise. How do you make dashboards more visible, more in-your-face? Should someone be responsible every week for checking in on them? What I found, honestly, is that having a live stream of kittens rotated throughout the dashboard helps. Do you make them more visible by slapping them up on a TV or monitor? Are the metrics too sensitive to broadcast throughout the office with visitors coming in?
Perhaps you email snapshots, but will they be filtered away and not noticed, or, like mine, auto-archived along with any unread messages? Being a bit introspective: for the places where we don't reach 100 percent of our goals, that gap between the baseline and the goal line, we need to assess the difference. Why does it exist? Is it even solvable? When you look at the dashboard, what actions are you actually going to take?
Should you even create a dashboard if a goal or an alert is not set up and no action will be taken? The answer is probably not. And what about the unknowns? What is unknown? We know, say, that X amount of iOS users have connected their accounts to Uber, but we don't know how many don't use it because the driver has an Android phone or they're not aware of the service. How do we approach these known unknowns, and is it even worth it to approach them?
So my team, we're still gathering insights, deriving insights, trying to understand what makes sense for us. We've certainly added more metrics over time, and we've found ourselves more focused, at least for right now, on service reliability rather than business value or squad performance type metrics, but those will come with time as well. Our monitoring folks, the team that helps squads monitor their services, have been playing around with the idea of monitoring levels. The thought is to have certain policies and levels around types of metrics, much like logging levels. So, for instance: how long should you retain data that's critical versus just debugging metrics? Should debugging metrics stay locally on the machine, with all non-debug-level metrics sent away?
What levels of metrics should there be? Should it be like debug, info, error, critical, whatever? And then how do we educate all the developers to make sure that they don't abuse the debug-level metrics?
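(A rough sketch of the idea, my own illustration rather than how Spotify's monitoring team actually built it: metric levels could look a lot like logging levels, each with its own retention and routing policy. The levels, retention windows, and destinations below are all invented.)

```python
from dataclasses import dataclass
from enum import Enum

class MetricLevel(Enum):
    DEBUG = 10     # throwaway, developer-only metrics
    INFO = 20      # day-to-day usage and health metrics
    ERROR = 30     # failures worth alerting on
    CRITICAL = 40  # business-critical signals, e.g. the Facebook login flow

@dataclass
class MetricPolicy:
    retention_days: int
    destination: str  # "local" (stays on the machine) or "aggregated" (sent away)

POLICIES = {
    MetricLevel.DEBUG:    MetricPolicy(retention_days=1,   destination="local"),
    MetricLevel.INFO:     MetricPolicy(retention_days=30,  destination="aggregated"),
    MetricLevel.ERROR:    MetricPolicy(retention_days=90,  destination="aggregated"),
    MetricLevel.CRITICAL: MetricPolicy(retention_days=365, destination="aggregated"),
}

def route(metric_name: str, level: MetricLevel) -> str:
    policy = POLICIES[level]
    return f"{metric_name}: keep {policy.retention_days}d, send to {policy.destination}"
```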
And so as this idea develops and takes hold, it'll certainly come into play in what we think about our metrics, what we think is critical and important, and what we end up monitoring and storing. So, to bring it back to this slide:
The ultimate goal in answering these questions is to give us both a shortened decision-making cycle and more informed decisions about strategy and partnerships. It's super easy to get lost in the forest, and it doesn't help that it's really fun to play with all that visualization software. But in essence, we're placing current values in historical context in order to see patterns forming. How long, on average, does it take for the team to implement a new integration? Do our customers, or we ourselves, expect a shorter turnaround time?
Do we just wish to be able to appropriately estimate the time it takes to do such a project, or to know which internal team we should educate about rate-limiting against their services? And so the win here is these feedback loops, these thoughtfully implemented metrics. We can use goal lines and alerts to create a more efficient team.
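(For instance, a minimal sketch with made-up numbers, just to illustrate what a goal line plus an alert might look like in practice; the metric name, threshold, and notification step are assumptions.)

```python
GOAL_LINE = 0.995  # e.g. a target success rate for Facebook logins

def check_goal(metric_name: str, current_value: float, goal: float = GOAL_LINE) -> None:
    gap = goal - current_value
    if gap > 0:
        # In a real setup this would page someone or file a ticket, not just print.
        print(f"ALERT: {metric_name} at {current_value:.3f}, goal {goal:.3f} (gap {gap:.3f})")
    else:
        print(f"OK: {metric_name} is meeting its goal")

check_goal("facebook_login.success_rate", current_value=0.982)
```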
We will deliver higher-quality software because of it: we'll get immediate feedback on any bugs that we may introduce, any system that fails, and the like, and we'll be set up for better integration projects based on historical business performance trends. All right, so the answer to the question, should we track everything that moves? The very anti-climactic answer is: probably, but only if you can define a goal, you can define an action to take if you haven't met the goal, and you can actually pay attention to it. All right.