Description
CircleCI's VP of Platform, Mike Stahnke, will cover a view into anonymized team data from millions of DevOps workflows to share insights, behaviors, and metrics that help teams build better software faster. Attendees will leave with benchmarks to determine the performance of their own software delivery teams.
A: All right, so today I want to talk about DevOps from a different data set, and here's where this came from. Over the last several years I've worked on the State of DevOps reports, both coming out of Puppet and while I was at CircleCI, joining in as an author and looking at all the wonderful research and data that's been produced over the last ten years in those reports. When I got to CircleCI, one of the things I thought about very quickly was: wow, we have all this data about what people are actually doing with software delivery, and why are we not analyzing that? Why not overlay that type of analysis on top of the survey analysis, to get even more detail about what's going on in the world of software delivery and how our DevOps or CI/CD practices are helping us deliver software at higher value and at a higher rate? So I'm going to break this into three sections: the setup, which is what we're talking about and why we're talking about it;
the data, and what we actually learned from it; and then insights you can glean by putting some analysis on top of that data.

To begin with the setup: as I said, I've been working with the State of DevOps reports for several years, and I've authored them since 2018. In the early years, the things we were really interested in were: what is DevOps, do you understand it, is it helping, is it producing better business outcomes? Then it got to: how do you model your evolutionary journey through a digital transformation, or through DevOps? And then, what are the prescriptions, the things that are actually working for people? Because it moved from "does this work?" to "we think it works, but how do we get there?" That was the next thing to unpack.
I moved from Puppet to CircleCI a couple of years ago, and as I said, I got this giant data set of what people are doing on a platform built for continuous integration and continuous delivery.
There wasn't really a selection bias, because we just looked at all the data we had, and we also get to see the differences in how people actually work with software versus how they tell us they work. It doesn't mean they're lying when they tell us something different; it just means they might have a different perspective on the question, or on how we're asking it, or things like that.
There are 44,000 organizations in this data set. Most of the DevOps surveys done over the years with Puppet, DORA, and others have had about 3,000 participants; I think the max was about 4,000. So you're looking at something about 10x larger than the largest single survey, and it also has 160,000 projects to analyze.
These last few slides are basically to tell you: hey, there's a lot of data here, and it's going to be really fun to dig into. The second thing we wanted to ask was: what's changed year over year? I've been at CircleCI for about two years, so this is the second year we've done this type of analysis. The first year we took some baselines; what have we learned since then?
These were the main four metrics: deployment frequency, recovery from failures, lead time, and change failure rate. What we did was map those onto what they would be if you were using a CI/CD platform. So we started by mapping metrics: deployment frequency becomes how often you initiate a pipeline, which we call throughput; lead time to change becomes pipeline duration, which we just call duration; change failure rate becomes how often pipelines finish green, our success rate; and recovery from failure becomes how long a pipeline sits red, our recovery time.
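To make that mapping concrete, here is a minimal sketch, not from the talk, of how the first three platform metrics could be computed from pipeline run records. The record shape and numbers are made up for illustration; recovery time is sketched separately below, where the talk discusses it.

```python
from datetime import datetime
from statistics import mean

# Hypothetical run records; the real data set is richer, but these
# fields are enough to express throughput, duration, and success rate.
runs = [
    {"started": datetime(2020, 4, 1, 9, 0),   "duration_s": 240, "green": True},
    {"started": datetime(2020, 4, 1, 13, 30), "duration_s": 610, "green": False},
    {"started": datetime(2020, 4, 2, 10, 5),  "duration_s": 300, "green": True},
]

# Throughput: pipelines initiated per day (maps to deployment frequency).
days = {r["started"].date() for r in runs}
throughput = len(runs) / len(days)

# Duration: how long a pipeline takes to report back (maps to lead time).
avg_duration = mean(r["duration_s"] for r in runs)

# Success rate: share of runs finishing green (inverse of change failure rate).
success_rate = sum(r["green"] for r in runs) / len(runs)

print(f"{throughput:.1f} runs/day, {avg_duration:.0f}s avg, {success_rate:.0%} green")
```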
So now let's dig into the data. We'll start with throughput: how often do you push code that triggers CI? Most projects are configured to run per push on a git server, and when I say per push, that could be many commits or one commit; it could be on merge request or on pull request completion, whatever. But basically, as the remote gets new content, it runs. And at the different percentile bands we can see how much throughput is going on.
This is how many times per day (sorry, I want to make sure I get that right: how many times per day) code is being pushed through. You can see that some projects are being pushed not even quite once per day.
But if you want to know what 90 percent of our customers are doing, the 90th percentile, you're seeing 16 pushes per day. What this is really telling us is that a lot of people have a lot of repos they don't use all that often, which I don't think is surprising if you think about it from a software development standpoint: some repos are very rarely used, and some are used heavily.
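As an aside on reading those bands: here is a small sketch, with made-up numbers rather than the talk's data, of how per-project throughput percentiles like the median and the 90th might be computed.

```python
import numpy as np

rng = np.random.default_rng(0)
# Made-up per-project daily push counts, skewed the way the talk
# describes: many rarely-touched repos and a few very busy ones.
pushes_per_day = rng.lognormal(mean=0.5, sigma=1.2, size=10_000)

p50, p90, p95 = np.percentile(pushes_per_day, [50, 90, 95])
print(f"median={p50:.1f}/day  p90={p90:.1f}/day  p95={p95:.1f}/day")
```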
You saw that in the classic State of DevOps report metrics: we focused a lot on how many times you're deploying. Well, how many times are you initiating a pipeline, how many times are you doing these things? And what we see is that people are not doing this dozens and dozens of times per day on average; the average is 8.22 times per day. So a lot of people are not deploying dozens of times per day, and you think: well, why?
Why would these surveys tell me something different than what the actual data on the back end is telling me? I think the reason is that the surveys ask about the primary application or service you work on, and if you break down all of the projects within your organization, that primary service is usually by far the busiest one.
Generally pushes happen a little more often in 2020 versus 2019, but one thing you'll see is that the mean actually went up quite a bit, which is exciting: it means the lower bounds are about the same, but the upper bounds are bigger than they've ever been, which is pulling up the average. Some teams are just pushing code more and more often.
We definitely saw labor cuts in parts of 2020, and we can talk about that later as we get into further analysis, but you'll see there was definitely some impact on how much traffic was coming onto the platform and how many times people were pushing code, specifically through the early months of quarantine.

Next is duration: how long does it take to get results? One thing about duration that's really interesting:
five percent of our builds finish in less than 12 seconds, which in this sample is about 500,000 builds; that's just to give you a sense of the sample size. In 12 seconds you probably can't do a lot, so you have to ask what that 12 seconds is worth. Well, it could be a step in a larger chain of pipelines and workflows. Maybe it's just putting an artifact on S3, or copying something out
to somebody else, or updating documentation; it could be things like that. It could be that you have a very, very fast test suite, or that you have very low test coverage, so it's hard to really say. But what we do see is that about half the projects are done in under four minutes, which means they're running either very fast tests or minimal coverage, or a combination of both.
We do see a lot of people doing serious test engineering to bring those times down. We'll see teams that have an SLA or an SLO on how long it takes to get results, and once it gets above a certain threshold, they pull the rip cord and let the engineers go back and do test engineering to make it shorter again, because the value of that cycle time is just so high. I've seen teams with 10-minute limits.
I've seen some with a six-minute limit. I've seen others that are very, very thrilled to have a 50-minute limit, because if they can get results in under an hour they're delighted, given how complicated and thorough some of their test suites are. And you can see that the average project finishes in about 24 and a half minutes.
That could give you a lot of signal, depending on the complexity of what you're running, maybe integration and system tests and UAT and all those types of things. It's hard to say exactly what's going on in each of these pipelines, but I think we can glean that people finishing in 12 seconds are probably not doing the same level of complex testing as people finishing in 25 minutes.
But half of all builds do finish in under four minutes. Another thing you can do, because you can chain different pipelines together, is have unit tests that run very often and only run system tests once unit tests pass, or on a schedule: when N commits have come through, or when it's been four hours since the last run, something like that, as sketched below.
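Here is a minimal sketch of that kind of gating policy; the function, names, and thresholds are illustrative assumptions, not a CircleCI feature.

```python
from datetime import datetime, timedelta

# Illustrative policy: system tests run only when unit tests are green
# and either enough commits have accumulated or enough time has passed.
COMMIT_BATCH = 5               # assumed batch size
MAX_WAIT = timedelta(hours=4)  # the "four hours" cadence from the talk

def should_run_system_tests(unit_tests_green: bool,
                            commits_since_last_run: int,
                            last_run: datetime,
                            now: datetime) -> bool:
    if not unit_tests_green:
        return False
    return (commits_since_last_run >= COMMIT_BATCH
            or now - last_run >= MAX_WAIT)

# Only two commits, but more than four hours since the last run:
print(should_run_system_tests(True, 2,
                              datetime(2020, 4, 1, 8, 0),
                              datetime(2020, 4, 1, 13, 0)))  # True
```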
So there could be some differences that way as well. Comparing 2019 to 2020, there are not a lot of significant differences here, but the one thing I thought was really cool is that the run time on average has gone down. If you remember, from the first metric we saw there were more initiations of pipelines, and we're also seeing that they're running shorter. So people basically engineered a faster throughput cycle and a faster feedback cycle in 2020.
Next, success rate: how often does your pipeline complete with a green status? That's really the question we're asking. In 2020, five percent of projects never, ever complete green. That's probably people who set up CI on an experimental project they were working on, got distracted, and never got back to it, so it just stayed there and never had a green build.
It could be there are some tests that are super difficult to work with, so they never had a green build. It could be that they write terrible code; I doubt that one, but it could be. And then you can see that above the 90th percentile, projects never have a red build. There are certain workflows where that makes sense; again, if you're uploading an artifact to S3, a lot of times those are shell scripts, and a lot of people
don't code very defensively in shell, so in effect the script may exit zero whether it worked or not. In other cases, it could be that people have very good coverage and do everything locally on the laptop: they make sure it's going to be green before they push it up to the CI system, and that's when they get the feedback that says yes, everything's fine. But you can see that, on average, about 54 percent of builds pass with a green status.
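The exit-zero point is worth making concrete. CI systems generally mark a step green when it exits 0, so a step that ignores the results of the commands it runs can look green even when the work failed. This sketch uses Python as a stand-in for such a step; the file names are made up.

```python
import subprocess
import sys

def sloppy_step() -> int:
    # Like a shell script without `set -e`: the copy's exit code is
    # ignored, so the step reports success no matter what happened.
    subprocess.run(["cp", "artifact.tgz", "/releases/"])
    return 0

def defensive_step() -> int:
    # Propagate the real exit code so a failed copy turns the build red.
    result = subprocess.run(["cp", "artifact.tgz", "/releases/"])
    return result.returncode

sys.exit(defensive_step())
```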
We also saw some projects with no samples or no failures, which I just covered. So, almost no difference between 2019 and 2020; I think the only difference is at the 50th percentile. We dug in a little deeper to see if there were any other meaningful differences, and again it's very small, but you can see the all-green rate is a little lower at the 85th percentile, whereas the 85th percentile last year was still 98 percent green. I don't know that that's meaningful, but we wanted to look and see if there was anything different at all between 2019 and 2020.
In terms of recovery time: this is the time a pipeline sits in a failed state. If a build goes red, what happens? The classic agile mentality is that if something's red, you stop the floor, you stop the factory, and you go fix it.
Well, five percent of pipelines stay red for only about two minutes, and this could be a case of somebody submitting a build and, immediately after, submitting another build because they realized something was broken. It could also be two different developers: one person submitted a broken build and somebody else submitted a green one right after, and that registers as a red-to-green transition.
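Measured this way, recovery time is just the gap between the first red run and the next green run on a project, regardless of who pushed. A minimal sketch of that scan, with made-up records:

```python
from datetime import datetime

# (timestamp, green?) pairs for one project, in any order; made-up data.
runs = [
    (datetime(2020, 4, 1, 9, 0), True),
    (datetime(2020, 4, 1, 11, 0), False),   # goes red here
    (datetime(2020, 4, 1, 11, 55), True),   # back to green: 55 minutes
]

recoveries = []
red_since = None
for when, green in sorted(runs):
    if not green and red_since is None:
        red_since = when          # first failure of a red streak
    elif green and red_since is not None:
        recoveries.append(when - red_since)
        red_since = None          # streak recovered

print([str(d) for d in recoveries])  # ['0:55:00']
```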
But what we can see is that in the middle, at the median, 55 minutes is how long people take to go from red to green. So it could be: okay, I got the notification that my build was broken; it took me a little bit to identify that the build was broken, read the logs, figure out what happened, create a fix, do a local validation, and submit the fix back, and that took 55 minutes. That's really not that bad
when you think about all the activity that has to happen there; that's pretty prompt. Then at the 90th percentile you're at 39 hours, and that's just things sitting in a red state for a long, long time, and then of course you get into days. But with the mean being around 14 or 15 hours, the thing I found interesting is that it tells me that in a lot of cases people are submitting a build,
it's going red, and they're going home and working it out the next morning: going home, closing the lid on the laptop, turning off the computer. Fifteen hours is about the end of one workday to the start of the next, excluding weekends of course. So it seems a lot of people are not necessarily following the mantra of "I can't leave with a broken build."
A quick recovery time can also come from multiple contributors running in parallel: as I said, if developer A submits a bad build and developer B submits a good one right after, that's going to look like a quick recovery-time measurement in our data set. And the gap between the 50th and the 75th percentile is really this gap between fixing things as you work throughout the day and waiting until the next day; that's where it shows up.
So recovery times have definitely improved year over year. Now, a lot of that was just me raining slides down on you and showing you a bunch of data, and that might not be the most interesting thing on its own. So now the question is: what can you glean from that data? What's the takeaway?
2020 was complicated for all of us in our own ways, and the pandemic was something we had never experienced before. We had also never had anything in software delivery that we could measure and say: here's the impact of a global event, and here's what we can show for it. And we do have a little bit of that, which is interesting. So, for throughput, for those who can't see the bottom of the chart:
this is March right here, and this line right here, I think, is April, and then out here you get into the June, July, August time frame. This data set ended at the end of August, but you can see that in the pandemic there's basically a curve for almost everything, and you'll see this throughout the rest of the data sets as well.
On throughput, things start to go up, even at the 95th percentile and even at the lower percentiles. Basically, people were kicking off more builds, and this could be developers back at their laptops instead of at conferences; it could be a lot of people focusing on the core of their business and making sure they had stability built in for their core systems.
But peak throughput in this data set, which runs through August, was in April of 2020. (Past the end of this data set, I actually think throughput has been higher this January, but that's a whole other topic.) So peak throughput was April of 2020, and I think a lot of that was: wow, people are not traveling, they're not really going on vacation, so you're getting very close to a 100-percent-capacity workforce working in software, and people were just doing a lot more with their software delivery practices and using CI and CD.
After April, throughput falls off a bit. That could be from mental taxation; it could be people realizing, hey, I'm still going to take some time off even if I'm not going anywhere; I'm going to stay at home and clean the basement, or whatever the activities are that are fulfilling for people who can't leave their house. But we do see it fall off a bit. And then duration:
people were thinking, "I want to make sure the work I'm doing is very stable and very business-ready," so that's something they could have been working on. But then you also start to see duration fall, and I think, as people got dissatisfied with the duration of their pipeline runs, they said: oh, let's just spend some engineering time on this and make it shorter.
Durations stabilized for a while and then increased again to the longest duration in August, because, again, people are adding more tests. Basically what happens is there's kind of a camel hump over time: you add more tests, you get more capability, but it takes longer; then you do some re-engineering and you're back down to a low point; you rest on your laurels,
you don't do that engineering for a while, and then you repeat the cycle all over again. So the hypothesis is definitely more tests written in March driving up duration in April, and then we saw a concentrated effort on optimization. What was interesting was that we saw it across the board. Those graphs don't look like they moved in super significant ways, but when you think about that across 44,000 organizations, it's a pretty big move.
The success rate overall, I think, has an interesting narrative, in that success rate goes up as everybody returns home for lockdown from being out around the globe. Maybe people are paying more attention to their builds, or working harder on them, or they've thrown themselves into work to distract themselves from everything else going on. The exception is the very bottom percentile, where it kind of falls off, and I think a lot of that might have been
people saying "I'll deal with it when I get back" or "I don't care," or how many times people look at the CI system and say: yes, it's red, but it's fine, because that one test isn't meaningful, because it's flaky and they haven't optimized it out or reworked that test yet. So, at the lower end,
I think we can glean that that's a lot of different behavior compared with up at the top. Up there it's probably more the business-critical parts of the software delivery system: your main application, your core business features, your business logic. Down at the bottom it could be a lot of developer tooling, or even things like side projects; for me, I like to write chat bots, so it could be something like that.
We were all in our houses, working remotely, not in offices; for some companies that was a giant transition, and for others it was minimal. But the overall data set shows that success rates went up quite a bit, and so the hypothesis was people working hard on core business stability during those March and April time frames. It could also be that a lot of other experiments, things people had hoped to launch throughout 2020, kind of got derailed, because there was a whole new focus for the year.
Since April, recovery time has been improving, which means (let's see, this is golf, not bowling: you want to be lower) we're seeing people do a better job of monitoring their builds and recovering faster, saying: hey, this is red, what's going on, I'm going to fix this, and being more successful with that. And the orgs with the longest recovery times are really the ones that have improved the most, if it used to take you, say, three or four days to recover a build.
So the hypothesis there is that we have fewer distractions with everybody working at home, so people are more able to pay attention to those build notifications. And of course, when we say fewer distractions, that's for some values of distraction. Everybody has their own situation they've had to deal with or been dealing with, whether that's not having a good place to work at home, or having children or other loved ones you're taking care of or living with, or the people sharing your bubble for lockdown.
So there could be lots of distractions, but fewer distractions overall might mean you're not taking as many walks to go get coffee, or spending time down in the cafeteria with your teammates, or sitting in meetings away from your computer. Even if you're attending a meeting virtually, you may still see a build notification and be able to act on it while you're in that meeting, versus being in a different room.

One of the other things we looked at through all of this was: what can you glean about branch information from a data set like this? The reason I asked about branch information was that during the summer months of 2020,
we had a lot of social unrest around Black Lives Matter and, of course, the killing of George Floyd, and there was a renewed push, from an earlier conversation that had happened in technology, around: can we get rid of this word "master"? It carries a lot of baggage for people, and it's not necessarily the thing we need to talk about.
It's not even the most correct word in a lot of cases for what we're using. GitHub came out and said: hey, we're going to make it so that we're not using "master" anymore, and we're going to use "main," and I think a lot of other companies were saying: yeah, we can do something here.
This is something we can do to show that we care, because we do, and we want to move from using the word "master," which is harmful language, to something as pinpoint-accurate as "main," which I think is actually the more accurate word in a lot of cases. So we wanted to look at that: did the use of "master" really decrease? That was really the question, and the answer is no.
It didn't, not in any significant way; but I'd say "not yet." As of this data set closing out at the end of August or early September, GitHub had not changed the defaults on everything yet. For some new projects I think "main" was the default branch, but they weren't yet helping you retrofit existing ones.
They didn't have all the tools out then; they do have different tools available now that make it a little easier to make sure your default branch is not named "master." So this is something we'll continue to monitor, but I was interested in the conversation. It started years ago with a lot of clustering software that had master/slave terminology, where they wanted to take all of that away and use different terms.
Now, success rate on the default branch is significantly higher than on non-default branches. And if you think about it, if your default branch is what everybody's using, and you have everybody collaborating around that branch, you want it to be in a stable, safe state, because that's the thing everybody else is relying upon.
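That default-versus-topic split is a simple grouping over build records. A sketch with made-up records; the branch names are just examples, and real projects can name their default branch anything:

```python
# Made-up build records; "green" is whether the pipeline passed.
builds = [
    {"branch": "main", "green": True},
    {"branch": "main", "green": True},
    {"branch": "feature/login", "green": False},
    {"branch": "feature/login", "green": True},
]

DEFAULT_BRANCHES = {"main", "master"}  # assumed per-project defaults

def green_rate(records):
    return sum(r["green"] for r in records) / len(records)

default = [b for b in builds if b["branch"] in DEFAULT_BRANCHES]
topic = [b for b in builds if b["branch"] not in DEFAULT_BRANCHES]
print(f"default: {green_rate(default):.0%}, topic: {green_rate(topic):.0%}")
```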
One of the things this also tells us is that people who say they want to do trunk-based development are not all developing and pushing directly to trunk or main. What they are doing is still using topic branches, but keeping them relatively short-lived, which I think takes the best of both worlds: you want to continuously integrate onto your main branch so that you don't end up with these giant, painful merges.
The duration on default branches is faster than on topic branches, and generally that's because people have probably engineered and streamlined them, because, again, everybody relies on the main branch more than they do on the topic branches. Topic branches sometimes take longer because they have new tests, or they're breaking all the time, or you've put in some sleep statements because you're trying to debug a race condition or something like that.
There are a lot of reasons topic branches can be slower, but they are slower at every single percentile than the main branch, or at best even. Recovery times are lower on the default branch as well, and we imagine this is because more people are watching, and care about, the signal from the default branch than from some of the topic branches.
And when I say topic branches, those could also include release branches: if you have version one of your software, version two, version three, those could all be different branches in a tree that never get merged into each other, because they're different levels of the software. But the main branch is definitely the one with the lowest recovery time, and we imagine that's because it's the one most people are watching and care about; it probably has the biggest impact on overall software delivery.
So from this branch information we can glean which development practices definitely work. And then, from the data set overall: success rate does not correlate with company size. We don't see that large companies have many more green builds than smaller companies, and different team sizes don't seem to have a big impact either; it doesn't seem to be very meaningful in most cases.
If a build actually takes you that long, you're probably doing great, because you've written at least 15 minutes' worth of tests; but if no one else is relying on that signal, you're probably not optimizing it. And I guess that's one of the things you'll see throughout this: recovery time decreases with increased team size. The more people you have looking at the same content, the faster your recovery time will be, and that makes sense when you think about the impact. If you have ten people all looking at the same mainline branch, and that's the thing they care about,
then if it goes red, there are ten people looking at it and trying to improve it, and if you have a hundred, you have a hundred people trying to improve it. Not a lot of projects have a hundred or two hundred contributors; there are some that definitely work in more of a monorepo style, or it could be the main project for the business or a major open-source project. Those, I guess,
would be other ones where you could see things go red and then get recovered very quickly. So more eyes on it basically makes recovery happen more quickly, which I don't think is too surprising. And the longest recovery times also come from teams of one, and I'm just as guilty of contributing to this data set on that point, because, like I said earlier, sometimes I'll write a little chatbot thing, write
some tests, it'll fail, and I won't get back to it for four weeks, so it'll sit there and just be red for four weeks, or three and a half weeks, or whatever. I'm sure a lot of people do that: they run some tests, think "oh, this isn't that important," and move on to something else in their day.
So on pretty much every indicator, performance is better, or you're getting better outcomes by the way we measure, if you have more than one contributor. What that leads me to is that there's a collaboration element here: if somebody else is impacted by your work, you're going to be better at that work, from a recovery-time standpoint, from a throughput standpoint, from a duration standpoint. It comes down to how much you're using the integration.
One of the other questions I had was: is "don't deploy on Friday" a real thing, and what can we glean out of that? So we looked into this, and we see there's about 70 percent less throughput, initiations of builds, on weekends. Okay, that's a good starting point. There's 11 percent less throughput on Fridays, and we
do everything in UTC, because it's the only thing that makes sense when you have global users. Friday UTC is Thursday night in the US through late midday Friday in the US, and that's about 11 percent less. Okay, that's interesting. But then if you compare that to Monday: Monday is about nine percent less, and that's because, again with UTC, the US is not fully online for all of it, but you do have most of Asia online, and so on.
But when you look at a nine percent difference versus an 11 percent difference between Monday and Friday, and you try to control for weekend traffic, you see there's not really a significant difference between the work going on on Mondays and on Fridays. From this we can hypothesize that people are not really slowing down on how many pushes or how many deployments they're doing on Fridays.
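The mechanics of that comparison are simple once every timestamp is normalized to UTC: bucket builds by weekday, then compare each day against the weekday average. A sketch with made-up timestamps:

```python
from collections import Counter
from datetime import datetime, timezone

# Made-up build start times, already in UTC as the talk describes.
builds = [
    datetime(2020, 4, 3, 15, 0, tzinfo=timezone.utc),   # Friday
    datetime(2020, 4, 6, 9, 0, tzinfo=timezone.utc),    # Monday
    datetime(2020, 4, 6, 17, 0, tzinfo=timezone.utc),   # Monday
    datetime(2020, 4, 7, 11, 0, tzinfo=timezone.utc),   # Tuesday
]

per_day = Counter(b.strftime("%A") for b in builds)
weekdays = ("Monday", "Tuesday", "Wednesday", "Thursday", "Friday")
weekday_mean = sum(per_day[d] for d in weekdays) / len(weekdays)

for day in ("Monday", "Friday"):
    print(f"{day}: {per_day[day]} builds, "
          f"{per_day[day] / weekday_mean:.0%} of the weekday mean")
```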
So they're not really holding back, and I think that's really neat: people doing more advanced software delivery at this point are not as scared of "I'm not going to do this work because it's a Friday and I'm afraid of it."
That was an interesting data point. And then the last thing we asked about was: what language trends do we see on the platform overall? This was just the sample by percentage, and once you get below about half a percent, there are just hundreds and hundreds of languages. But for builds overall, you see JavaScript and Python making up a big proportion of our builds.
I came from a Ruby background, and I felt like testing was really ingrained in Ruby from the ground up. As soon as I learned it, there were always these folders that had tests, and they had test templates that people just filled out, which was one of the coolest things ever, so people got very used to running tests a lot in that community.
Some of these other communities have definitely adopted that over time, but Ruby was the first one I was really exposed to. One of the things you'll see is that people are kicking off builds quite a bit, but then for the things that take a lot longer, like Docker builds or C++,
maybe they're kicking those off less often. On success rates: overall, you're getting very high success rates in things that don't feel like quite traditional programming to me. Like I said earlier about shell, in a lot of cases, if you don't code it very defensively, it will exit zero even if something bad happened, if you're not doing "set -e" or other types of error handling. And with CSS, I don't actually know how you test CSS directly.
Traditional programming languages pick up a little later: you have Go and JavaScript and TypeScript, and then you get all your statically typed, strongly typed languages down at the bottom, where the success rate is a little bit lower. That kind of makes sense: in those you can get type errors, compile errors, other kinds of errors, so that may contribute to the success rate. Recovery time: I found this one fascinating, in that there were
a lot of languages I would consider pretty modern, pretty dynamic, but Go just takes the cake at the top for recovery time. Apparently people writing Go care a whole lot about their pipelines and whether or not they're green, so if they're red, they flip them back over to green pretty quickly. I thought that was really neat, and it was quite different from last year.
If I recall, last year JavaScript was number one and Go was in something like the 15th or 16th spot. So apparently people who adopted Go in 2020 have really paid attention to how their CI builds are going. And then duration: this is not that surprising to me. The things that are slower are the strongly typed, C++- and Swift-type languages, and the very fast things could be shell, which, again, could mean the entire workflow is copying a file from one
S3 bucket to another, something like that, with the HashiCorp configuration language also being pretty high up; a lot of the things near the top sound like they would run fast if I think about them. Then you get to a bunch of other languages where the duration slows down. So, from all of that, we had the data. Why did we do this, what got us here, what was the impetus to look at all this data?
It was things like the State of DevOps metrics. When you think about deploy frequency and restore time and things like that: if you're just average at using CI, and you're basically applying it regularly, you're going to show up somewhere between medium and high. If you're really good at the CI stuff, you're going to show up in the high group, but just being okay at it gets you a pretty good result.
Maybe you spend a little more concentrated time on test duration, or test throughput, and on cycle time, how often you're kicking off builds and how fast you're recovering, and you can move right into the high-performing group. I don't think it's quite as tall an order as it once seemed when you looked at classic software delivery and DevOps measurement metrics. So this is basically saying that exact thing: if you're average at this, you're going to be running the line between medium and high. Our most frequent users of CI have better outcomes on all four metrics, so if you're just kicking off builds more, you're actually probably better at integrating your pipelines overall, and that could be because you watch the recovery time,
you optimize your throughput, you optimize your tests, you optimize for feedback for other users. One of the main things is: the more you're leaning into this, the better you're going to do. The metrics were not designed so that the more you do this,
the better you should score at them; that's just how they correlate. And then, more collaborators means better outcomes. I think this one was my favorite, just because I'm really a people person and I like working with other people. The more people you're working with on your team, the better your outcomes are going to be, and that could be because you all hold each other accountable for keeping pipelines green, keeping durations low, and keeping fast feedback cycles for your software.
So I guess one of the things I would say is: if you're having trouble doing integration well, go find a partner and work with them, and invite another one, and another one, and that could be for any type of software delivery. I guess the last thing is that I'm at CircleCI and we're hiring. So if you're interested in learning more about this, working with this data set, or building the platform that helps people create software all day, every day, you're more than welcome to apply.
B: Great, thank you so much, Michael. We're going to open it up for questions. Since we're a small group, I'm going to unmute folks, so you're welcome to start coming off mute and talking; or, if you're shy, you can also just type in your question and I can moderate.
B: Okay, Michael, I won't take any more of your time. Thank you so much. I will post this also on the CDF's YouTube channel, under the CDF webcast playlist.