From YouTube: CDF Member Webcast Presents: Continuous Delivery Insights - Common Challenges in Continuous Delivery

Description

Speaker: Ravi Lachhman from Harness.io

A major goal of the Continuous Delivery Foundation is to drive education and adoption of Continuous Delivery practices. Recently at Harness, we concluded research spanning a year, interviewing over 100 firms about their Continuous Delivery maturity. We analyzed and aggregated the results and uncovered common challenges that many organizations face. In this webinar, we will go over those common challenges and approaches to overcome them, while recognizing that Continuous Delivery is certainly a journey. For example, only four percent of the organizations we interviewed leveraged a canary-based approach. Not sure what a canary-based approach is? Tune in!
...just where they see continuous delivery — what are they doing? So I'll go ahead and get started.

So, thank you, everybody, for coming to the Continuous Delivery Foundation webinar. I'm Ravi Lachhman, and today I'll be talking about Continuous Delivery Insights: common challenges of continuous delivery. I'm also part of the CDF speakers bureau, always excited to talk about all sorts of things, continuous-delivery related or unrelated. If you have any questions, feel free to type them in.
If you have any thoughts, I'd love to talk — I find myself talking to people all the time. A little bit about myself: I'm Ravi Lachhman; most of my background has been in distributed systems. I like to build and break things, and I've had several high-profile incidents — I was getting outages before there was a blameless culture, so I definitely got blamed for a lot of stuff. Currently I work for Harness, on their evangelism team.
Prior to that I worked for AppDynamics, and my career has taken me through several other shops, including IBM. With all of this experience, I've gotten to see how a lot of people deploy software, and some of the challenges that we see. Let me spend one minute on who Harness is — that's the canary in the middle here. Harness is a continuous-delivery-as-a-service platform, and so we help orchestrate a few things. We help orchestrate your releases.
We help orchestrate your confidence-building exercises, and we help orchestrate your approvals. We do all of this as a service. But let's go ahead and get started. Part of what we do at Harness, when we're talking to potential prospects or customers or anybody in between, is a capability assessment: we take a look at the maturity model people have around their continuous delivery. Now, as with any maturity model, it's always very subjective.
This is where I'm kind of going off script a little bit. You don't know how much of it is actual users benchmarking against other people versus the pundits. If you read Lean Enterprise, or some other benchmarking type of study, a lot of times you have to place yourself where the authors of those particular books are: they have a corpus of interviews, so they have a range to look at — this is the low part of the range, this is the high part of the range — because you're coming in really objectively to that type of conversation, saying, okay, this is what you have. So when we typically interview customers, we take this tack: there's no right or wrong answer; everybody does things a little bit differently.
We don't have to go beyond The Phoenix Project — that book gives me nightmares; it might be a little too on the nose. You also try not to lean on The Unicorn Project, right — those two pieces of literature are fictional, but they hit home really closely for folks. And so what we typically run is something called a CDCA — a continuous delivery capability assessment. We take a look at a prospect's or customer's current process, we look at common tools, and then we just interview them on level of effort.
A
So,
if
we're
not
doing
them
on,
like
oh
you're,
only
deploying
once
a
day,
you're
deploying
five
times
a
day,
if
you're,
not
you
deploy
once
every
couple
months
and
I'll
share
those
findings,
but
what
we've
seen
over
a
hundred
organizations
but
we're
really
having
to
build
this
capability
assessment
so
making
sure
that
paint?
This
is
a
set
of
four
piece
of
data
that
we
have.
So
we
will
wrap
this
up
in
a
book
actually
so
you're
getting
the
abridged
version
of
the
book.
It's called Continuous Delivery Insights 2020. We have a blog post on the CDF site going through a handful of the findings, and at the end of the webinar I'll provide links, so feel free to take a look. The findings we'll talk about today are the level-of-effort type of things; in Continuous Delivery Insights 2020 there's also more tooling-centric stuff — like, this many people were using Jenkins for CI, this many people...
...we talked to used some sort of ticketing or work-management type of application when going through the process — but I'll spare you those details a bit. So, the very first question. If you were totally green to continuous delivery, you might actually do a Google search like this. You could have a copy of Lean Enterprise in one hand, flip through it, and then go to Google to make sure that we're doing it correctly.
It's certainly an outlier — the amount of time that people take to deploy. But I always like "fear of deploying" versus fear of missing out — if you care about FOMO: if you're not doing it like the best in the industry, you're not there; the pundits told you that you need to be doing more, and you're not there. Hey, don't worry — you're not alone. In fact, the bulk of people deploy a lot less than this. And going back — if you're flipping through, this actually is an image from Lean Enterprise.
A
Again
sorry
jazz,
like
this
is
an
image
from
the
lead
enterprise,
and
so,
if
you,
if
you
were
to
take
a
look
at
this
know,
clearly
like
as
a
person,
you
might
want
to
be
all
the
way
to
the
right.
You
know
what
we
want
to
make
sure
that
we're
enabled
to
the
right
so
for
us
to
get
to
100
releases
a
day.
That's
still
less
than
the
area
level
point
seven
seconds.
You
know
we
need.
We
need
some
sort
of
get
option,
pull
request
face
development.
We also need particular ways of packaging, particular ways of having discoverability, some sort of release strategy such as a blue-green or canary, and then also the ability to self-service. These are all great aspirational goals to get to, but realistically, for the folks on the phone: you know your enterprise the best, you know your organization the best, and sometimes the paradigm doesn't really meet your requirements.
So, okay, first question: how much did we see organizations deploy? Well, if the picture didn't give it away — out of a hundred organizations, we saw the bulk of organizations deploy every two weeks. Now listen, this is to a production-based system.
In my background, way back in the day, I used to work for an investment bank, and even before that, product companies. We would typically build and deploy to a QA or a lower environment — like a dev environment — every day, but we would rarely go into production, unless there was an emergency: a break-fix, or some sort of security vulnerability or emergency scenario, which would bypass our change-management process.
But typically we've seen every two weeks — bi-weekly releases are what the customers we talked to tend to use, at least at about the 50th percentile, if I'm not mistaken. And depending on who we talked to, some of the data might be a little bit more aggressive, because by the time we're talking to an organization, they already have an interest in doing continuous delivery.
Oh, you know what — the application takes two and a half minutes to start, and then to package something and get it to be part of the cluster takes another couple of minutes. There are a lot of steps required to lead you up to a deployment. So when you're looking at how hard it is to deploy — we call this toil.
There are multiple people involved. Taking it from the top going down: there are folks who have to give approvals — you have to get approvals. Let's say several of us here are on the same team; most likely each one of us would have to touch the release.
We'd each have different roles: a release engineer, a DevOps engineer, people on the dev team, maybe an SRE-type position — each one of us would have to touch it. And we see that the average amount of time, in totality, from the time you said "okay, this is ready to deploy" as a developer, to the time it actually gets deployed, is two and a half days — pardon me.
You know — times that by seven for this poor dog here to go through. From the time I say, okay, let's do it — say it's Monday and I say let's do it — it'll be Wednesday before it actually gets deployed, because again: the number of resources you have to get together, the amount of testing that we have to do. And so it starts to be very expensive. If you're toiling into that bi-weekly deployment, you're typically spending, we figure, about a month of just toil: total preparation...
...getting the team together, lining up all the tickets, making sure that people agree, making sure the test suites run — each one of those really takes a long time. And going back to it again — total effort to get to prod: if it's Monday, we're gonna be done on — our camel says hump day here — it'll be Wednesday by the time you get to deploy. Now, usually what ends up happening...
...it's not sequential, right? Given, there's Marcus being on the call — we get a multiplier effect, because one hour of total elapsed time is actually four hours for us. There are four of us — let's say five, with Jacqueline — so it's five engineers making it quicker. But again, in total, in pure hours, it'll be about 20 to 25 hours of pure work; then divide that by the five of us. We might be able to get everything done in a day, but it's still a dedication of us all running on the same effort.
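To make the multiplier concrete, here is a minimal sketch. The team size and the 20-25 pure-work hours are the figures from the talk; the variable names and the midpoint choice are mine:

```python
# Rough toil math from the talk: every elapsed hour of a deployment
# ceremony costs one hour from each person who has to be present.
TEAM_SIZE = 5            # release eng, DevOps eng, devs, SRE-type role
PURE_WORK_HOURS = 22.5   # midpoint of the 20-25 hours quoted

# Person-hours burned per clock hour when everyone has to be present.
person_hours_per_elapsed_hour = TEAM_SIZE

# If the work parallelized perfectly, the wall-clock time would be:
wall_time_if_fully_parallel = PURE_WORK_HOURS / TEAM_SIZE

print(person_hours_per_elapsed_hour)   # 5 person-hours per clock hour
print(wall_time_if_fully_parallel)     # 4.5 hours -> "done in a day"
```

This is why "we might be able to get everything done in a day" still costs the better part of a person-week.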
And so, one of the core pillars, let's say, of continuous delivery — you might hear this term, a canary deployment. For those unfamiliar with the canary deployment, I'm using a canary and a chunky otter here.
This is my version of the canary deployment: the users are tacos, the load balancers are the traffic lights, and basically, with the canary-based approach — let's say the canary is version 1 of an application and the chunky otter is version 1.1 — at some point you're making a switch. So it's a safety mechanism: you're able to build confidence a little quicker. And with that you're able to say, hey — in theory, this seems simple, right?
Like, oh yeah, we have one version, we swap in the other version, and keep swapping the versions until the canary — version 1.1 — has completed the entire transition. But there is just a lot of difficulty.
over
the
hundred
four
hundred
customers
and
prospects.
We
talk
to
you,
ironically
only
four
percent
of
them
leverage
a
canary
deployment,
at
least
somewhere
in
the
organization.
So
again,
I
might
be
limited
to
who
talking
to
a
specific
team.
In some organizations we might be talking to a DevOps or platform team, and so they might have a wider purview. But if you add up all the teams we talked to, only about four percent of folks said yes. So let's talk about why that's the case. Going back to the canary here: we'll replace — let's say my service is the otter here — with the canary, there's something that has to happen. You have to make a decision, and that is a go/no-go, or judgment-call, decision.
You're deciding if you're able to promote something, or deciding if you're able to pull something back. So with the otter: are you gonna promote something, or conversely roll back? That's actually very difficult — very difficult — because you're actually doing another deployment. Open-ended question for those on the phone or on the webinar: usually, when you're deciding to do a canary, you also have to clearly bake in a...
...rollback scenario. Up until recently, we might have talked about rollbacks like: okay, what's your back-out strategy, how do we go about undoing a change — but we've never actually tested rolling back. You have to, right? That's always the funny part: as quintessential as having a rollback might be, you actually don't test it — and so you're actually banking on a rollback potentially happening with the canary deployment. That's part of the challenge!
Also, how do you end up promoting something? What are your success metrics? People would bicker over that all the time. If it's me, do I always big-bang it all at one time? Just this — deciding, the judgment call — is always the hardest issue, because usually it's kind of a manual process. So you're looking at — we're all a team here, and we're deploying something — it takes us a long time to make the call.
We have the benefit of using other tools — APM tools, tracing tools, monitoring tools — but again, that kind of decision — hey, figure out whether the areas are working well — that's also the problem, or the hardship, with using that type of deploy mechanism. So having a rollback, or a roll-forward, strategy can be very difficult, also.
So here we have our otter rolling around — but it becomes quite difficult when you make the decision: do we continue to move forward, or do we move back? And so, taking the canary out of the equation, we also interviewed people: okay, if there was a problem in production, what is your course of action? Are you gonna roll forward?
...or is it going to be a release restore? Because there are a couple of rational reasons why you'd want to go back: if you're looking at pure metrics, such as MTTR — mean time to resolution — rolling back is actually a restore. You restore the service. Here's where we see the split: 85 percent of organizations favor a rollback, and 15 percent of the organizations we talked to actually favor a roll forward. And what is a roll forward? You're actually going to try to fix in place. If there was a problem, the bulk of organizations would immediately go back to restore the service, but 15 percent of organizations are saying no — feature velocity is important to us. So you're trading off velocity for restoration — whichever is more important to the organization. Rolling forward actually requires a patch.
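The trade-off can be put in MTTR terms with a small sketch. The 60-minute restore figure comes from the survey discussed later in the talk; the fix-forward time is a hypothetical placeholder, since by definition you don't know it in advance:

```python
# MTTR framing of the rollback vs roll-forward trade-off.
RESTORE_MINUTES = 60          # survey average once rollback is decided
FIX_FORWARD_MINUTES = 180     # hypothetical: write, test, ship a patch

def mttr(strategy: str) -> int:
    """Minutes of degraded service under each strategy."""
    return RESTORE_MINUTES if strategy == "rollback" else FIX_FORWARD_MINUTES

# 85% of surveyed orgs optimize for the bounded, known number;
# 15% accept the open-ended one to keep the feature in production.
print(mttr("rollback"), mttr("roll-forward"))  # 60 180
```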
Now, you could argue that with rolling forward, new versions of the artifact will keep coming out anyway, so you'd effectively go back and roll forward regardless — but that's typically the breakdown that we see. Getting ahead of myself: this is core to the metrics — are you looking at velocity, or are you looking at MTTR? And so here we have a Porsche: it's broken down, versus it's going really fast.
So for me: as an application development manager, I was bonused on getting features into production. If we had to roll back, that means my feature was not in production, and so I would really have to negotiate with, you know, my VP — like, there's an issue...
...we violated an SLA on the functionality, but we're gonna peel something back — and that's it, right, always fine-tuning. And that's why the bulk of organizations might go back to the MTTR, saying: okay, we don't know how long it would take us to fix, but we do know how long it would take us to restore the service.
So if we talk about restore — good segue here — we actually asked: how long did it take an average organization, once they decided to roll back?
If the picture didn't give it away: the average time to restore, once the rollback decision was made, was 60 minutes. So going back to our canary-and-otter example: if the canary wasn't any good and we decided to go back to the stable otter, it's about an hour that it takes the average enterprise, for an average application, to un-deploy, redeploy, and kind of validate the previous stable version. What do you think about that?
Well, it's fairly complicated, right? In theory it's easy: just redeploy the last version of the application. Well, there's a lot of stuff that's involved. If it takes you that long — like half a week of preparation — to go and deploy something, backing it out will certainly take some time too.
I actually was impressed with this amount of time, because in most of the rollbacks I've been in, it was several hours. We might have rolled back, but we had to continue to validate, continue to fix, continue to check if there were any systemic problems. There have been a few times that I programmatically corrupted something — because we had an error in processing — and so we had to programmatically put things back. But again, for the average enterprise and an average application, that restore time is 60 minutes.
A
We
have
all
white
camera
that
and
here's
a
sink,
which
you
know
it
might
be
a
little
bit
of
just
disheartening.
But
hey.
You
know
a
lot
of
times
in
the
software
world,
we're
dealing
with
things
for
the
very
first
time
right.
So
it's
it's
the
first
time
we're
doing
innovation,
work
you're
about
to
to
to
mess
up
somewhere,
hey
it's!
You
know
it's
if
you
were
doing
the
same
thing
over
and
over
and
over
and
over
again
your
boring
job
right,
we're
either
in
this
new
technology.
A
Introduce
new
features,
run
a
new
team
but
11%
of
deployments
that
had
to
profit
how
they
feel
at
some
point
right,
which,
which
is
much
rigor.
As
you
have
there's,
there's,
there's
still
the
rationale
that
things
will
will
fail
right,
and
this
is
folks
who,
before
they
emptied
so
with
this
11%
appointment.
This
is
usually
what
we
see.
These
questions
are
like
hey,
but
before
you're,
making
a
heavy
investment
tooling
for
jewelry.
And that's one out of 10 — no one wants to be in that scenario, but it's also very substantial.
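A 1-in-10 per-deployment failure rate compounds quickly. The bi-weekly cadence is the survey figure from earlier; the compounding arithmetic below is my own illustration:

```python
# With a 10% chance that any given deployment fails, the odds of a
# clean year at a bi-weekly cadence (26 deployments) are slim.
p_fail = 0.10
deploys_per_year = 26

p_clean_year = (1 - p_fail) ** deploys_per_year
print(round(p_clean_year, 3))      # 0.065 -> ~6.5% chance of zero failures
print(round(1 - p_clean_year, 3))  # 0.935 -> ~93.5% chance of at least one
```

So at the surveyed cadence and failure rate, almost every organization should expect a failed deployment every year.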
You have a 1-in-10 chance of failing, and you can never catch everything in testing — if you can't shake it out before prod, how do you know? Just go ahead and deploy it into production? But that's it, right — that's part of the big emphasis we have at the Continuous Delivery Foundation here.
If you have the ability to know, contractually, what is expected of you as you go through a pipeline, you have a much better chance of your deploy not failing — or of your deployment failing earlier in the SDLC, so you're able to make a remedy, and you also have a lot more confidence going into your deployment.
So, the question: what can you do about some of these metrics? You want to deploy more, you want to deploy with more success — what are some steps that you can take? My advice is actually to start small. So — Smokey the Bear here — you have to start somewhere. As cliché as a business example of this might sound, you actually don't want to boil the ocean.
That's fairly difficult. Or like — if you're a DevOps engineer or a platform engineer: are you able to get the most complicated application onto this particular pipeline, so that we'll have solved the problem for eighty percent — that 80/20 argument? Well, that can be very difficult; it's another talk I gave. Getting to Yes — that's a book by William Ury; every consultant reads it — and having incremental success is important.
So if you're able to shave off a little bit of time somewhere, able to document something a little bit better, have a more consistent process — hey, more power to you. If you're going from deploying every six months to deploying every three months, that's huge: you've potentially doubled the throughput of your team. Also, really figure out where you are.
What we actually see is that the confidence is lowest in the people and process aspects. The technology aspect — it's fairly easy to be confident there. I'll give you an example: let's say we're deploying into Kubernetes, or into some sort of cluster. You might say: you know what, Ravi, we have five nodes, and so we can take two failures.
So if we're doing maintenance and then a failure occurs, we still have two available nodes — or three, depending on how you do the math — we have enough nodes to sufficiently handle the load if there's a maintenance plus a failure, or two concurrent failures. Only juggle a few things in flight — that's what I've learned.
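The five-node arithmetic can be written down directly. The node count comes from the example; the minimum-capacity figure and the helper function are assumptions of this sketch:

```python
# Cluster-capacity reasoning from the example: 5 nodes total, and
# (assumed here) the workload needs 3 nodes to run comfortably.
TOTAL_NODES = 5
MIN_NODES_FOR_LOAD = 3

def tolerable_outages() -> int:
    """How many nodes can be down at once without losing capacity."""
    return TOTAL_NODES - MIN_NODES_FOR_LOAD

# One node out for maintenance plus one failure, or two concurrent
# failures, still leaves MIN_NODES_FOR_LOAD nodes serving.
print(tolerable_outages())  # 2
```

This is exactly the kind of statement a team can make with confidence about technology — the point of the passage is that no equivalent formula exists for the people and process side.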
So in your technology you're confident. Process-wise? Absolutely — several people, say both of the Marcuses we know on this call, have to sign off on something, and with that sign-off system in place, yeah, we're super confident in our technology. Okay: you have a change control board, test suites putting the service through its paces — but all of that is driven by the one at the top, which is us. We're emotional; we're humans; we're the sum total of our experiences. We have different perspectives.
If we were to ask everybody on the call — hey, how do you think we should do something? — each one of you is very smart and highly skilled, and your experience shapes your opinion. You've done stuff before, so you might say: this worked, this didn't work. And then we all get together and we have a kumbaya moment — and that's it. Usually it's the people aspect of it — hey, just finding out...
...why are these approvals in place? Why do we need this? That's where you want to start building the foundation — and also on using a pipeline as a system of record. Going back to the people part of it: a lot of times, what ends up happening is that we're not in one place forever. We're people — we get another job, we go to another project — and we carry part of our skill set as engineers with us.
I've been writing Java for 15 years now, and it's always Java: the design patterns and paradigms are similar — some syntax might change, or additional features get added to the language — but I'm able to carry that with me wherever I go. Yet every single organization I've been at deploys differently. So, as odd as that sounds, definitely make an investment in some sort of pipeline technology. Being part of the CDF: there are lots of pipeline technologies you can choose from.
A
We
have
a
couple
of
special
interest
groups
trying
to
make
sure
that
we
call
things
the
same,
which
is
your
interoperability
specialist
group,
which
can
join
if
you
like,
if
you'd
like,
to
join
but
having
a
pipeline,
is
system
a
record
good.
Usually
what
ends
up
happening
not
like
when
you,
if
you
were
tasked,
deploy
something
for
the
first
time
it
gets
fairly
tricky
right,
because
it's
really
the
people
ask
that
comes
in
so
I
would
say
about
pretend
Jacqueline
to
see
your
team.
But that's it, right — having a system of record in place, so you can go back and look at something, helps you learn: look at what ran before, look at which runs were more successful and which were not. That's quite important. And also — teeing us up here, kind of at the back end of the presentation...
...do use the CDF as a way to learn. The Continuous Delivery Foundation — I've been in this industry a while now, and I haven't seen so many organizations come together for a common goal of helping you deploy more successfully, deploy easier. So definitely leverage resources like the CDF.
I don't want the webinar to seem like it's just preaching about the CDF, but hey — great resources, great webinars, great blog posts, and everybody here is here to help people. And then, lastly, coming to the end of it: if you would like to get a copy of Continuous Delivery Insights, this presentation is in the shared drive.
You can take a look at it and go through it. Also, if you go to harness.io — it's a long slug — for the Continuous Delivery Insights 2020 ebook, you get a full copy of the report we put together, which is about 25 pages of where people fall in certain categories. But with that — I think that's pretty much most of the presentation that I did have. I'm certainly open for questions or discussion.
We have a lot of resources, a lot of human power, behind this — but it wasn't very scalable. Saying, okay, how do you get through those? If I put on my technology-leadership hat — let's say I got promoted a couple of times and I'm the VP of software engineering — I'm sorry, or congratulations — on my team, I might be like: you know what, we have to deploy every two weeks, just because. It is what it is.
A
This
is
we're
not
doing
it
we're
not
any
any
sort
of
marks,
and
it's
like
what
I'm
gonna
take
or
if
I,
first
of
all,
I'll
role
play
yeah
and
you're
my
boss-
and
you
told
me
that
right
so
like
I'm,
like
oh
boy,
so
that's
what
we're
saying
like
matter
of
resources
that
are
being
dedicated
to
it.
It's
surprising,
like
it's
quite
an
important
problem
to
solve
how
challenging
it
happy
to
go
next
year.
I
think
it's
just
as
there's
more
emphasis
on
pipeline
and
development.
Right
like
hey!
You know, if you don't have a pipeline — like my father would say about playing the lottery: if you don't have a lottery ticket, you don't have a chance. Without a pipeline, you don't have a chance; I think you're just wading through all of it. And if you take a look at any sort of cloud-native stack, there's a lot of choice that you have, and I think that's where the challenges are going to be.
There will be more emphasis on pipelines. I'll give you an example: if you follow our parent organization, the Linux Foundation, and take a look at the CNCF — the Cloud Native Computing Foundation — every year there are more and more cards added to the landscape there, and so there's more and more choice for us as technologists.
But there's also a lot more opportunity for us to get told no. Let's role-play again: you're my boss, and I want to use Istio. You're like: why, Ravi? Why do you want to use that service mesh? It's cool, but there are a lot of the fundamental operational problems we've had for the last couple of decades: you want to make sure you can upgrade it...
...you want to make sure you can test it; you want to make sure you've got the skill set to do it. Those types of things have not gone away — we just have more technology. And so if you start looking at the shifting challenges, it might be more on those particular projects. If I was — let's say, pretend I was on the Istio project...
...I might want to give you some sort of CRD, saying: hey, I have a packaged way for you to lay something down that any one of these CD tools can deploy. So — I'm hoping that wasn't too long-winded an answer — I'd say that people are definitely putting a lot of effort into it; it's a very important problem. But given the number of technical choices we're seeing, folks probably will move more toward giving you a pipeline to run, saying: oh, don't worry...
C (audience member): That was really helpful. One thing I've seen in my organization here — Autodesk — is Amazon's number that was put up, that 11.7 seconds: every developer is deploying to production, or a deployment's going out. One of the challenges that I see — and I kind of want to hear your opinion on — is: all right, well, the Amazons and the Netflixes and the Microsofts and the Googles of the world are there, but even with Harness, it's still the internal frictions that you have to go through — like, hey...
...how does the dev go from code commit to making sure it's validated and tested, and that it actually is safe to deploy? Just one, in terms of how you can measure the velocity — or, thinking about how we can deploy with speed and safety: do you see anything, any trends, changing around safety being more important than speed, or speed being an enabler of safety — or anything here from your customers?
A (Ravi): ...time to money, or vice versa — and just determining what success is for that. Every organization is totally different: an insurance company or a financial-services institution would be inherently more risk-averse than a software company, just because you're dealing with people's money. So as an engineer, I would say you have a specific duty — a moral duty, even — to uphold certain things. So I kind of have to hedge that question: it depends.
There was this process — so when I was at AppDynamics, I was a development manager, and it's funny: we were actually looking at modernizing some of our pipelines; we had to actually do open-source security testing — adding additional security safety to the whole pipeline — and I was one of, like, three dev managers who were like: hey, let's come together and do this.
It actually took a long time to figure out the process. I was able to write a bunch of mock applications and kind of show what we'd have to do, but getting that wired into the process — there was a bunch of disagreement at that point: why do you want to do this here? Why is it so slow here? Why is this faster? And so again, it shows the emphasis on...
...people, who have to weigh both of those depending on your organization — is it about getting it right the first time? There's an irony to it. I don't want to give you a long-winded answer — I just get excited talking about it — but you know how Facebook's motto was "move fast and break things"? There's a very interesting TechCrunch article out there: the people who moved fast and broke things — I forget exactly, it was the banking sector in New York — said...
A
No, no, because, say you're Bank of America and you run their check cashing application, which processes something like a trillion dollars in checks a year. They're like, we're not going to cause the economy to stop because people can't cash a check, right? And so across all ends of the organization, people want both, but you need to be realistic at some point. Hopefully that answered it, yeah.
A
Okay, so that's a great, great question. Partially, tooling will have to come into this. I'll start at the very top: depending on how stringent your platform engineering practices are, every application will have different metrics, right? So it's a very hard question to get right.
A
Say Susie's on the call: she and I are on two different teams, and her application has completely different metrics from my applications. You might take a generic approach, but if you're a platform engineer running that for the whole organization, you have to start with generics and then get specific, domain-specific.
A
So you might start with a bare minimum. I'm going to use Dairy Queen; I'm looking at their Piñata Blizzards online right now. Say we're rolling out this Piñata Blizzard feature: at a bare minimum, if the application doesn't even come up, you have to roll back, right? So you might start with very severe types of checks and then get specific, because with any sort of monitoring tool or metrics tool it's quite easy to see that, versus,
A
if there's a regression, you know, there might be some more generic signals, like how do you know when you're violating an SLA, right? Your app might violate an SLA where mine doesn't, or vice versa. And so there's this understanding that you get more domain-specific when you should get more domain-specific.
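The "start with severe, generic checks, then get domain-specific" idea above can be sketched in a few lines. This is my own minimal illustration, not code from any tool mentioned in the talk; the function name, signals, and the 1% SLA threshold are all assumptions.

```python
# Hypothetical sketch of canary verification: a generic, severe check
# first (did the app even come up?), then a domain-specific check
# (is this app's own SLA error budget being violated?).

def verify_canary(app_is_up, error_rate, sla_error_rate=0.01):
    """Return 'rollback' or 'proceed' for a canary deployment."""
    # Bare-minimum generic check: if the app doesn't come up, roll back.
    if not app_is_up:
        return "rollback"
    # Domain-specific check: every application's SLA threshold differs.
    if error_rate > sla_error_rate:
        return "rollback"
    return "proceed"

# The Piñata Blizzard rollout: app is up and errors are within budget.
print(verify_canary(app_is_up=True, error_rate=0.002))   # proceed
print(verify_canary(app_is_up=False, error_rate=0.0))    # rollback
```

In practice the two signals would come from a monitoring or metrics tool rather than being passed in by hand.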
A
This is where the tooling conversation kind of comes in. If you're using an APM tool like New Relic or AppDynamics, inferring from those, using those tools for what they're good for, they have a lot more domain knowledge of the application. And so tying that into place, I think that's actually the biggest part. Let me go back a couple slides; I'm glad we came back to that.
A
That's the stable/canary ratio there; it's per deployment, every time, and so that's pretty big. I think that would be the biggest part, tuning your own model first. That's exactly the biggest problem we see people having: we don't know how to automate that, right? Hey, we can get very close to it, but every monitoring tool is different, metric tools differ, and that's just purely tooling; the CDF has several projects out around stuff like that.
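The stable/canary comparison being described can be made concrete with a toy sketch. The function, the latency numbers, and the 1.5x tolerance are my own assumptions for illustration; real canary analysis tools do something far more statistical.

```python
# Toy sketch of automated canary analysis: compare the canary cohort
# against the stable cohort on the same metric and emit a verdict.

def canary_verdict(stable_latencies, canary_latencies, tolerance=1.5):
    """Pass the canary only if its average latency stays within
    `tolerance` times the stable baseline."""
    stable_avg = sum(stable_latencies) / len(stable_latencies)
    canary_avg = sum(canary_latencies) / len(canary_latencies)
    return "pass" if canary_avg <= stable_avg * tolerance else "fail"

print(canary_verdict([100, 110, 105], [120, 115, 118]))  # pass
print(canary_verdict([100, 110, 105], [400, 390, 410]))  # fail
```

The hard part the speakers raise is exactly what this glosses over: choosing the metric, the baseline window, and the tolerance per application.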
C
I would just mention one thing for us around automated rollbacks. What we've seen, whether from the APM tooling or even from the metrics, is that pulling humans out of the system makes it really hard, because then the system always rolls back, much like Skynet. We're finding that it's killing our deployments, and it's because there's a human judgment factor involved; the system would almost have to be sentient.
A
You know, having that kind of knowledge just takes time. It takes you time to get a baseline; it's about confidence at some point. This is my soapbox talk: it's cool to look at the movie company in Los Gatos or the book company in South Lake Union. The rationale behind them deploying so quickly is this new moniker of a full lifecycle developer, right, or "if you write it, you run it."
A
I'm actually very hungry for a dissenting opinion against that type of model; I'm always curious to hear what people have to say about that.
But they have a very strong, a very, very strong platform engineering practice, right? Like, hey, we have to give you a prescription: can we have a common way of suggesting metrics, a common way you deploy, because we need to be able to make judgment calls on your behalf.
A
It's a similar thing, right, like the Skynet sentient being: you kind of have to build that up a little bit. To a point, I think where it gets a little bit difficult is having each team infer what they need. I'll play devil's advocate: let's say I was a platform engineer and I told you, okay Ravi, you know, for you to have a lower risk category with us, you have to use a set of tools, right?
A
Some set of tools: you need to be using Fluentd for traces, you need to be using Prometheus for metrics, and if your application has those hooks, yeah, you're in a lower risk category for us and we'll provide you support for all that. Which is difficult to do, right? It's always this choice versus innovation: how many guardrails you want to impose versus how much freedom you want to give. Every organization is different.
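The devil's-advocate policy above, prescribing observability hooks in exchange for a lower risk category, could look something like this sketch. The hook names and the two categories are hypothetical, not from Harness or the CDF.

```python
# Toy sketch of a platform team's risk-category policy: apps that
# expose the prescribed observability hooks (e.g. Fluentd traces,
# Prometheus metrics) get classed as lower risk.

REQUIRED_HOOKS = {"fluentd_traces", "prometheus_metrics"}

def risk_category(app_hooks):
    """Lower risk only if the app exposes every prescribed hook."""
    return "low" if REQUIRED_HOOKS <= set(app_hooks) else "high"

print(risk_category(["fluentd_traces", "prometheus_metrics"]))  # low
print(risk_category(["prometheus_metrics"]))                    # high
```

The trade-off the speaker names, guardrails versus freedom, is exactly the choice of how large and how mandatory that required set is.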
C
Ha, that hits home far too frequently. Oh, that's awesome. Aw, dude, I'm totally into that Blizzard, but that sure is great, yeah. So if you scroll down you'll see this diagram, and this was basically a worst-case CD pipeline: you test, you build, you promote, you have manual tweaks, forget about the tweaks, opportunity to provoke more tweaks, heat death of the universe, escalate to a director, change approval board on site, unseen waivers here, maintenance window, deploy to prod, everything works. Just kidding, why would any of this work? Errors!