From YouTube: 2021-05-24 - Deployment SLO sync
Description
Sync on deployment SLO
A: Okay, hello, everyone. Thank you for joining. We are going to discuss the current status of the deployment SLO and what the next steps are going to be. For context, the definition that we have for the deployment SLO so far is that it is going to measure the elapsed time from when the deployment to our first environment starts, which is staging, until the moment the last environment is updated, which is production, plus some interesting patterns that we want to expose.
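A rough sketch of that definition, with hypothetical timestamps (the real data would come from the deployment pipelines, not these made-up values):

```python
from datetime import datetime, timezone

def deployment_duration_hours(staging_start, production_end):
    """Elapsed time of one deployment, per the SLO definition:
    from the start of the first environment (staging) to the
    moment the last environment (production) is updated."""
    return (production_end - staging_start).total_seconds() / 3600

# Hypothetical pipeline timestamps for a single package.
staging_start = datetime(2021, 5, 24, 9, 0, tzinfo=timezone.utc)
production_end = datetime(2021, 5, 24, 16, 30, tzinfo=timezone.utc)

print(deployment_duration_hours(staging_start, production_end))  # 7.5
```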
A: So the first discussion that we have is: what should we use as an indicator for this SLO? We have two proposals so far. The first one is to use merge requests, from merge to deployed: measuring from the moment the merge request is merged until it reaches one of our environments, whether that is staging, canary, or production. The other one is to use the deployment pipelines from release-tools and from the deployer.

A: Now, I do feel inclined to use deployment pipelines, as I feel those give us a more natural measure of the deployment duration, which is what we are looking for, versus using merge requests, which I think are more associated with MTTP, because that measures from the moment a merge request is merged until the moment the merge request reaches GitLab.com.
B: I have a question, Myra. I do agree with you that, in terms of what we define for the deployment SLO, the thing we want to understand here is how much time we are spending deploying the package. So the unit of interest is the single package, every single deployment.

B: What I'm not sure about is the meaning of tracking from when we start the staging deployment up until we complete the production deployment. The reason being: these numbers make sense as a whole, right? It's kind of "we started this morning and the thing wasn't ready," where ready means running in production, "until seven hours later."

B: So it's an important number, okay, but I think it would be more interesting to break down what composes this number, because, for instance, we have QA only on staging and on canary, but we don't have it on production.

B: So if we try to break it down to a single unit that makes sense, there is an imbalance on the production deployment, for instance, because there's no QA there. And also, if I remember correctly, the idea behind this is that we are not accountable for QA time, so it is interesting to extract it. I mean, we need to count it so that we know how long it takes, but it's not something that we should act upon as a delivery team. It's more about...
C: Is a big part of that QA time, right? So that will become a separate issue, a separate measure. This is certainly something we should work out. In the eventual state we'll have three of these: the deployment SLO is delivery-owned, there'll be a QA SLO which will be quality-owned, and there'll be a package SLO which will be distribution-owned. And that allows us to set different targets for different teams.
A: I think we can say the same about the migrations: the migrations do not depend on the delivery team, but rather on other teams in the development area. And I think what we want with the deployment SLO is to set an SLO for the deployment duration, whether it is a complete measure from staging to production, and then break it down into individual pieces, like the migrations. I don't know, I'm just saying.
C: I was also going to add: it's also a lot easier. So, absolutely, I think in an ideal world every bit would be separate, but in terms of what gives us a good starting point: is measuring end-to-end the easiest thing? If it's not, then we shouldn't do it; but if it is, then that's a nice way to introduce something.
B: Yeah, what I'm not sure about with measuring end-to-end is that there are too many human factors in between. Just having or not having release managers around to press the promotion button changes this greatly. It tells us something, but it can dilute the timing over time, because it's very dependent on availability and things like that. While if we just take the time it takes, from warming up to running the post-deployment migrations, for each environment, I expect this to be... yeah.
D: In the case of what we're doing here, I think you could argue that the customer is the user who has submitted a merge request that has arrived on master, and you want to measure their experience of having that arrive. You're welcome to argue this, it's just a point, but you're measuring their customer satisfaction of having their merge request delivered into production, and so you're starting with that: from the merge to master, or merge to main, or whatever.

D: Obviously we have those segments so that we're able to attribute it to where the problem is, but we're really measuring the customer satisfaction from their point of view. The customer doesn't really mind if it got delayed because of QA or because of whatever; they're getting delayed either way. In the same way, with SLOs on a service, we try to measure the entire thing, and it doesn't matter whether it's queuing or the database or whatever: it's the user experience of the entire request.

D: Duration, for example, is what we try to focus on. So it's more of a philosophical thing rather than a technical point.
A: Yeah, thanks for bringing that up. It is interesting to me that you mentioned that the customer is the merge request author, because I think we were visualizing it in a different way. I think the customer here could be the delivery team or the release managers, because we want to have some sort of control over the deployment duration, which we do not have now.

A: I think with MTTP we are measuring something like merge to package, then package building, then deployment duration, and we don't have a way so far to measure the deployment duration on its own, which I think is what we are seeing from the different approaches.
C: From that video you shared earlier, Andrew, which, maybe if you drop the link, other people can see what you were thinking of: something I like is MTTP being the overarching SLO and then the SLIs being the individual breakdown, because we have lots of different pieces, right? It's interesting to know what a deployment pipeline took, but it's also interesting to know that at the stage level. To your point, Alessio: yes, we want to be able to separate out staging from a release manager having lunch.

C: But ultimately, I suppose, all of this stuff is to work out what we need to focus on. Do we need to have more APAC release managers, or do we need to have QA tests running faster, or do we need to have fewer manual retries? Is that kind of what you were thinking here, Andrew: that we would start off with MTTP as the overarching SLO and then all of those things become SLIs?
D: Well, yeah, I mean, each one of those measurements is an SLI; that's just the actual indicator, as opposed to the threshold. But yeah, I think the overarching one, the one that you would report in the general conversation, or the group conversation, or the one that you would put in the key performance indicators, is that top-level one.
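A minimal sketch of that SLI/SLO distinction, with made-up durations and a hypothetical target: the SLIs are the raw per-deployment measurements, and the SLO is compliance against a threshold.

```python
# Hypothetical per-deployment durations in hours (the SLIs).
durations = [5.5, 7.0, 6.2, 9.1, 4.8, 12.3, 6.7]

# Hypothetical SLO: 90% of deployments complete within 8 hours.
THRESHOLD_HOURS = 8.0
TARGET = 0.90

# Fraction of deployments under the threshold.
compliance = sum(d <= THRESHOLD_HOURS for d in durations) / len(durations)
print(f"{compliance:.0%} within {THRESHOLD_HOURS}h; SLO met: {compliance >= TARGET}")
# → 71% within 8.0h; SLO met: False
```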
D: But if you could see that there was a problem with the mean time to production and you wanted to understand whereabouts it was, then internally we would list the components of that, and then we could see: well, this part of it is not meeting our expectations.

D: That's the part where we need to focus, but that's almost internal. For the headline one, I think it is kind of important to figure out who the customer is, because I don't know if I necessarily agree that the delivery managers are the customer, because they are kind of the service provider.

D: If they are the customer, then we've got to think about it from their point of view, where I guess we'd be optimizing maybe more for interruptions, or for how many manual steps there are. I don't know if it would necessarily be the time, because if it takes different amounts of time, well, I might be wrong on this, but it doesn't make a big difference to the delivery manager, does it?
C: CD, right? Everything is kind of optimizing for speed to production. So, assuming that everything we're releasing is worthwhile, I would think the customer is the actual customer.
D: It is. Those people are going to go off and start working on something else, and they're not going to go look at the dashboards that they've built and everything else, because they're now working on the next piece. And if we optimize for bringing that time down, then there's more chance of them actually owning that as it arrives in production.

D: But yeah, it's a bit of a moot point. It probably won't make any difference to the way it's designed, right? I mean, we've already got a goal, which is to optimize for the mean time to production, presumably.
F: I would argue something slightly different, Andrew, but with the same end goal in mind. I think both of the proposals that Myra has provided enable us to measure various things. So the SLOs that we would have for getting things into production quickly would contain indicators for every individual merge request per se, but they'd also take into account the time it takes for testing and the time it takes for building, and the customers of those are still going to be...

F: ...the merge request owners, or authors. But the SLIs all get attributed to whichever team the particular process that is taking a long time leads to: the building process would be Distribution, the QA would be the QA team, and such. But all of that contributes to the overall SLO, or rather the MTTP time. So I think if we could start gathering all that information, we could attribute owners to all of it, and then we could start figuring out where we need a budget for optimizations, potentially.
B: Yeah, I also wanted to say something, because when we first started discussing this I was kind of following this idea of tracking by merge request, and then, reading your idea, what I was thinking is that the merge request is just more granular, because a package is just a set of merge requests. So if we have enough data points, we don't necessarily need to say "this is package X and it took X amount of time."

B: We don't care, right? As long as we know the mean time it takes for something to get to a stage, or where a stage is slowing down, that's fine. But maybe the bigger benefit of just tracking from the merge request point of view is that we have a single metric, and from that one we can invest more time in understanding the intermediate data. Then we have just one single collection of information, and not something that is completely different.
D: From that point of view, there's one other thing that I think is an important technical point, and that is that all service-level monitoring is just kind of fast-and-loose statistics, and like any statistics it only really works if you have a big enough sample size. And so, if you look at GitLab.com, we actually don't alert on low-volume traffic.

D: We just cut it out, because it's too noisy and we get too many samples that are outliers and cause alerts to fire at the wrong time. So one of the reasons why I favor merge requests is because there's more data; and because you've got more data, you've got more samples, and it just kind of naturally smooths it out. Because if you're only having one package a day... I mean, I don't know what it would be.
B: I have to argue with this, because once we reach the package level, every merge request moves together. So in mean time to production, the only variable part between merge requests is from merge to the new auto-deploy branch; then they bundle together. So it's just that they have the same values, so statistically it doesn't change.
C: We already know that stuff, but I think what we don't know is: is it a one-off and it always happens at 10 a.m. on a Tuesday, or has this thing trended up over the last six months and is now, you know, unacceptable? We haven't got any visibility into how it's behaving over time.
D: The one thing about a package, though, is that whether a package has got three merge requests in it or 30, it carries the same weight. Whereas if you're measuring on merge requests, even though they have the same samples, and you get a package that gets rolled back that has 30 merge requests in it, I kind of think that's naturally more problematic than a really small package.
B: Okay, this is a good point, but you also have to consider that we have a shared medium, which is the production environment, and we cannot deploy twice at the same time. If we were tracking, for instance, the time it takes to reply to a request, then you could consider every single request as a unique, distinguished event, right? But that is not true here, because if your package holds only three merge requests, it is still delaying by six hours the next one, which maybe has thousands. I mean, just made-up numbers.
D: But if it's slowing down the one behind it, then we would see that: the next one, with a thousand in it, is getting slowed down, and then, when we look into why that one's taking such a long time, we'll naturally get that weight: the thousand were delayed because of the three. So I think that ties into it. At least, I haven't thought about it enough, but I think so.
B: I think the biggest problem we have here as a team, the reason why we want to have this deployment SLO, and this is probably why we are kind of going back and forth between merge requests and us trying to push more for packages, is that what we are trying to achieve here is a tool that clearly tells us what is a delivery problem, what delivery should take care of, and what is a problem with the other teams that provide us packaging or QA: when the problem is something that we have to fix ourselves, and when it is something else.
D: So if you feel like packages are the best one, I'm definitely not going to get in the way of that. I think the most important thing is to get that experience, just start trying to get the numbers in, and then push that to the stakeholders. That's the tricky bit here, not the actual maths.
B: So, opening up this question: I have the impression that right now we are kind of struggling with data collection on MTTP. So, in this sense, if we get... because of...
C: ...in the delay, so that should be caught up. If it isn't already, it will be soon.
B: Yeah, but still, you know, right? We are talking about real-time metrics here with Andrew. So my point is that if we move in this direction we can get rid of Sisense, which is super delayed, has a lot of troubles, and is hard to just use; and we'd have one single metric, with a breakdown of the information that we want, and it's real time, and then...
B: Yeah, basically we get two birds with one stone, if that idiom also makes sense in English. I mean, it makes sense in Italian, so I don't know. Yes.
A: I'm not sure it should be the first iteration, though, to explode the MTTP data into the deployment SLO. I think for the first iteration we should just use deployment pipelines, then learn from it, and if it doesn't work we can use the original proposal from Andrew, from merge to deployed.
D: I mean, I would say pushing it to the ops Pushgateway would be a pretty reasonable approach, and just having it labeled in there, you know, for each stage.
F: This could be really fun information to advertise to our customers, by the way. This could make an interesting blog post and probably provide some interesting public visibility as to how quickly we realistically move, and the improvements over the course of time, despite the fact that we have a lot of issues that sit there in our backlog for a long time.
A: Well, I think we kind of decided to use the deployment pipelines, and there is a next action item, which is to analyze the pipeline duration to set an objective, because we still don't have one. So that would be the next step.

A: Do we know how we would do that? No, we don't; that's still open to discussion.
C: Okay, let's have a follow-up action to work out what that is. I'm assuming we already have stuff in place, Andrew, that we could reuse to bring us in line with the other SLOs?
D: Well, all the other ones are in Prometheus. So if we want to take the deployment pipelines and push them into the Pushgateway, then we could use the other machinery that we've got, because it's all about evaluating it in Prometheus. And so we could almost build it as, and this is pushing the terminology a bit far, a "deployment service": take that terminology with a grain of salt, because it's not a service at all.

D: But then it'll have an SLO, we automatically generate the dashboard, and everything gets built out from that. We can just use it in the same way we've got the web service, where we define the SLOs. But we have to take it from Sisense and put it back into Prometheus in order to do that. So I don't know if that's something that you'd consider doing, Myra.
D: Packaging? Yes, okay, yeah. Then just push it to the Prometheus Pushgateway. It's pretty straightforward: you can curl to it, and you can get access to it from the CI runners in the ops instance.
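A minimal sketch of that push, assuming a hypothetical Pushgateway address (the URL, metric name, and value below are illustrative, not the real ones). The Pushgateway accepts the Prometheus text exposition format at `/metrics/job/<job>`:

```python
import urllib.request

# Hypothetical Pushgateway endpoint; the real one lives in the ops instance.
PUSHGATEWAY = "http://pushgateway.example.com:9091"

def exposition(metric, value, help_text):
    """Render one gauge in the Prometheus text exposition format.
    The trailing newline is required by the Pushgateway."""
    return (
        f"# HELP {metric} {help_text}\n"
        f"# TYPE {metric} gauge\n"
        f"{metric} {value}\n"
    )

def push(job, body):
    """POST the payload to the Pushgateway under the given job label."""
    req = urllib.request.Request(
        f"{PUSHGATEWAY}/metrics/job/{job}",
        data=body.encode(),
        method="POST",
    )
    urllib.request.urlopen(req)  # network call; needs a reachable gateway

body = exposition(
    "deployment_pipeline_duration_seconds",   # illustrative metric name
    27000,                                    # e.g. a 7.5-hour deployment
    "Elapsed time of the last deployment pipeline.",
)
print(body, end="")
# push("deployment", body)  # only from CI, where the gateway is reachable
```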