From YouTube: 2023-01-25 Delivery team weekly APAC/EMEA
C
What's that one? So we are at time, well, we're over time, apologies. This is the 25th of January delivery weekly call. We haven't got any announcements, but we do have a little to read down on the agenda from Monday's call, so we have a few items on there.
C
On the discussion points, I wanted to chat just a little bit about release manager metrics. These have been an ongoing discussion for quite a long time, but they've cropped up quite a bit through our OKR discussions, and I think we will...
C
You know, we're going to need to find a way to do something so that we can actually plan what's the right stuff to focus on, and also track our progress there. So I just wanted to share a link here.
C
I think most of you have seen it, because I think most of you have commented already. One of the things that would be super useful is to be able to say, for example: we have a process like a deployment, and we say that a failure is five manual steps and it takes us three hours. Great, now...
D
Do we start with pipeline metrics? If a coordinated pipeline, like the auto-deploy pipelines, fails for whatever reason, sometimes it might be something simple, but is that a way to start? Is that what we mean by tracking problems with deployments? I'm trying to think of somewhere we can...
D
What
do
we
actually
find?
Is
the
failure
in
a
deployment
or
a
problem,
deployment
right
and
I?
Guess
it's
most
of
the
time
a
pipeline
fails
and
then
a
release
manager
gets
notified
about
it.
Maybe
it's
something
simple
or
maybe
it
leads
to
an
incident,
but
maybe
starting
with
pipeline
metrics
like
if
we
know
Auto
deploy
pipelines
fail
twice
a
day,
then
we
should
really
go
back
and
find
out
why
they
were,
and
maybe
rate
them
afterwards
or
something
I'm,
not
sure.
A
So no, this is counting coordinated pipelines by status since the last successful one. What this tells us is that for this period of time, from the 21st to the 23rd, we had 13 failed pipelines since the last successful pipeline on the coordinator pipeline. So that's exactly the number you want, and it goes down to zero when you actually have a successful deployment.
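The counter described here can be sketched in a few lines; the function name and the status strings are illustrative, not taken from any real dashboard or API:

```python
# Hypothetical sketch of the "failures since last success" counter discussed
# above: given a time-ordered list of pipeline statuses, count how many
# pipelines failed since the most recent successful one. The counter resets
# to zero on every success, matching the graph's behaviour.

def failures_since_last_success(statuses):
    count = 0
    for status in statuses:
        if status == "success":
            count = 0          # a successful deployment resets the counter
        elif status == "failed":
            count += 1
    return count
```

For example, `failures_since_last_success(["success", "failed", "failed"])` returns 2, and any trailing success brings it back to 0.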
A
But this one captures only a single piece of information: the unit is the number of deployments. So if you want to see a percentage, maybe we want to check this against what was tagged, or the production promotions.
A
I
mean
it
doesn't
have
to
be
production
promotion
right
because
this
is
counting
is
based
on
Tagged
package.
So
you
want
to
see
how
the
number
compares
to
the
number
of
tagged
package
in
the
same
I
would
say
same
amount
in
the
same
time,
but
this
is
not
entirely
correct,
so
let
me
I'm
going
to
show
screen
again
so
because
I
would
that's
probably
we
need
someone
that
has
more
metrics
experience
than
me,
but
basically,
if
we
look
at
so
no,
this
is
the
thing.
A
So
those
two
numbers
are
the
important
one,
but
they
are
based
on
a
different
assumption.
So
this
one
tells
you
the
number
of
package
that
we
talked
in
the
past
24
hour.
So
it's
a
rolling
window.
So
that's
that's
the
data
point
right
so,
as
we
can
see
here,
the
good
things
that
we
have
the
red
bar
on
both
graphs
right.
So
starting
here
we
started
having
failed
Pipelines.
A
The
problem,
though,
is
that
we,
we
can't
really
do
compare
this
based
on
this,
because
over
the
weekend
we
had
we
were
tagging
nothing,
but
we
will
still
had
the
until
the
next
successful
deployment
right,
but
this
is
a
very
specific
Edge
case,
because
so
what
I
wanted
to
say
here
is
that
the
graphs
above
it
gives
you
information
over
the
last
24
hour,
while
the
graphs
below
takes
in
memory
the
status
since
the
last
successful
deployment.
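The distinction between the two graphs can be made concrete with a small sketch; the function names, timestamps, and event shapes are invented for illustration:

```python
# Sketch contrasting the two measures described above: a 24-hour rolling
# window only sees events inside the window, while the "since last success"
# counter carries state forward until a success occurs.
from datetime import datetime, timedelta

def tags_in_window(tag_times, now, window=timedelta(hours=24)):
    # Rolling window: count packages tagged in the last 24 hours.
    return sum(1 for t in tag_times if now - window <= t <= now)

def failed_since_success(events):
    # Stateful counter: events is a time-ordered list of (time, status).
    count = 0
    for _, status in events:
        count = 0 if status == "success" else count + 1
    return count

now = datetime(2023, 1, 23, 12, 0)
tags = [datetime(2023, 1, 20, 9, 0), datetime(2023, 1, 23, 8, 0)]
events = [(datetime(2023, 1, 20, 10, 0), "success"),
          (datetime(2023, 1, 21, 10, 0), "failed"),
          (datetime(2023, 1, 22, 10, 0), "failed")]
# Over a quiet weekend the rolling window shrinks to the recent tags only,
# while the failure counter keeps growing until the next success.
```

This is exactly the edge case mentioned: with no weekend tagging, the window-based graph drops while the stateful counter keeps its value.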
D
Because I guess, looking at Amy's comment, we want to capture how brittle it was. Or even say, over the week: how brittle was auto-deploy this week? We know that for auto-deploy the number of known manual steps is quite low; there's really just promote, that's the only thing. Whereas for security releases it's different: there are a lot of manual steps, so the complexity is all on that side.
D
So
for
deployments
this
the
manual
steps
is
low,
but
I
think
the
brittleness
is
pretty
high,
like
it
fluctuates
right.
Widely
right,
like
brittleness,
can
definitely
get
us
there
on
so
I.
Think
a
combination
of
somehow
re
reworking
these
metrics.
We
could
get
some
kind
of
golden
value.
That's
like
this
week
or
whatever
time
period.
You
know
I.
Guess
it's
like
your
typical
SLI
SLO
right.
D
Maybe
thinking
about
it
that
way
like
what's
our
deployment
I
mean
we
already
kind
of
have
a
deployment
SLO
but
I
think
that's
slightly
more
talking
about
like
time
to
production
and
things
like
that,
but
I
guess
for
us.
We
have
kind
of
like
a
an
SLO
for
release
manager
that
that
we
would
like
I,
don't
know
99.5
of
deployments
to
not
have
release
manager,
if
not
have
failures
or
release
manager,
involvement
right
and
that's
what
we
kind
of
we
want.
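The SLO idea floated here could be computed like this; the record shape, field name, and the 99.5% example figure are all just the numbers from the discussion, not an existing metric:

```python
# Minimal sketch of the release-manager SLO idea, assuming each deployment
# record carries a flag for whether a release manager had to step in.

def slo_compliance(deployments):
    """Fraction of deployments that needed no release manager involvement."""
    if not deployments:
        return 1.0  # no deployments, nothing violated the SLO
    clean = sum(1 for d in deployments if not d["rm_involved"])
    return clean / len(deployments)

# 199 clean deployments out of 200 sits exactly on the 99.5% example target.
deploys = [{"rm_involved": False}] * 199 + [{"rm_involved": True}]
```

A real SLI would need a precise definition of "involvement" (a retry? a ping? an incident?), which is part of what the discussion is trying to pin down.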
E
So we were just discussing this in Alessia's office hours, but we can have a metric that tracks the amount of time lost to failed jobs. You track from the time a job first fails to the time that it successfully passes; all of that is lost time, including the time between retries.
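The "time lost" metric described here can be sketched as follows; the attempt list format and timestamps are illustrative assumptions:

```python
# Sketch of the "time lost to failed jobs" metric: for each job, lost time
# runs from its first failure to its first subsequent success, which
# automatically includes any idle time between retries.
from datetime import datetime

def time_lost(attempts):
    """attempts: time-ordered list of (timestamp, status) for one job."""
    first_fail = None
    for ts, status in attempts:
        if status == "failed" and first_fail is None:
            first_fail = ts                 # clock starts at first failure
        elif status == "success" and first_fail is not None:
            return ts - first_fail          # clock stops at first success
    return None  # never failed, or never recovered

attempts = [(datetime(2023, 1, 25, 9, 0), "failed"),
            (datetime(2023, 1, 25, 9, 40), "failed"),   # retry also failed
            (datetime(2023, 1, 25, 10, 30), "success")]
# Lost time runs 9:00 -> 10:30, i.e. 90 minutes, retries and gaps included.
```

Summing this over all jobs in a window would give the aggregate lost-time number proposed later as a proxy metric.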
A
Together, and we were also discussing probably exactly the number that you're looking for, because we were saying we don't really have a metric for failed deployment pipelines. What I was showing just now is the accumulated deployment failures, which is different: it gives you an idea of the status of every auto-deploy pipeline in between successes, while the information that you are looking for, if you want to get a percentage, is how many packages do not complete, for any reason.
A
So
we're
not
we're
now,
accounting
not
promoted,
but
we
we
want
to
see
everything
so
for
any
reason,
things
are
done,
not
promoted,
plus
something
right.
So.
C
Yeah, that's it. What I'm thinking with this metric, I think, is slightly different from the existing metrics we have in our team. I'm thinking of this one as being much more like our MTTP metric: something we can feed data into and come up with an overall single number that we can plot and put onto the infrastructure KPIs. At that point we basically say: right now, release manager workload is nine, or whatever our number is.
C
Whatever
the
scale
is
and
from
there
we
can
set
targets,
and
we
can
also
have
a
kind
of
break
point
where
we
go
release
manager
workload
is
unacceptably
high.
It's
this
month,
it's
been
15
the
month
before
it
was
18
and
from
there
we
would
say
we
are
now
going
to
take
this
action.
So,
for
example,
we
could
say
in
the
next
quarter.
We
are
going
to
only
take
okrs
that
reduce
that
workload
as
we
dive
into
that
workload.
C
That's
why
we
kind
of
need
to
know
what
feeds
into
this
number,
so
we
can
say
actually
the
reason
it's
so
much
higher
is
because
deployments
have
been
breaking
10
times
more
like
more
frequently
than
they
used
to.
So
what
can
we
do
there
or
we've
added
20
manual
steps
to
the
security
release
process?
So
what
can
we
do
there,
but
we'll
we'll
need
to
sort
of
I.
Think
that's!
C
What's
super
hard
about
deployment
SLA
right
now,
if
you
try
to
use
that
number
to
plan
something
it's
very
hard
to
figure
out
what
you
should
plan,
because
it's
really
hard
to
work
out
what
fed
into
that
number
and
what
caused
the
change.
So
those
are
kind
of
the
bits
that
make
this
a
little
tricky.
I.
Think.
C
Not
taking
action
I
think
we
have,
but
not
for
a
while,
but
I
think
like,
for
example,
if
you
have
something
like
slack
outage,
or
something
like
that
right
now
that
will
block
us
but
you're
not
necessarily
needing
to
do
something.
E
Maybe
we
can
use
like
time
lost
in
failures
as
a
kind
of
proxy
metric
for.
A
Talking about iteration, I'm thinking that instead of focusing directly on getting the right number, we probably want a pool of things that are easier to understand, even though they are not easy to present at, say, higher levels. "That's the number, the workload is 10" is great if you can get the number right, but because we don't have a number at all, the only numbers we have are those ones we're looking at.
A
If
we
can
sort
something
like
okay,
we
don't
have
idea
of
the
overall
numbers,
but
we
can
say
time
lost
on
doing
things
time.
It
takes
to
run
releases
of
any
type
I'm
talking
about
patches
and
see.
Here
is
here
because
the
the
monthly
release
takes
virtually
takes
no
time
and
then
a
number
of
retries
wasted
packages,
all
those
things
right
and
we
started
looking
at
those
numbers,
and
then
we
do
the
data
mining
right
as
a
human.
Let's
say:
how
do
we
fill
this
week?
A
So
this
is
what
happens
and
there's
and
we
can
see
by
all
the
new
metrics
as
well,
and
how
does
it
feel
and
then
it
may
become
easier
to
us
to
give
a
weight
to
those
numbers
and
to
to
say
I,
don't
know,
maybe
the
number
of
manual
job
and
retries
takes
for
fifty
percent
of
the
the
the
value
or
the
final
value
right,
so
that
that
metric
has
more
more
meaningful
because
it's
spread
across
every
workday
and
then
you
must
say
number
of
releases
well.
A
This
doesn't
really
count
that
much,
but
the
the
time
it
takes
to
complete
the
releases
all
together
has
an
important
value
compared
to
the
number
of
days
we
have
in
a
month
as
an
example
right,
so
you
can
say
well,
if
it
takes
couple
of
the
extra
is
not
that
much.
Well,
maybe
true,
but
then,
if
you
are
spending,
no
no
12
days
amount
just
doing
manual
work
for
releases,
that's
a
huge
implication
right,
so
it
should
bump
up
the
number
as
well.
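The weighted combination proposed here could look like the sketch below; the input names, weights, and 0-10 normalisation are all invented for illustration (only the "fifty percent" weight echoes the discussion):

```python
# Illustrative sketch of a single workload number built from several
# easier-to-understand inputs. Weights are assumptions, not agreed values.

WEIGHTS = {
    "manual_jobs_and_retries": 0.5,   # e.g. fifty percent of the final value
    "days_spent_on_releases": 0.3,
    "wasted_packages": 0.2,
}

def workload_score(inputs, weights=WEIGHTS):
    # Each input is assumed pre-normalised to a 0-10 scale before weighting.
    return sum(weights[name] * value for name, value in inputs.items())

score = workload_score({
    "manual_jobs_and_retries": 8,
    "days_spent_on_releases": 10,   # e.g. 12 working days lost in a month
    "wasted_packages": 5,
})
# 0.5*8 + 0.3*10 + 0.2*5 = 8.0
```

The point of the structure is the one made in the discussion: when the headline number moves, each weighted input can be inspected to explain why.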
B
I have a question. We are building the packages continuously, and if some incident happens and we are not able to deploy those packages, do they kind of pile up? And when the incident is finished, do we apply them subsequently, or do we apply the last version? How does it work?
A
The
last
version
that
I
mean
we
promote
one
version,
that's
the
thing
so
I
think
you
can
see
this
here
again.
So
if
we
look
at
everything,
basically
what
you
care
about
is
the
manual
so
manual
means
this
is
the
number
of
packages
that
are
waiting
to
be
promoted.
A
So
if
it's
one
yeah
it's
one,
but
if
you
look
at
here,
you
have
two
right
so
means
that
you
are
piling
up
at
this
point
in
time.
Release
managers
I'll
say
go
here:
three
release
managers
will
just
promote
the
last
one
unless
there
are
known
reason
to
promote
a
previous
one,
but
usually
they
would
just
deploy
the
more
recent
version.
B
And
what's
happened
with
the
the
versions
that
didn't
promote,
wasn't
promoted.
A
They
show
up
here
has
not
promoted,
so
we
are
kind
of
at
this
point
in
time.
We're
trying
to
keep
this
as
an
information
about
are
we
do
we
have
an
aggressive
schedule
in
terms
of
deployment,
so
are
we
producing
more
packages
than
the
one
where
we
can
actually
deploy?
A
This
is
this
is
what
this
number
represents
today.
I,
don't
think
this
is.
This
will
continue
serving
this
purpose
if
we
try
to
decouple
tagging
from
deployment,
so
this
is
this
is
this
was
just
an
easy
way
to
say,
considering
that
this
is
a
hand-to-end
process.
It
starts
by
tagging
and
completes
by
production
deployment.
A
Not
promoted
is
a
good
indication
of
wasted
packages
or
wasted
effort
if
we,
but
because
of
the
design
of
the
process,
if
we
are
gonna
change
the
design
of
the
process
and
say,
let's
tag
every
hour
regardless
and
then
let's
promote
in
10
times
a
day.
Whatever
is
the
latest
available
package,
then
this
makes
no
sense
anymore,
but
it
you
want
to
check
out
the
things
that
we
were
discussing
before,
like
how
many
coordinated
pipeline
didn't
complete.
A
For
some
reason
that
gives
you
the
the
real
failures
in
that
case
right
because
you
are
decoupling,
the
the
package
creation
so
then
you
may
have
up
may
have
another
Matrix,
which
is
how
often
package
creation
fails,
but
that's
kind
of
a
gift
to
distribution
right.
So
we
we're
counting.
We
keep
them
accountable
in
certain
sense
right,
because
in
that
case,
if
something
fail
in
building,
we
just
get
notified
about
it.
But
as
the
build
itself
is
something
that
we
are
using
as
a
client
right.
So
yeah.
C
I think that sounds quite close to MTTP. Yeah, that's right, and I think this would be in addition: we'd keep MTTP, which would be the overall pipeline duration measure; this one is the release manager workload measure; and then we probably also want something which is an MTTP equivalent for releases, like how healthy releases are, but I don't quite know what that would exactly look like.
C
Cool
all
right
sounds
good.
Well,
let's
see
I
might
catch
up
with
you
separately
and
actually
figure
out
like
how
can
we
start
working
towards
this?
If
anyone
has
any
other
comments
or
wants
to
interject
in
the
in
the
comment
section
please
dive
in
on
that
issue,
but
I
think
this
one
would
be
a
good
one
for
us
to
start
figuring
out
what
we
can
pull
together
so
that
we
can.
What
kind
of
the
key
things
I'd
love
to
be
able
to
say
is
like?
C
Oh,
if
we
just
fixed,
you
know
this
task.
We
would
see
a
decrease
in
release
management,
whether
we
might
have
a
little
bit
of
work
to
do
initially
to
try
and
figure
out
like
is
it
back
ports
or
security
releases
or
deployments
like
I,
think
we
can
make
some
guesses,
but
it
would
be
great
if
we
actually
had
some
data
to
to
show
that
so
yeah
I'll
see
what
we
can
do
to
actually
start
measuring
this
awesome.
Is
there
a.
A
There
any
oh,
there
was
something
that
came
to
my
mind.
It's
still
on
the
same
topic
when
grams
was
thinking
about
understanding
the
status
of
incident,
so
the.
B
A
Closest-
and
this
is
more
a
question
for
Reuben,
so
the
work
you're
doing-
we
are
receiving
web
books
for
jobs
right
now,
but
I
think
there
is
also
another
web
hook
for
issues.
So
if
we
could
also
add
the
Web
book
for
issues
for
the
production
tracker,
we
may
interject
when
the
block
deployments
was,
it
was
added
to
something
when
it
was
removed
and
trying
to
also
count
those
information
real
time.
D
Like
even
just
using
the
blocks
deployment
label
right,
you
could
have
like
a
metric.
That's
one
or
zero,
like
it's
deployments,
blocked
one
for
how
long.
Then
it
goes
back
to
zero
when
that
incident
is
closed
or
whatever,
and
we
might
even
be
able
to
kind
of
see
just
in
one
simple
line
when
we're
deployments
blocked
over
the
course
of
a
week,
maybe
even.
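The one-or-zero gauge suggested here could be derived from label events like this; the event format is a hypothetical stand-in for whatever the issues webhook would deliver:

```python
# Sketch of the "deployments blocked" gauge driven by hypothetical label
# add/remove events. The gauge is 1 between an "add" and the next "remove";
# total blocked time over a week falls out of the same event stream.
from datetime import datetime, timedelta

def blocked_intervals(events):
    """events: time-ordered (timestamp, action), action in {'add', 'remove'}."""
    intervals, start = [], None
    for ts, action in events:
        if action == "add" and start is None:
            start = ts                       # gauge flips 0 -> 1
        elif action == "remove" and start is not None:
            intervals.append((start, ts))    # gauge flips 1 -> 0
            start = None
    return intervals

events = [(datetime(2023, 1, 23, 9, 0), "add"),
          (datetime(2023, 1, 23, 11, 30), "remove"),
          (datetime(2023, 1, 25, 14, 0), "add"),
          (datetime(2023, 1, 25, 15, 0), "remove")]
total = sum((end - start for start, end in blocked_intervals(events)),
            timedelta())
# Two blocked windows in the week: 2.5 hours + 1 hour = 3.5 hours total.
```

Plotting the gauge itself gives the "one simple line" view; summing the intervals gives the weekly blocked-time number.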
B
I have another stupid question. When we have an incident, is it the normal procedure that we stop deployments during the incident? During all incidents, or only some particular incidents?
A
Yeah, and we block promotion to production, actually, not all of the deployments, that's right. Which brings another idea to my mind. Reuben, I'm always relying on you: we should instrument the ChatOps for locking environments, to track when an environment is locked.
D
I think even just to understand whether deploys are backing up because they're all locking on the one environment. It would be great, actually, if the system starts looking at the gitlab-com repo locking as well; we could unify that with environment locking. So, for example, if someone's rolling out a config change to an environment, if that was bound into the locking...
D
So
a
deployment
couldn't
happen
to
that,
and
then
we
could
track
that
metric,
because
I
think
that
would
be
very
interesting
as
well
to
see
how
many
times
like
people
are
getting
held
up
on
doing
change.
Issues
like
we
do
it
there's
a
lot
of
CRS
that
happen
as
well
as
deployments,
and
then
hopefully
we
could
get
more
less
of
the
situation
of
people
pinging
release
managers
to
do
change
issues,
because
actually
that
is
disruptive
like
it's
totally.
D
Fine
people
need
to
do
change
issues,
but
just
the
whole
coordination
of
it
at
the
moment
is
it
can
be
stressful
if
things
are
already
happening
and
stuff.
C
Great
yeah
release
State
group
are
thinking
about
whether
there's
a
feature
they
can
think
of
as
well,
where
someone
could
more
easily
use
the
environments
page
to
see
what's
going
on
and
also
kind
of
request,
a
hold
on
that
environment.
So
yeah
exactly
the
same
thing.
Take
away
that
manual
coordination.
D
Also agree: environments across projects, group-level environments, would also be a game changer for us. That would be fantastic, but...
E
Sorry
I
didn't
get
what
Graeme
was
saying
entirely.
Would
you
mind
writing
it
down
in
the
in
the
agenda.
C
So
I'm
gonna,
move
on
I
want
to
stop
the
recording
and
give
you
a
quick
update
and
I
I
have
to
leave
on
time.
So
I'm
going
to
go
ahead
and
move
to
the
stopped
recording.