From YouTube: 2022-04-20 Delivery team weekly APAC/EMEA
B
C
Okay, let's get started. We're at time, so we have a few announcements; they're in read-only unless anyone wants to go through any more details.
C
No? Okay, great. So I had just a brief discussion point, whilst I have a few of you here, on this year's strategy MR proposal.
C
One of the items I have proposed is to essentially reduce effort for release managers. I know that the last few months of release management have been harder than they previously were. There are many different reasons for that, but one of which is that we just have more stuff that we have to manually deal with now. As part of having this as a strategy item, an OKR, it would be great to be able to measure that we are making some improvements here.
C
B
So when you're talking about effort, Amy, are you talking about manual chores, or things that are, I don't know, outside of the regular schedule? What are you thinking about?
C
My guess is there's probably a few ways we could do it. My guess is that some of the pain we see in release management has always been there, and that now that we're doing more frequent deployments and deployments are faster, we see these things a lot more. So it could be that we focus on, say, I don't know, the top three most painful things in a deployment, or the three we see most frequently, or, you know, some way of actually tracking the things that we have to retry.
D
So things like having to retry italy when it fails, or Ansible when it fails to get facts, that comes to mind. But that's not the most painful, because when something like that fails you can see it; it shows up in announcements. More painful is when something fails a second time, because then it doesn't show up.
D
So, like, yeah, multiple failures in the same pipeline, or?
D
I think it's like when you're tagging a release, and you know you have to wait for the command and the pipeline to complete, but that's like an irregular pipeline. We don't have notifications for it, stuff like that.
C
One, so data would be the best way of measuring progress on the state of things right now and how things are improving. We know we've consistently found that quite difficult; this data comes in from different places. But what are people's general thoughts around having more of a, like, sort of release manager assessment? I mean, it's very, very individual, but I mean.
C
Could
we
use
something
like
at
the
end
of
each
week?
The
release
managers
kind
of
give
a
a
rough
score
of
the
state
of
the
week
and
highlight
particular
pain
points,
and
we
track
that
like
super
tough,
because
it's
not
based
on
maths,
it's
based
on
kind
of
gut
feel,
which
means
it'll,
be
easy
to
kind
of
bias
up.
D
B
Support group for release managers, yeah. I was going to try and suggest something that we can actually measure. I'm supportive of this effort. Before we go into the specifics: there might be things that don't really surface from what we can get out of numbers, and it's important to also keep track of those. That being said, those might be harder to prove when we want to say, now,
B
we want to show an improvement. So I was thinking about numbers that we can collect and show what changed. Something that came to my mind is doing some API scraping on our pipelines, and we can check things like the number of jobs that got retried. This is a number we can show, and it tells us something.
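A minimal sketch of the kind of API scraping described here, not the team's actual tooling: it lists a pipeline's jobs via the GitLab REST API with the include_retried parameter and counts job names that appear more than once. The instance URL, project and pipeline IDs, and token are placeholders.

```python
# Count how many jobs in one pipeline were retried, assuming the
# `include_retried` parameter on the pipeline-jobs endpoint.
from collections import Counter
import requests

def count_retried_jobs(base_url, project_id, pipeline_id, token):
    """Return {job_name: extra_attempts} for jobs that ran more than once."""
    url = f"{base_url}/api/v4/projects/{project_id}/pipelines/{pipeline_id}/jobs"
    attempts = Counter()
    page = 1
    while True:
        resp = requests.get(
            url,
            headers={"PRIVATE-TOKEN": token},
            params={"include_retried": "true", "per_page": 100, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        jobs = resp.json()
        if not jobs:
            break
        for job in jobs:
            attempts[job["name"]] += 1
        page += 1
    # Any job name seen more than once means the extra runs were retries.
    return {name: n - 1 for name, n in attempts.items() if n > 1}
```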
B
Another thing that I would like to play with is checking the deployment statistics on the release metadata project, because on that project we are tracking deployments per environment, and when something fails we record the failures. So we can have a very high-level review of how bad things were during a week by counting the amount of time between a failed deployment and the next successful one.
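As a rough illustration of that measurement (the record shape here is assumed, not the real release-metadata format): for each environment, sum the gap between a failed deployment and the next successful one.

```python
# Total "blocked" time per environment: the gap between a failed
# deployment and the next successful one, under an assumed record shape.
from collections import defaultdict
from datetime import datetime, timedelta

def blocked_time_per_environment(deployments):
    """deployments: iterable of dicts with 'environment', 'status'
    ('success' or 'failed') and an ISO-8601 'finished_at' timestamp,
    e.g. '2022-04-20T06:00:00+00:00'. Returns {environment: timedelta}."""
    by_env = defaultdict(list)
    for d in deployments:
        by_env[d["environment"]].append(d)

    blocked = defaultdict(timedelta)
    for env, records in by_env.items():
        parsed = sorted(
            (datetime.fromisoformat(d["finished_at"]), d["status"])
            for d in records
        )
        failed_since = None
        for finished, status in parsed:
            if status == "failed" and failed_since is None:
                failed_since = finished          # start of a blocked window
            elif status == "success" and failed_since is not None:
                blocked[env] += finished - failed_since
                failed_since = None              # window closed by a success
    return dict(blocked)
```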
B
Another thing could be just counting the number of deployment-related incidents declared, because even if it's a quick one to resolve, it still takes effort from release managers: they have to figure out that there was an incident or something blocking a deployment, make the painful decision of engaging in incident declaration, and everything that stems from it. So these all look like good numbers that I would like to bring down as an improvement to what it means to be a release manager.
E
So, like today, there were two times where the issues we have with the k8s-workloads repo happened, two different issues that we know about, and I was like, oh yeah, I see that, I know how to fix that, I'll go and fix that myself. Should I be opening an incident for them, even though it's like a five-minute, like two-minute thing?
E
I really think if we could somehow get Prometheus metrics about our jobs and pipelines, either from GitLab itself, or we have to, you know, expand the delivery exporter or something, that would be really useful. Being able to use metrics to quantify it, I think you already mentioned it,
E
Alessio, like job retries in general, over time, and stuff like that. It's really just getting more metrics out of our CI stuff so we can visualize it a bit better, because at the moment, for k8s-workloads for example, you've got to go to that pipeline in ops, into the pipeline view, and scroll down through the history, and you can kind of get it, but it's really hard to visualize.
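For what expanding an exporter could look like, here is a generic prometheus_client sketch (the metric name and labels are hypothetical; this is not the existing delivery exporter code): a counter of retried CI jobs that Prometheus can scrape and graph over time.

```python
# Hypothetical counter of CI job retries exposed for Prometheus to scrape.
from prometheus_client import Counter, start_http_server
import time

JOB_RETRIES = Counter(
    "ci_job_retries_total",                     # assumed metric name
    "CI job retries observed by the poller",
    ["project", "job_name"],
)

def record_retry(project: str, job_name: str) -> None:
    # Called by whatever polls the GitLab API (for example, the scraping
    # sketch earlier) each time it sees an extra attempt of a job.
    JOB_RETRIES.labels(project=project, job_name=job_name).inc()

if __name__ == "__main__":
    start_http_server(9100)   # expose /metrics for Prometheus to scrape
    while True:
        time.sleep(60)        # the polling loop would go here
```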
E
Just over the last 24 hours, how many jobs we've actually retried. It'd be, you know, the number of jobs that were in a... I don't know how to put it in metric terms, but just the amount of jobs that are retried, because yeah, I just don't have good enough visibility into that. I know it's a lot, but it'd be good to quantify that.
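With a counter like the one sketched above, the 24-hour question becomes a query such as `increase(ci_job_retries_total[24h])` in Prometheus (again assuming that hypothetical metric, not something GitLab exposes today).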
C
So should we wrap that into our plans as well then? Right, so it sounds like we roughly agree that we want to know how long we're blocked for and how many retries we are doing. Exactly how we measure that, or whether we proxy that data, we can figure out. But maybe we start the quarter by saying what the number is for these two things, and then we aim to reduce them.
C
Awesome. To answer your question, Graham, it sort of depends. Yes, from a bookkeeping point of view it is really helpful to have an incident for these things, because it does give us a way of measuring it. If it's a five-minute thing, the overhead of the incident is probably greater, so I'll leave it to be your judgment call. But I think if it's something that we can't exclusively fix ourselves, it's really helpful.
C
Where it sort of came about from is opening an incident as a kind of first thing, like, say, QA tests failing; it really makes it much easier for us to work with Quality. So if it is something where we are actually going to need to put up a project about it with Reliability in the future, or do something a bit bigger, then it is helpful to at least have some incidents that we can point to for that.
E
One of the big ones I don't do at the moment is when ops.gitlab.net goes down for its daily upgrade. That happens during my shift, and I guess before I came on no one was doing releases in APAC, so it didn't matter. It breaks the pipeline pretty much every day. You want to just...
C
Because we already kind of know the solution for that: if it weren't a single-node ops instance, it wouldn't have to go down, right? We could do a zero-downtime deployment. We just haven't done the work on ops to make it so it never goes down. It doesn't hurt anyone apart from us, so no one's prioritized it.
E
And it's not just like, oh, it went down, the job just fails and you have to retry it. Everything goes weird; everything gets marked as skipped. I've got to go back and retry random jobs to poke it enough for the directed acyclic graph to, you know... the jobs just go into a bad state. I can't even click them manually. Everything in the pipeline gets marked as skipped.
E
So it's like the whole coordinated pipeline is lost, and then I have to go back and run the first prepare job again, and then eventually it somehow unwedges itself. It's not a trivial thing, and yeah, I probably should open... I'll open an issue about that.
B
E
B
C
Now, I admit I haven't updated any of the deployment blockers in the last few weeks or so, but Ruben, as release manager, is there anything that we need to take action on that you know of at the moment for MTTP?
D
B
I do have a question about something from the previous weeks, which is the problem with the pipeline that generates assets, when we had this incident. So is my understanding correct that nothing is happening on that? We are just betting that the assets are okay?
E
B
C
B
E
Just also on the discussion of mean time to production: at least in APAC, I've lost a few days due to the CI database provisioning, the ongoing work, because they're doing a lot of very big, very long ones.
C
One thing I want to start thinking about is, as we increase the number of deployments and as reliability grows, we're going to reach a point where the overlap of deployments and infra changes becomes a real contentious point. So I want to actually start trying to map that out.
C
So we can figure it out. I think what's going to need to happen is we'll need to work with Reliability and actually assess what's safe to change in parallel and what's not. So for anything like that, if you are blocked on any CRs, even if they're scheduled, please put the deploy-block labels on, and then we can point to it and say, you know, this one blocked us for 10 hours, or however long we end up with.
A
C
Okay, not really. In some cases I can show you some of the dashboards; some of it goes into Sisense, but no, not as easily as we should be able to.
C
D
C
Awesome, great, yeah. Let's take a look through the issue and figure out a plan for making the change and making that known about.