From YouTube: 2022-09-27 Delivery team weekly EMEA/AMER
B
So we've got a bunch of stuff in the announcements section, so I'll let you all read that in your own time. On to discussion: Jenny, you have the first discussion item.

A
If I know how to unmute myself, yeah, hi. So we were talking about this in G delivery last week, about the October 10th release lacking AMER release manager coverage. There were a few discussions going back and forth, so I decided to bring it up here. And, as I said in the announcements, thank you so much for covering for the few days that I'll be out.

B
Awesome, thank you. That sounds good. So I just wanted to give a little context and take any questions. I put it in discussion for people who have questions or want to discuss further, but just to give you a little more context.

B
That's come up a few times in our discussions as we've looked at pipelines and figured out what the right order is, or where we put jobs, and things like that. And as we start talking about health checks, I think we all have opinions around how much time is being spent on tests, whether they're in the right place, and whether they fail enough. So, just to give you some awareness, Quality are looking into this.

B
Interestingly as well, what they've decided, as a kind of first iteration for tracking things, is that they're going to make use of our deployment blocker data. So, as a super early version of this process, which we'll get figured out in the next few weeks, I'm going to just give Vincy a ping on the deployment blockers once we've got our week's review on there, and then what Vince is going to do is figure out a process, which will most likely be the quality on-call or somebody named from Quality.

B
Each
week,
essentially,
will
do
a
bit
more
of
a
deep
dive
into
quality
related
failures.
I
expect
they're
interested
in
a
few
things,
so
I
know
they're
interested
in
like
things
like
that.
We
class
as
flaky
test
but,
like
you
know
anything
like
the
API
cookie
or
the
sorry,
the
canary
cookie
problem
you
had
on
the
API
the
other
week,
Myra
like
that
sort
of
stuff,
they're,
certainly
interested
in
a
kind
of
like
other
tests
working
in
the
way
they
expect,
but
they're
also
really
interested
in
failures.
B
And what they're trying to figure out, I guess, is: where do you put the tests in order to catch the test failures in the right place rather than the wrong place? So they're going to be really interested, I think, to see when we have tests failing on, say, production canary. That's a super late stage; catching it in canary is probably good, but it means something did get merged.

B
So anyway, Quality is starting to take a look through our data and dig through it, and they're going to try to use this to make some decisions about which tests are running, where they run, and how many times an MR goes through the same set of tests. I'm not sure what we'll see coming out of that, but hopefully we'll start to see tests being changed, and hopefully reduced a bit as well.

C
Something we can add there is that they are actually also interested in a lot of things that we already wanted to touch on as part of reducing noise in the management metrics. So we spoke about retries, manual retries; that is something we've spoken about for a long time. It will be interesting to start to count how many there are, how many manual retries, and also at which stage in the pipeline it's happening.

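To illustrate the kind of counting described here, a minimal sketch follows, assuming the GitLab pipeline jobs API with `include_retried`. The project ID, the token environment variable, and the heuristic of treating repeated (stage, job name) pairs within one pipeline as retries are assumptions for illustration, not an agreed process.

```python
# Sketch: count retries per pipeline stage across a recent window of pipelines.
from collections import Counter
import os
import requests

GITLAB_URL = "https://gitlab.com/api/v4"
PROJECT_ID = "12345678"  # hypothetical project ID
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}


def jobs_for_pipeline(pipeline_id):
    """Fetch all jobs for a pipeline, including retried runs of the same job."""
    url = f"{GITLAB_URL}/projects/{PROJECT_ID}/pipelines/{pipeline_id}/jobs"
    resp = requests.get(url, headers=HEADERS,
                        params={"include_retried": "true", "per_page": 100})
    resp.raise_for_status()
    return resp.json()


def retries_per_stage(pipeline_ids):
    """Count extra runs of the same job name within a pipeline, grouped by stage."""
    retries = Counter()
    for pid in pipeline_ids:
        runs = Counter((job["stage"], job["name"]) for job in jobs_for_pipeline(pid))
        for (stage, _name), count in runs.items():
            if count > 1:
                retries[stage] += count - 1  # first run is not a retry
    return retries


if __name__ == "__main__":
    # Look at a recent window of pipelines for the project.
    resp = requests.get(f"{GITLAB_URL}/projects/{PROJECT_ID}/pipelines",
                        headers=HEADERS, params={"per_page": 50})
    resp.raise_for_status()
    for stage, count in retries_per_stage([p["id"] for p in resp.json()]).most_common():
        print(f"{stage}: {count} retries")
```
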
C
And in particular to highlight the quality ones, right, because at least in my release management shift I had a few times where I had to retry QA jobs that actually passed after that. On one side I was extremely happy; on the other side...

C
I was a bit too busy to understand why it passed the second time or not. But it would be great if we start to highlight, okay, you know, out of the last 50 runs there were 20 times we had to retry, right? Then at that point at least we already have some data points that can point them in the right direction, to look at what happened in these cases and so on, because I guess our job here would mainly be to inform them and give them the tools to investigate better.

C
It
will
be
also
nice
to
understand
if
we
have
a
Baseline
that
we
had
to
use
later
on
to
say:
okay,
this
is
increasing
in
a
pretty
worrying
way,
is
decreasing
or
how
is
there
an
innovation
of
more
tests
and
less
failure,
so
this
kind
of
things
so
could
be
that
sometimes
you
know
we
we
run
tests
that
are
never
passing.
Never
failing.
Are
these
never
failing,
because
they're
actually
testing
the
right
things
or
not,
I
mean
in
the
past.
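As a rough illustration of such a baseline, here is a minimal sketch over already-exported job records. The record shape (week, name, status, retried) and the weekly grouping are assumptions for illustration, not an existing export or report.

```python
# Sketch: weekly retry-rate baseline plus a list of jobs that never failed in the window.
from collections import defaultdict


def weekly_retry_rate(job_records):
    """Fraction of job runs per week that were retries."""
    totals, retries = defaultdict(int), defaultdict(int)
    for job in job_records:
        totals[job["week"]] += 1
        if job["retried"]:
            retries[job["week"]] += 1
    return {week: retries[week] / totals[week] for week in totals}


def never_failing_jobs(job_records):
    """Job names that never failed in the window: candidates to review for value."""
    failed = {j["name"] for j in job_records if j["status"] == "failed"}
    return {j["name"] for j in job_records} - failed


# Example: 20 retries out of 50 runs in one week gives a 0.4 baseline for that week.
sample = (
    [{"week": "2022-W38", "name": "qa-smoke", "status": "success", "retried": True}] * 20
    + [{"week": "2022-W38", "name": "qa-smoke", "status": "success", "retried": False}] * 30
)
print(weekly_retry_rate(sample))   # {'2022-W38': 0.4}
print(never_failing_jobs(sample))  # {'qa-smoke'}
```
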
B
Yeah, for sure. And that reminds me as well of an interesting additional thing that's motivating this. Obviously we care a lot about deployment duration and manual effort, and we all care about making sure the right tests are running in the right place.

B
But
one
additional
thing
which
was
mentioned,
which
is
actually
not
something
which
we've
actually
ever
really
talked
about,
is
also
the
cost
and
one
thing
that
if
we
have
jobs
that
are
frequently
re-running
or
we
have
the
same
tests
running
on
the
same
Mr
like
a
number
of
times,
that's
actually
quite
expensive.
So
there's
also
an
additional
sort
of
focus
there.
That
quality
will
be
looking
at
about,
like
other,
actually
ways
that
we
could
be
running
like
fewer
tests
without
increasing
risk.
So
certainly
an
interesting
one.
B
I'm
hoping
it
will
lead
to
some
some
kind
of
I
guess,
like
in
efficiency,
improvements
that
we'll
actually
realistically
see.
B
Cool,
so
are
there
any
other
discussion
topics
that
anyone
wants
to
bring
up
today.
B
So we have Jenny and I starting release management on Friday, so we're just super early in our shift. So we'll see how we go on these things.

B
The dashboard is looking super unhealthy, so we've got a few things that came off the back of this that we actually do need to do a bit of a follow-up on. Earlier in the week I think things were reasonably okay; we had some QA failures. Interestingly, we've had quite a lot of database migration problems in the last week, a little cluster of those, so it will be interesting to see if we can actually get someone to investigate a pattern.

B
So
we
certainly
had
some
failures
earlier
in
the
week.
This
one
was
also
migration
to
Friday.
We
just
had
straight
up
incidents
with
with
test
failings,
and
no
no
production
deployments
on
Friday
and
PTL
yesterday
means
that
we
also
looking
pretty
pretty
light
on
things,
so
lots
of
lots
of
packages
not
very
much
promoted
the
one
follow-up
I
do
have
that
I
want
to
take
off.
This
is
around
the
20
seconds.
We
have
our
soft
PCL
in
place.
B
I
am
working
to
figure
out
what
we
do
there.
So
Michael
and
I
were
chatting
about
this
last
week
and
either
we're
going
to
lift
the
PC
out
the
soft
PCL
on
the
22nd
or
we'll
improve
the
documentation
to
make
it
clear
that
we'll
continue
deployments,
but
not
not
do
the
pdms,
and
that
gives
us
rollback
guaranteed.
B
Basically
so
one
or
the
other
there's
certainly
good
value
to
us
doing
deployments
we're
still
reviewing
whether
we
also
want
all
the
feature
flags
and
all
of
the
change
requests
to
take
place
on
on
the
22nd
like
that
feels
like
it
could
be
too
much
but
but
being
reviewed.
So
hopefully
that
will
improve
things
a
bit,
but
certainly
last
few
days
we
have
not
had
very
many
deployments.
So
that's
looking
a
little
unhealthy,
so
I
will.
B
I
realize
now
that
the
the
experts
amongst
you
open
these
tabs
in
advance
I'm
learning,
I'm
learning
on
the
job.
B
Okay,
so
lead
time
for
changes
is
unsurprisingly
very
high,
and
that
is
because
we
didn't
do
very
many
deployments
on
Friday.
We
did
not
have
nursery
on
Thursday,
we
did
not
on
Friday
and
we
did
nothing
yesterday
and
the
first
one
went
through
today
around
my
midday.
So
unsurprisingly,
lead
time
is
very
high,
but
we
have
got
the
Data
Tracking.
B
That's
that's
okay,
deployment
frequency
is
hello,
and
surprisingly,
also,
though
we
can
see
here
that
it
is
the
PDM
that's
affecting
these
metrics,
so
I
know,
we've
talked
a
few
times
over
the
last
few
weeks
about.
Are
we
double
counting?
So
we
know
we
don't
ever
do
10
deployments.
We've
talked
about
whether
canaries
involved
there
I
mean
it
certainly
could
be,
but
on
the
23rd,
the
only
thing
we
did
was
the
PDM
and
actually,
oh,
that's
not
true.
So
we
did.
We
did
one
deployment
and
it
was
a
PDM
deployment.
B
So
I'll
open
an
issue
to
adjust
that,
because
that
will
be
incorrect
on
our
data.
So
we
should
only
track
the
package
deployments,
but
anyway
I'm
surprising
there
that
deployment
frequency
is
lower
than
in
recent
days,
but
we
we
should.
We
know
that
one
as
well
and
then
I
would
just
quickly
show
the
blockers,
the
metrics.
So
after
quite
a
few,
it's
been
really
really
spiky
recently,
which
will
be
an
interesting
one
to
dig
in
so
early
in
September.
We
we
lost
69
hours,
I
think
there
were
some
pcls
there.
B
Last
week
we
lost
51
hours,
so
super
super
spiky,
even
though
we
are
saying
we
don't
usually
go
above
six
things
in
a
in
a
week,
but
certainly
the
numbers
are
way
higher
across
staging
and
production
than
recently.
B
So
I
believe
that
is
all
the
metrics
and
I'm
going
to
add
in
a
thank
you
to
Reuben
thanks
very
much
Reuben
for
fixing
that
merch
train
error
I,
don't
believe
we
have
a
run
book
so
I
have
no
idea
how
to
fix
these
things.
So
I
appreciate
you
jumping
in.
B
I'll
I'll
put
one
together,
I'll
drop
one
up
and
send
it
your
way
for
a
review.
So
we
can
recover
that
stuff,
so
yeah
awesome
cool.
Are
there
any
other
things
I
should
have
shown
in
the
metrics
or
anything
anyone
got
any
questions.
B
That is a really good point, yeah. Actually, I'll have a look through it and highlight that, because I noticed... I don't know if any of you have read through the development off-site notes?

B
I
share,
I'll
post
a
link
afterwards,
if
you
haven't,
but
one
thing
that
was
interesting
in
amongst
all
of
the
notes
was
a
lot
of
developers
have
been
talking
about
master
being
broken
and
they
specifically
highlighted
certain
dates
where
Master
breaks
or
or
where
it's
broken,
and
then
it's
suddenly
available.
So
they
see
a
pattern
which
I
think
we
probably
see
a
pattern
like
before
or
after
that
goes
along
the
lines
like
everyone
works
on
the
same
Cadence
right
around
the
Milestone.
So
yeah
you,
that's
a
really
good
course
go
back.