From YouTube: 2023-04-03 Delivery team weekly EMEA/AMER
A
B
Happy Monday everybody, welcome back Amy. So I think we're at a bit of a critical mass. Some people will probably join very quickly, but I think we can start at least with the announcements. So this is the third of April 2023, one month to the end of the first quarter of the year, so we have some announcements: most of the people being off for Easter, and some lucky ones also having the Friday off because of public holidays in some countries. So I'll let you go through those ones.
B
I think discussion topics, Amy, the...
C
First, resourcing: before we jump onto those ones, though, do we have release management coverage? Are we running releases and deployments as normal on the 10th?
B
D
B
F
Off for the public holidays, like Friday and Monday. I will be off because...
C
You know, let's get a plan for those, because I don't think we usually PCR those dates, because it's not public holidays for everybody.
C
Well, so yes, so 16.0 is a project, so I wanted to just share a few things, because there are a bunch of things in progress already, but maybe not everybody has seen all of these. So thank you to all of those who have already done some work to help us prepare for this stuff.
C
Just some links there. There is a coordination epic, and you can see, linked off that epic, Sam has a bunch of issues open where his planning kind of goes out throughout the company and with users as well, so you can see those. Reuben and Steve are doing a super job of preparing the release-prep "what to expect" issue. Once we've got that to where we're happy with it,
C
I suggest we can just begin sharing it; there's probably no too-early moment to begin letting the stage groups know that, particularly this time around, we will be cutting quite early. And then there's also the additional piece, which is that we potentially need to do some prep to be ready for these breaking changes coming in. So thank you to all of those who've reviewed some of those so far. What I'm going to do is turn that into a proper project,
C
so we can actually then plan what needs to happen. At the moment it looks like possibly most — maybe I'll go with most — are things that reliability will be more involved with, but we do try to make sure that we don't accidentally miss something which will cause us problems. I'll say pre or release, or those sorts of prep environments. So I'll see what I can pull together there, and we may potentially need to schedule some work around that stuff from next week.
E
I think Sam was investigating how to deploy all the breaking changes throughout this period. Was there any conclusion to that? Are they going to be deployed under a feature flag, or are they going to be deployed at any moment?
C
Ideally, ideally under a feature flag, but not all — where possible, they will be. The stage groups will probably figure that out case by case. From our side there's nothing special we need to do; our kind of "what to expect" issue covers everything, really, so stage groups will coordinate from there.
C
Awesome. Okay, so...
C
I wanted to share — I know some of you have seen some of these bits already — about release manager metrics. I wanted to share with you what I have so far; it's only a work in progress. Let me share... let me show the patch releases, because it's a nice clean one, and actually it's also in a fairly complete state. So I'll show you what I have, and then I'll tell you a little bit about what I'm thinking for the next iterations. So, actually, as it says, I decided to go by release.
C
I was hoping to be able to do these things by milestone, but actually it's quite difficult, because a security release spans multiple milestones and multiple weeks. So just for ease, and to try and give a little bit of consistency, I'm just going to look at these on a release-by-release basis, and then we can work out how we put those together to give a kind of milestone view. So: patch releases are looking pretty good.
C
This is all the data we have available for patch releases, because this only goes back to the beginning of Slack time, but still, I was pretty pleased to see that for the last five releases it's been relatively stable. What I've tried to do here — because I don't want to assume what the interesting data will be until we start to see the patterns — is to cover it from different angles.
C
So I know Alessio mentioned last week that patch releases might be interesting in terms of just the number of days we spend working on them. It's actually really consistent: it's usually around two days. We had one that was done in one day —
C
Ahmad went through it like a rocket and got that done, so nice work — but we've also had one that was three days. So two days seems to be about the amount of time. One of the big variances we see really just depends on how the day goes: whether the really long-running tasks end up running overnight, so that we're not actually sitting watching them, versus if you're actually waiting for packages to build and it's the middle of your day — of course, they still take hours to build.
C
So that's one of the big variances. I've captured the number of manual steps, because I think this will be an interesting one to see more in the security releases. For us, if something is going to take multiple days, it would be amazing if it was quite a hands-off multiple-days process, so we don't have to keep doing loads of little tasks. The variance on patch releases comes down to when we have to actually modify the plan to handle specific cases.
C
So you see it varying just a little bit. It's, I guess, a good indicator as well of the slightly invisible, behind-the-scenes work of release managers having to investigate an issue, make a plan, update an issue and kind of respond to it. So I think that's an interesting one to note.
C
So far this year, all of our patch releases have been within our current maintenance policy. Yeah, that's probably an indication that we're putting more non-security fixes into security releases.
C
So that's a good thing. And I've got the number of changes included in a release; I was curious to see whether the size of a release adds complexity. I suspect it doesn't, for reasons I'll show you in a second, but anyway, it's useful to have it tracked so we can get a rough average size for a patch release. And then, on to the interesting bits: I have tried to capture the kind of beginning-to-end working time of a patch release in the wall time of the process.
C
This is, I think, relevant not because you're necessarily working on it all that time, but because this is the amount of time you'll have it on your mind as a release manager. You will know this thing exists, so I know it adds to the weight, even if it's not necessarily that you spend, say, 28 hours pressing buttons.
C
How does that compare to the actual active time? What I've tried to do — I've left some more notes here so you can dig through — but what I've tried to do is actually use the activity log of the issues, so you can see when things start and end. It's coming up quite consistent: at the moment it looks like a patch release takes us about 10 hours of active work. Often that will be spread across different release managers and different shifts, but from looking through Slack, that looks like a reasonable estimate.
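
As an illustration of that activity-log approach, here is a minimal sketch in Python, assuming the python-gitlab library, a hypothetical project path, and a hypothetical one-hour idle threshold; the analysis described in the meeting was gathered manually into a spreadsheet, not with this code.

```python
# Sketch: estimate active work on a release issue from its activity log.
# Assumptions (not from the meeting): python-gitlab, a hypothetical project
# path, and a one-hour gap threshold separating "active" from "idle" time.
from datetime import datetime, timedelta

import gitlab  # pip install python-gitlab

GAP = timedelta(hours=1)  # longer gaps between events count as idle, not work

gl = gitlab.Gitlab("https://gitlab.com", private_token="...")
project = gl.projects.get("gitlab-org/release/tasks")  # hypothetical path

def active_hours(issue_iid: int) -> float:
    """Sum the time between consecutive issue events that are close together."""
    issue = project.issues.get(issue_iid)
    # Comments, label changes and status updates all carry created_at stamps.
    stamps = sorted(
        datetime.fromisoformat(note.created_at.replace("Z", "+00:00"))
        for note in issue.notes.list(all=True)
    )
    active = timedelta()
    for prev, cur in zip(stamps, stamps[1:]):
        if cur - prev <= GAP:  # events close together: someone was working
            active += cur - prev
    return active.total_seconds() / 3600

print(f"~{active_hours(123):.1f}h active")  # 123 is a hypothetical issue iid
```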
C
I think that we are actively working on a patch release for 10 hours. So patch releases are looking really nice, in that they're very stable — that looks nice and consistent. In these five patch releases nothing has changed: this is the same process, the same template. So the next patch release will be very interesting; we should be able to see some differences coming in from the pilot patch release process.
A
I have a couple of questions there — questions, remarks, random ideas. So, first of all, it would be nice to see the effects on MTTP, or on the number of deployments, of running a patch release. So what does it mean that 15.7.3 ran in 9.3, 9.5 hours of time, compared to the regular workflow of a release manager, compared to something?
C
Right, yes, so let me get to this. Right — so actually, you're right. This is where the piecing together, I think, will be interesting. I made an estimate — I haven't done anything with this — I've made an estimate that within a typical week we have 82 hours of release management available.
C
This is assuming we have half of Graham's time; on a week like this, we don't, right? So what I'm going to be really curious to see — I don't know how easy it will be to calculate this — but what will be really interesting is to see how close to 82 we get in a typical week. Not every week is the same: say, for example, a week where we spend three days doing a security release, and then we do a patch, plus you've got all of this stuff on deployments.
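
As a back-of-envelope check, the figures mentioned in this meeting already give a feel for such a week; a small sketch (pairing the 50-hour security release with a 10-hour patch in a single week is an illustration, not a real week):

```python
# Rough utilization for the example week described above, using figures from
# the meeting: ~82h weekly capacity, ~50h active work on the painful
# late-January security release, ~10h active work for a typical patch release.
WEEKLY_CAPACITY_H = 82
SECURITY_RELEASE_H = 50
PATCH_RELEASE_H = 10

used = SECURITY_RELEASE_H + PATCH_RELEASE_H
print(f"{used}h of {WEEKLY_CAPACITY_H}h "
      f"-> {used / WEEKLY_CAPACITY_H:.0%} of capacity, before any deployment work")
```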
C
A
D
A
Bit... and then the other thing that I wanted to mention, or even ask Ahmad if he remembers, is whether it's possible that 15.7.3 was one of those lucky patch releases where you just run the merge phase and everything works out of the box. Because from the timing it looks like everything was working and there was no merge conflict; everything was ready and boom, you get the package out, and that's why it takes that amount of time. I think it was the case, yeah.
C
These are... okay, so the bad news on this one. So again, I suspected that the number of vulnerabilities we deal with in a patch release would be visible in the data, like in the amount of time we spent; now I'm pretty sure that's really not the case, so that was quite interesting. I don't want to make you all feel even worse, but this patch release that we did at the end of January was super painful, and we actually ended up with the same format as I've used for the patch releases.
C
Just with a few more sections: so, security release issue — I've tried to use the issues to break things up. We ended up spending nearly 50 hours actively working on this release, and there were only five fixes. Okay, that was a really bad return for our work. So this one was super interesting. This is the one with an exception, which I haven't managed to get on any of the others again.
C
We had to reschedule the security release because of that, so there was a couple of hours of admin. And we had a bunch of QA tests failing on the release environment, which also blocked us for a lot of hours — like, it didn't look like we were actively working on that. So security releases are super hands-on and time intensive, and a couple of really interesting things, I think, jumped out. Actually, I should —
C
just note: it's been quite manual gathering this data, but it's fascinating, because when we're in the midst of a release, it's amazing how many little things we just go "oh, that doesn't work — well, we haven't got time, let's just move on" about; and actually, going back a little bit later and looking through it, it's quite interesting to see the patterns.
C
One thing to note is that the number of manual steps on the security release is massively increasing. We had 57-ish — my counting might be off — we were also at 57 in January; there are 85 currently. So we can do ourselves some favors by cutting that down.
C
This doesn't quite work out, because I've made a bit of an assumption on the first steps, but anyway: we largely spend some hours on these kinds of first steps — little notification tasks, like telling this team that we're getting going on something, and updating little bits and pieces, little general admin. And the same on the kind of post-publishing: we spend at least an hour, often two, on just doing the version creations and syncing and lots of little admin tasks.
C
So it feels like there should be a lot of easy wins to cut that down, but they are time consuming — very, very time consuming.
C
What I would do — it's been very much iterative at the moment, because as I go through the different types of releases in particular, I'll try and standardize these as much as I can. So if I get interesting data on one type of release, I'll see if I can match it up onto a different one, so we can compare things, and then I'll also have a think. If anyone has any great ideas — it would be so much better if this wasn't hidden in a spreadsheet.
C
I think worst case I'll create an epic similar to the deployment blockers one, so we can start to have this stuff living in issues and summarized in there, so that we can actually find it again. But if anyone has nicer ideas, then let me know, because this will be useful for us, of course, but it's also going to be useful for helping people outside of Delivery understand why we're quite busy.
B
I have a question here — I mean, the question is just like a thought, and maybe it's the wrong one, but Jenny: the thought is that we also have to understand how much time we're actually losing due to failures. Is it something we could actually cross-reference with this data, to understand if there is any kind of correlation?
D
B
D
Because what we're tracking right now is more so the auto-deploy processes, like retries and failures, right? And yeah, we can definitely cross-reference the...
A
Problem of security releases: because we first do the auto deploy, which is not true for the patches — in theory, development prepares patches after the things have landed on gitlab.com.
B
A
No, for patch releases there's no real active check — "we need to wait for this to be deployed" and things like that — because it's supposed to already have happened. So can you scroll up a little bit, Amy, please? Because you have dates... yeah. So probably we can check those metrics that Jenny worked on by those dates, because every deployment in that date range is a security auto-deploy, and so it affects our ability to actually deploy.
A
There will be gaps — like before the first deployment and after publishing, because it's just a date — and so maybe we released early in the morning and then... but overall, it's a good number.
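
A minimal sketch of that cross-referencing idea, with hypothetical data shapes (in practice the date ranges come from the security release issues and the deployment records from our existing metrics):

```python
# Sketch: bucket deployments by whether they fall inside a security-release
# date range, then compare failure rates. All data below is hypothetical.
from datetime import date

security_windows = [(date(2023, 1, 30), date(2023, 2, 1))]  # hypothetical range

deployments = [  # hypothetical records; real ones come from deployment metrics
    {"date": date(2023, 1, 30), "failed": True},
    {"date": date(2023, 1, 31), "failed": False},
    {"date": date(2023, 2, 6), "failed": False},
]

def in_security_window(d: date) -> bool:
    return any(start <= d <= end for start, end in security_windows)

for label, group in (
    ("security auto-deploy", [d for d in deployments if in_security_window(d["date"])]),
    ("normal auto-deploy", [d for d in deployments if not in_security_window(d["date"])]),
):
    rate = sum(d["failed"] for d in group) / len(group)
    print(f"{label}: {len(group)} deployments, {rate:.0%} failed")
```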
C
Yeah, that might be how I'd start, if we haven't already. Because I guess what we're going to find with these metrics is that there are two angles to improving this stuff: one is that we have easier processes, right — they're easier to run, or faster, so for example we do fewer steps — and the other is that we have fewer failures.
B
Yeah — Jenny, do you mind opening an issue, maybe, to try to look at that outside of the current topic that we are working on right now? Because, you know, we already have only like three or four weeks left for that. But if you can open an issue to start to understand if we can correlate those numbers, I think that would be great.
D
C
So, great. Next steps for this: I'm going to finish adding in last week's security release — patch release, security release — to get the numbers, and then I'm going to try and do deployments, because I have some data for deployments, and deployments I've got on a weekly basis, because that matches up much better with our metrics.
C
So I haven't touched this for a long time. I'm going to go back and take another look at deployments and see what we can do there.
C
Oh, and the other one I'm going to do for you, Myra, as the DRI for the maintenance policy, is also backport requests. Because what I want to be able to get to is that, when we come to deciding next steps for the maintenance policy process, what we can use is the data on how time consuming patch releases are and how time consuming backport requests are, and then try and match that up against the impact we think we'll have on the new process.
C
Awesome, okay. Well, I will keep sharing stuff as I go, but I wanted to make sure people could kind of vaguely see, and also understand a little bit, why there are some random numbers in a spreadsheet. They're certainly evolving; it's definitely a work in progress. If you have questions, feel free to ping me or leave comments on the spreadsheet, and I'll help there as well.
E
I am assuming that you are seeing my screen; please let me know if otherwise. So last week — last week the incidents were not kind to us. We had too many, which can be seen on the graph, because basically we were promoting half of the packages. We had a security issue — a security incident. Then we had the S1 that got a lot of attention, and a lot of minor ones with QA and some other deployments. So the graph is not pretty, but I guess it's what we have.
E
We also have this "auto deploy failed" thing, I think — yeah, yeah — and that is basically because we had the security release, so the packages were out of sync, and by the time the picker tried to run, the commit was not on security, because we also had some merge train failures. So, security... so, yay. Okay, moving on — a little time for changes. I'm not sure why the graph is starting on March 28th; I'm sure last Monday was March 27th, right? But I don't know. Either way, it looks...
E
Okay-ish, assuming this bump that we have over here is about the security release and the S1 incident that was delaying gitlab.com. So there is a little nasty bump here.
E
The deployment frequency... yeah, it doesn't look very... right, here we go. Yeah, 3.1 wasn't our best week, but we managed to still do some deployments either way. 25-plus a week, I think, is a good number, considering all the events. And what am I missing? What am I missing... those other graphs — deployment lead time, perhaps. Oh, we already saw this one. This one... which one am I seeing, deployment lead time? Are these all just three? Okay, cool. Questions, comments, suggestions?
C
Can we get that updated? Because I think, given there was quite a lot of stuff going on last...
E
Week — yeah, I mean, look at that, yeah. We can follow up. Yeah, yeah... I mean, 44 hours — yikes. 14 production blockers — yikes. Yeah.
E
C
Random question: when we have QA failures, what is the actual process? Does that tie into incidents now, or are these still issues that we create, that we work on with our Quality counterparts on the release issues? Yeah.
F
I honestly talk to Quality directly, and they point to the issue that they have, and it seems like they fixed most of the kind of flaky tests. But today I had to rerun some quality tests twice — maybe because this time they didn't fail, but it used to fail a lot; it only worked like from the third time when you deploy. But now it's better, though not ideal.
E
Yeah, so last week was also particular for QA; we had some flaky failures that I reported on this issue. Basically, they failed the first time, but then, after a retry, they work, which is good, but it is also bad, because each retry takes a lot — around 10 minutes. So when those pile up, you can start building hours of deployments being blocked, and then, well, you can see we've got a lot of occurrences throughout.
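
To make the pile-up concrete, a tiny sketch using the ~10-minute figure from above (the retry and pipeline counts here are hypothetical illustrations, not measurements):

```python
# How flaky-QA retries turn into hours of blocked deployments.
RETRY_MINUTES = 10        # each QA retry takes around 10 minutes (from above)
retries_per_pipeline = 2  # hypothetical: tests reran twice before passing
pipelines_per_day = 12    # hypothetical auto-deploy cadence

blocked = RETRY_MINUTES * retries_per_pipeline * pipelines_per_day
print(f"~{blocked} minutes (~{blocked / 60:.0f}h) of blocked deployments per day")
```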
F
One more thing I wanted to add. So currently we have a version that is good now on prod, but there are a couple of new versions — especially the newest one that we have in the pipeline and in staging — and this one is currently actually broken. So there were some MRs that leaked, and it's breaking the...
F
What does it break? A major bug for the new navigation. So I just wanted to say that we should not promote the latest release, and the release that has the revert for this bug is not picked up by the packager yet.
F
I just got... it's... there is a discussion in the release channel; they just directly pinged this. So...
D
F
Also, yeah — currently the post-deploy migrations are running, and it's kind of due for that version. I got pinged by some other team that wanted to have it as soon as possible. So...
B
Thank you. Any other topic to discuss in the recording?