From YouTube: 2023-08-14 Delivery team weekly EMEA/AMER
A
B
Okay, so we are at time, so I'll begin. I'm not sure how many people are joining us as we go, so welcome everyone. This is Monday, the 14th of August, and this is our EMEA/Americas scheduled delivery weekly. We've got quite a lot of people off today and through this week, in fact, and we have a few extra people off today who are returning in the next day or two.
B
So we've got one async announcement, so everyone please take note of that one. On to two: does anyone have any suggestions for things that should be in the changelog?
B
Nope? Okay, great. On two, please also do keep thinking about how we can make this more of a nudge process versus randomly trying to keep track of it and remember it. If anyone has any good ideas for how, assuming this is useful, we can make it a more integral part of our process, that would also be useful. So, on to three.
C
B
On to the discussion. Two weeks ago, McKelly and I were intending to chat to you all about the Q3 OKRs. We didn't get time in that call, because that was the call where we did the end-of-quarter demos, which took up all of the meeting time. We had then intended to do this last week, but for those of you in the call, we shifted direction a bit last week, and now we have.
B
Finally, we have our updated Q3 OKRs, so I wanted to just quickly walk through and show what we have for these. If anyone has any questions, please just dive in and shout as I'm walking through. We have two OKRs for this quarter for the Delivery group.
B
The first one that I'm going to talk through is adapting the release process to adjust to customers' current needs. There are two parts to this. One is the tooling and process changes we need to make to Delivery-owned things in preparation for the monthly release date changing: from 16.6, we will not be releasing on the 22nd.
B
We will instead be releasing on the third Thursday of every month. That's a big improvement for us, because it means all releases will land in regular work hours, which is great, but it does mean we have to make some changes to the release tools, because there are quite a few points where we assume stuff will have happened on the 22nd. So we've got some work to do there.
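As an aside, the third-Thursday rule itself is simple to compute. The following is a minimal Python sketch, not taken from the actual release tooling, of how the new scheduled date could be derived for a given month; the example assumes 16.6 corresponds to November 2023.

```python
from datetime import date, timedelta

def third_thursday(year: int, month: int) -> date:
    """Return the third Thursday of the given month (hypothetical helper)."""
    first = date(year, month, 1)
    # weekday(): Monday == 0 ... Thursday == 3
    days_until_thursday = (3 - first.weekday()) % 7
    first_thursday = first + timedelta(days=days_until_thursday)
    return first_thursday + timedelta(weeks=2)

# Assumed example: if 16.6 is the November 2023 release, the new date would be:
print(third_thursday(2023, 11))  # 2023-11-16
```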
B
Hopefully, though, that is not a mass of work. Myra has got the epic scoped out and she's the DRI for this; there are maybe four or five issues to go through for that work. The bigger part of this project is actually doubling the frequency of scheduled security releases.
B
So we have two really big motivations for security releases to become more frequent. One of those is a company-level requirement: in order that we are able to meet FedRAMP requirements, and in order that we don't have to do so many critical security releases to respond to things very urgently, we want our process to be able to support this. What we sometimes find is that we find a security vulnerability and it gets mitigated, but we can't live with the mitigation for three or four weeks.
B
So if we're able to do more regular scheduled security releases, those fixes would just get combined. It would save us some extra work, it would save our customers some extra work, and we could build a process around this. But there is one other really big benefit for us in working towards increasing the frequency of scheduled security releases.
B
We get to continue the work of improving the security releases. Steve led a hugely successful project last quarter to start moving steps of the security release process onto a pipeline, so we can automate the manual tasks. We have a couple more things along those lines that will hopefully get security releases down to under 24 hours each time, which means we can just combine patch and security releases and save ourselves a whole bunch of work, hopefully. So there are two epics sitting within this one objective.
B
Okay, great. So then the second OKR is our new one that McKelly updated last week, and this is to focus on Dedicated, but with a view to moving us forwards and getting us lined up ready for Cells. What we want to focus on here is reducing the risk of downtime and deployment-related incidents via zero-downtime deployments. In this quarter we'll be looking at Dedicated, but it's not going to be exclusively a solution that just works for Dedicated and then we're done.
B
This is the first step towards Cells, and we have two KRs for this. What we've tried to do is separate out some of the pieces that we already know about. Around the inventory of services to be deployed: a lot of that is actually the work you're already doing, Jenny. So it's there: what is actually involved in Dedicated, and what are the different pieces that we need to understand?
B
Do people have sandboxes and permissions? All of that stuff is included in this inventory of services. So, largely, figure out what Dedicated looks like and how we're going to work with it. Then the second part is around five zero-downtime deployments to be completed on the experiment platform.
B
So, a sandbox or whatever we end up setting up to be able to work with Dedicated. Now, I just want to click into this one, because it has a very rough draft breakdown of how we're going to try and score this. It's not just going to be a case of: we're on zero and then, when we get to our first zero-downtime deployment, we tick off 20%. There's quite a lot of work involved in this.
B
We've got to be able to recognize that. So there'll be some stuff where we learn how to use GET, learn how to use the Dedicated tech, look into our tool options, and understand what's involved in Dedicated. Hopefully that sets us up and we can start to look into what is actually involved in a no-downtime deployment. There's perhaps a point of no return on these deployments, or we may find that migrations do something special and interesting that we're not sure about.
B
We need to understand how to work with Gitaly, I believe, but I don't know for sure. I believe that Gitaly is on VMs in Dedicated, which means it is quite likely, though we should check, to have mixed-version considerations. When we deploy .com, we very deliberately order our rollout to handle these mixed-version dependencies for Gitaly, so we probably need something similar inside Dedicated.
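To make the idea of deliberately ordering a rollout concrete, here is a tiny hypothetical Python sketch; the component names and their order are illustrative assumptions, not the actual .com or Dedicated rollout definition. The point is simply that backends which must tolerate newer clients, such as Gitaly, get upgraded before the tiers that call them.

```python
# Hypothetical ordering: upgrade backends that must tolerate mixed versions
# (e.g. Gitaly) before the web/API/Sidekiq tiers that depend on them.
ROLLOUT_ORDER = ["gitaly", "praefect", "web", "api", "sidekiq"]

def plan_rollout(current_versions: dict[str, str], target: str) -> list[str]:
    """Return upgrade steps in a dependency-safe order (illustrative only)."""
    steps = []
    for name in ROLLOUT_ORDER:
        version = current_versions.get(name)
        if version and version != target:
            steps.append(f"upgrade {name}: {version} -> {target}")
    return steps

print(plan_rollout({"gitaly": "16.2", "web": "16.2", "sidekiq": "16.2"}, "16.3"))
```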
B
Then there are things like rollout duration. I saw on one of the releases quite recently, I'd have to go and check the number, that post-deploy migrations for larger instances were expected to take days to complete. Those sorts of things are going to be interesting for us.
B
I don't know at what point an instance is considered large, and I don't know how that compares with the current instances we're running for Dedicated, but there could be some interesting stuff around how long a rollout may actually take. Then, hopefully, we'll figure out enough of these things to succeed with a no-downtime deployment. After that, it's going to be all about optimizing: how do we improve this process?
So
how
do
we
improve
this
process
like?
B
Maybe we could make them faster, maybe there's a smarter way we can trigger these deployments, or other health checks that help us along the way, all the things that we're used to; some of you were perhaps already looking at a lot of this stuff in Q2. So basically, how do we start to scale these deployments up so that we can get to the point where we are actually doing five within a week?
B
So those are the sort of high-level OKRs. We have got working epics in progress; I think probably only Myra's one for the release dates is the most scoped out and unlikely to change. I think all the others have lots of scope for being shaped and improved, so please do keep on asking questions and keep on considering what is going to be the best use of our time.
B
This quarter, are there other smaller iterations we can focus on, or are there some pieces we should just mark as known unknowns that we can come back to a bit later on in the quarter? Do feel free to move things around and flex them up as needed.
B
Nope? Excellent, okay, great. I am working on getting Dedicated permissions for those of you on that project. I hope it will be today, but it may not be today that the request goes in; it is in progress, though, so hopefully we'll have that solved as soon as we possibly can.
D
Is your audio out? Yeah, okay, excellent. Release manager.
C
B
So, is there anything that would be useful? We don't have any questions about your KRs, so let me summarize that.
A
Yeah, let's loop back to number two. Reuben, did you want to raise that?
C
Yeah, sure. I was wondering if we should make it standard practice to execute the post-deployment migrations at the start of the EMEA release manager shift, as long as we don't have an APAC release manager. The start of the EMEA shift means that whatever package is on production has been there for about eight hours, so that seems the safest time to run the post-deployment migrations.
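As a rough illustration of that eight-hour rule, here is a minimal Python sketch; the helper name and threshold are assumptions for illustration, not part of the actual release tooling, and a real check would read the deployment timeline rather than a hard-coded timestamp.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Assumed threshold from the discussion: the package should have been running
# on production for at least eight hours before post-deployment migrations run.
MIN_BAKE_TIME = timedelta(hours=8)

def safe_to_run_pdm(deployed_at: datetime, now: Optional[datetime] = None) -> bool:
    """Return True if the currently deployed package has baked long enough."""
    now = now or datetime.now(timezone.utc)
    return now - deployed_at >= MIN_BAKE_TIME

# Example: a package deployed at 22:00 UTC is safe to migrate after 06:00 UTC.
deployed = datetime(2023, 8, 13, 22, 0, tzinfo=timezone.utc)
print(safe_to_run_pdm(deployed, datetime(2023, 8, 14, 6, 30, tzinfo=timezone.utc)))  # True
```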
A
And this is something we've been doing these past few weeks and it seems to be working well, so it makes sense to continue after we finish our shift as well.
C
B
Very good. Would you mind, Reuben, also asking, I think Myra owns the Slack notification that triggers at whatever time each day to prompt running the PDMs, would you mind just checking with Myra as well? If that is the case, you should be able to update it.
B
I think so. I think even if we have an APAC release manager, the benefit of running it right at the start of the EMEA day is that traffic is so much lower that the migrations probably run a lot faster. You probably don't hit quite as many lock issues or things like that, so it's probably quite a good time normally.
C
B
That is true, yeah. The only difficulty, I think, and I don't actually know this, is that I think Graeme is slightly limited by, well, I don't know if Ops still has downtime. Actually, maybe it doesn't; he used to have the Ops upgrade around the middle of his day, which meant he couldn't do it beforehand. The other problem is I don't know if he is always able to guarantee enough attention to run these every day, just because he's always juggling project work at the same time. But yeah, where possible.
A
All right, so first we'll look at this dashboard. Last week went pretty well overall. I actually have to add in some annotations here, but at the end of the week is when we had to pause deployments for the database upgrades. Prior to that, things were running pretty smoothly. We had a few QA failures that were retried, for the most part, and not much beyond that.
A
And then, if we look at the deployment frequency overall, we still see a high frequency before we paused things, so that's continuing to be a good result with the increase in the overall speed that the deployments can run at. As for lead time, we continue to see the normal pattern, but it's a little different here, because again we did pause things.
A
The lead time went way up over the weekend because we paused, I'd say on Thursday, sort of at the end of EMEA time and the beginning of Americas time, so we saw that go up there. Then, as for blockers, we have the disruption due to the database upgrade.
A
Summary here: as I mentioned, there were a few QA failures. The only thing that really took up a lot of time outside of the upgrade was on Friday, and this actually overlapped with the upgrade, so it didn't block anything because we were essentially already blocked. But it was an interesting failure where CNG builds were failing, and it was due to a dependency, well, I guess kind of a dependency change, on the GitLab project.
A
How do we properly test for it so that we don't run into a situation like this after it's been merged? But outside of that, the week was pretty light overall.
B
C
We have pipeline metrics now, so I'll see if I can get an average for the last few days. Yeah.
D
Great, can...
C
So, due to the CR and us having to stop deployments, the mean time to production has risen quite a bit.
C
20 and 13. Another metric that I wanted to show you all for mean time to production is, sorry, not the mean, what is this called, the 50th percentile.
D
C
Median, right. So what we see here is the average, or mean, and this is the median. The median even now shows 9.75 hours, and I think this is without ignoring weekends. Yeah, this is without ignoring weekends, Pat. So 80% of MRs in this month have been deployed within 16.8 hours of being merged, 95% have been deployed within 3.5 days, and 99% within 3.8 days. So this gives us a slightly better idea of how many MRs were within the 12 hours and how many were outside it. We can also have...
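For context on how those figures are derived, the mean, median, and upper percentiles of merge-to-production lead time can be reproduced from a plain list of per-MR durations. A minimal Python sketch follows; the sample data is made up for illustration and is not the real dashboard query.

```python
import statistics

# Hypothetical merge-to-production lead times in hours (illustrative only).
lead_times_hours = [2.5, 4.0, 6.5, 8.0, 9.75, 11.0, 14.0, 16.8, 30.0, 84.0, 91.0]

mean = statistics.mean(lead_times_hours)
median = statistics.median(lead_times_hours)            # the 50th percentile
# quantiles(n=100) returns the 1st through 99th percentile cut points.
percentiles = statistics.quantiles(lead_times_hours, n=100)
p80, p95, p99 = percentiles[79], percentiles[94], percentiles[98]

print(f"mean={mean:.1f}h median={median:.1f}h p80={p80:.1f}h p95={p95:.1f}h p99={p99:.1f}h")
```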