From YouTube: 2023 07 24 Delivery team weekly EMEA/AMER
C: I can hear you perfectly, thank you. So welcome to the weekly of the 24th of July 2023; that will be a very good weekly, and we can start. I can see a lot of upcoming holidays for August. And we start to have discussion points. The first one is from aqua. Does someone want to verbalize it?
C: The one you might have seen is about how to stay up to date with the release management changes in the process and tools. I guess the problem here is that some of us do release management, then have a lot of months off release management, and coming back to release management there were some changes in our processes, and it's a problem to keep up to date with which part of the process was different and so on. So we introduced this changelog functionality, and E highlighted a good candidate for it now.
D: He pointed to the demo, but actually I did have a question, because I wonder if the better time is actually when we merge in the documentation or so. In the demo we were talking a bit about the new auto-deploy timers, which you did the other month. So I wonder, I guess: what's a good way for us to actually catch this at the point we're making changes that affect release managers?
E: Well, at least the changelog is in the release docs, right? So when we were updating the documentation for the change, we could have also added a relevant link to the changelog page, with some blurb explaining what was happening, at the same time. I think the important thing here is that we are not going to have a third merge request for doing something, because one will be in, say, release-tools or whatever tool we're going to change, and then we want to document this.
E: Yeah, it isn't really that actionable as it is, because oftentimes we split those changes over several merge requests, and then there is a point in time where we say: okay, this is how it should look, and that is done, and then we recommend it. So it's really hard; maybe every time you just say: okay, I will do the documentation later.
E: So it's kind of easier when you're working on the epics: say, adding an issue as part of the epic which is "now I'm going to write down everything that we changed this time", because it really depends on the type of changes that you're doing.
G: It kind of reminds me of the engineering FYI idea, where there's sort of this doc and a channel to just add notifications, and everyone knows they should keep up with that channel, even if they haven't paid attention to it for a month or two.
D: Would it be worth having a prompt on this call for the next few months, where each week we just do a collective "did anything happen that should have gone in the changelog?" and then take actions? Because it feels like either we have to build a habit, or we have to put a nudge in a good place so that we don't have to remember to do this.
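A nudge like that could also live in CI. Below is a minimal sketch of the idea, assuming a merge-request pipeline; the target branch, source prefixes, and changelog directory are hypothetical placeholders, not the team's actual layout. It fails a pipeline job when source files change without a matching changelog entry.

```python
#!/usr/bin/env python3
"""Hypothetical CI nudge: fail when source changes ship without a changelog entry.

Assumes it runs in a merge-request pipeline with the target branch fetched;
all paths below are illustrative, not the real repository layout.
"""
import subprocess
import sys

TARGET_BRANCH = "origin/master"          # assumption: the MR's target branch
SOURCE_PREFIXES = ("lib/", "app/")       # hypothetical source trees
CHANGELOG_PREFIX = "doc/changelog/"      # hypothetical changelog directory


def changed_files() -> list[str]:
    # Diff against the merge base so only this MR's own changes count.
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{TARGET_BRANCH}...HEAD"],
        check=True, capture_output=True, text=True,
    )
    return [line for line in out.stdout.splitlines() if line]


def main() -> int:
    files = changed_files()
    touches_source = any(f.startswith(SOURCE_PREFIXES) for f in files)
    touches_changelog = any(f.startswith(CHANGELOG_PREFIX) for f in files)
    if touches_source and not touches_changelog:
        print("Source changed without a changelog entry; "
              f"add one under {CHANGELOG_PREFIX} or mark the MR as exempt.")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Run as a pipeline job, the exit code turns the habit into an automatic reminder at exactly the point the change is being made.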
D: I'll add something after the announcements, because I think it would be helpful to have a kind of collective thing. So if you hear of something that someone shares that you think is changelog-worthy, I would like to see that in the changelog, because I think when you're actually in the details, it can be hard to tell.
B: Yeah. So, speaking of things that we're trying to figure out as far as sharing knowledge during the release day tasks: there's a step that was a surprise to Akbar and me.
B: Ahmad removed the stable branch rules for the 16.x stable branch prior to documenting anything. So the step, while clear language-wise, left us just a tad confused, because we didn't see what that rule looked like before deletion, so we couldn't just recreate the protection rules the same way. And then the other item I wanted to raise: I know there's an epic, I couldn't quickly find it, which is why I'm asking for assistance; I know there's an epic that's supposed to help us remove all the steps from our release.
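One low-effort guard against that surprise is to snapshot the rule before deleting it. Here is a sketch using the documented GitLab protected-branches API; the instance URL, project path, branch name, and token are placeholders, and this is not the team's actual tooling:

```python
"""Sketch: snapshot a protected-branch rule before deleting it, so the
settings can be inspected or recreated later. The endpoints are GitLab's
documented public API; project, branch, and token values are placeholders."""
import json
import os

import requests

GITLAB = "https://gitlab.example.com/api/v4"   # placeholder instance URL
PROJECT = "gitlab-org%2Fgitlab"                # URL-encoded project path (example)
BRANCH = "16-2-stable"                         # hypothetical stable branch name
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

# 1. Fetch the current rule and keep a copy next to the release notes.
rule = requests.get(
    f"{GITLAB}/projects/{PROJECT}/protected_branches/{BRANCH}",
    headers=HEADERS, timeout=30,
)
rule.raise_for_status()
with open(f"protected-branch-{BRANCH}.json", "w") as f:
    json.dump(rule.json(), f, indent=2)

# 2. Only then remove the protection.
resp = requests.delete(
    f"{GITLAB}/projects/{PROJECT}/protected_branches/{BRANCH}",
    headers=HEADERS, timeout=30,
)
resp.raise_for_status()
print(f"Saved and removed protection for {BRANCH}")
```

With the JSON saved alongside the release docs, the next release managers can recreate the protection rule even after it's gone.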
C: Can we have, as a first mini iteration, maybe a document of which permissions to add for the next time? You know, when we are going to remove 16-3 now, and we are on 16-4, so to be sure that we can read something in the release docs and we know which kind of permission we need to add. I think that would probably be smaller than automating it, but I agree that ultimately it would probably be beneficial, if possible.
D: Isn't it one of our epics at the moment? Or it won't be in an active epic, because we're just working on the security release. But I think this is an interesting one: if there's something we can do, especially as we go into sort of the early part of the quarter, I guess we could pull in some extra bugs and do some improvements. I do think it's an interesting one.
D: So, Steve, I know we've chatted quite a bit about the security release, and once we've got the pipelines and some automation in place, what we're hoping is that that kind of drives our future behavior on not adding manual tasks.
B: Was it Sam that opened up an issue setting forward some guidelines that we as a team should follow?
D: And to give a little context on this: this FMR and the guiding principles are something that Scarborough, Galessio, myself, McKelly, and Sam started chatting about last week. This is to start to help us move from where we are today to where we want to get to in three years. It's all sort of tied to the strategy stuff that we're working on, and what we are not going to be able to do is just turn 90 degrees overnight and immediately build the tools we want, and that's okay.
D: So what we were thinking with these guiding principles is to put some stuff in place which allows everybody to maintain decent levels of independence. Hopefully, that's the aim: when you're going through your work and you're making decisions, either by designing stuff or implementing stuff, you've got a few guiding principles in place which help you make a decision that is still moving towards the future we want to get to, without making you have to keep checking in against a bigger strategy doc or on up-to-date discussion threads. So the guiding principles will be there; we're just going for the first iteration. If anyone wants to add anything or comment, please just jump in. It won't be a full list, and I'm sure we'll want to amend these over time.
D: If you pull up the issue, then I'm happy. We can have a chat later this week and figure out how we get this into orchestration in the next few weeks.
C: Since we had some problems in the past... I mean, I've been shadowing already since last week. So if someone can start shadowing directly from that book, I think it's a better approach.
B: As far as things that we want to see: last week, at most, we promoted around five packages, but we stayed around three and four for the majority of the week. I think this is fantastic, especially when compared to our normal, which is around six, I believe. So I think we did pretty well promoting packages all last week. We did not have any blockers; no change requests blocked us. We did have some auto-deploy picks, because people alerted us that there were some bugs being found.
B: So we just wanted to make sure those got picked in before certain deployments. I think one of these was specifically towards the 16.2 release, where a bug was found in staging. No harm was being done to .com; it was just causing a lot of errors, and we got that picked in before we started the 16-2 release procedure. So deploy-wise we did pretty well on deployment frequency; this is reflective of the prior chart, you know, averaging six deploys. I guess we hit nine deploys one time, so last week we were doing pretty well. As far as the lead time for changes, this was also a little bit lower: we reached 8.2 to 8.9 hours for a few of those days. So this is turning out wonderful; I think last week was one of the greatest weeks, in my opinion. And then, just showcasing that we did not have any blockers last week in our review.
B: I do not have an answer for that, but I will say, as far as the keenness of trying to preserve availability: it was a pressure point for us as release managers. Limiting ourselves to running the PDM twice a week was a mild impediment. We still ended up running the PDM more than the two allocated times that we were provided, and it did kind of raise our stress level, simply because we're trying to reduce the risk of the release, but we're also trying to reduce the risk to .com.
B: Those are in direct conflict with the way that we expect .com to be our guinea pig for testing, to make sure changes all work, and part of that is requiring that post-deploy migrations are run. So I do think there's a conflict there, and I don't think I did a good enough job expressing that concern when we raised this issue to talk about determining the schedule. So I think in the future, if this happens again, I just want to stress that a little bit more.
B: It was just annoying, I think. But no, I do not have any idea as to why we were more successful this last week. I think it was mainly due to a lack of incidents, for one, and QA was actually successful for the majority of runs last week, which I was greatly appreciative of. We ended up spending more time and effort on pre-prod than on QA and, say, staging or production, which was really nice.
E: Cool. I was actually trying to figure out, from the merge request analytics page, which is one of the other pages we usually don't look at in the metrics from GitLab, whether there was actually... because we look at those numbers, but we don't look at the underlying value, which is how many changes we are bringing in each week, right? But it looks like... oh, let me just quickly share what I'm looking at, because I've never done this before. Let me just make it wider. So basically, this is here, right?
E: Since we are accumulating by just MTTP.
D: I also wanted to make a comment about the PDM execution, mostly for Steve and Ruben's benefit. We are trying to protect rollback windows, but we also do control this schedule, so don't risk releases for that. Basically, we were asked what delivery can contribute to availability, and the only thing we can really contribute on sort of one week's notice is rollback windows; so that's really kind of the background.
D: So please do update the proposed plan with what you need for your week, trying to balance those two things as best you can. If you need to change it, that's also fine. What we're basically trying to do is, in the event someone asks us for a rollback, hopefully be able to say yes. That's all people are expecting from us.
E: Sorry, I'm coming back because I found that information. There is another page, so that's another thing: it's in here, under Analyze, Insights, and there is this merge requests dashboard. This seems to be week by week, isn't it? Yeah, it's week by week. So it looks like the numbers were kind of in line. This is merged, so it's absolutely in line with previous weeks.
B: The last comment I want to make about the PDM is that it has not been run since Wednesday of last week, and the last time I looked at the dashboard, there were eight pending migrations.
B: I'll see if I can't find it. I saw it first thing this morning, and I don't know where that tab went, so I'll have to go find it.
E: Don't get scared by those numbers. Oh, this is something that I realized during my last shift and then forgot about: the CI decomposition effort just broke our metrics, because now every pending migration is counted three times. Basically, when we run the Rails command to see the status of the migrations, it prints the output for each database, and most of those migrations will only apply to one of the three databases. So, just as a rough number, divide by three.
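A rough way to get the deduplicated number back, as a sketch rather than the actual metrics exporter: it assumes the multi-database `rails db:migrate:status` output prints a `database: <name>` header followed by `up`/`down` rows, so the patterns would need adjusting to the real format.

```python
"""Sketch: count pending migrations once across decomposed databases.

Assumes `rails db:migrate:status` output with a `database: <name>` header
per database followed by `up`/`down` rows; adjust to the real format.
"""
import re
import sys
from collections import defaultdict


def parse_status(text: str) -> dict[str, set[str]]:
    """Map database name -> set of pending (down) migration versions."""
    pending = defaultdict(set)
    current_db = "unknown"
    for line in text.splitlines():
        header = re.match(r"\s*database:\s*(\S+)", line)
        if header:
            current_db = header.group(1)
            continue
        row = re.match(r"\s*(up|down)\s+(\d+)", line)
        if row and row.group(1) == "down":
            pending[current_db].add(row.group(2))
    return pending


if __name__ == "__main__":
    pending = parse_status(sys.stdin.read())
    for db, versions in sorted(pending.items()):
        print(f"{db}: {len(versions)} pending")
    unique = set().union(*pending.values()) if pending else set()
    print(f"unique pending migrations: {len(unique)}")
```

Piping the status output through something like this reports pending migrations per database plus the unique count, instead of the triple-counted total.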
E: One strange thing: I do remember that in my shift there were some very high numbers, and I was a bit scared, so I went into the machine to look at the output of the command, and basically it was printing the line more than once, because there was this main, CI, and embedding. So I don't know.
E: The problem is that we can't say if something should be up. That's the point: inside the migration there is something that tells us whether it affects one database or another, but the Rails engine outside doesn't know about it. So when you ask: is it running, does it have to run or not?
E: It just tells you that this hasn't run on this database, so it has to run; but it will be a no-op on many of those, because basically it's the same schema, and it will only apply in one of the three databases.
F: So this has happened before, when they introduced the database decomposition and split the migrations into main and CI. The post-deployment migration count basically duplicated, because it was counting the ones from main and the ones for CI. As a boring solution, what we did was to split the metric: if you look at the dashboard source, you're going to see this is the metric, and it's basically split per database. So perhaps we need to check if the numbers of pending migrations are now correlated to the new databases, or whatever.
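That check can be done straight against the metrics backend. A sketch follows, using the standard Prometheus HTTP query API; the server URL and the metric name `delivery_pending_migrations` are hypothetical stand-ins for whatever the dashboard source actually uses.

```python
"""Sketch: query the pending-migrations metric per database to see whether
the counts line up with the decomposed databases. The Prometheus HTTP API
is standard; the metric name and server URL are placeholders."""
import requests

PROM = "https://prometheus.example.com"                     # placeholder server
QUERY = "sum by (database) (delivery_pending_migrations)"   # hypothetical metric

resp = requests.get(f"{PROM}/api/v1/query", params={"query": QUERY}, timeout=30)
resp.raise_for_status()
for sample in resp.json()["data"]["result"]:
    db = sample["metric"].get("database", "unlabeled")
    count = sample["value"][1]
    print(f"{db}: {count} pending migrations")
```

Comparing the per-database counts against the unique count from the status output shows whether the inflation is only the decomposition or a real backlog.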
D: Super, let's go back. I had one final question about the release manager stuff. Last week we had a special issue to get pre into a tested state: do an auto-deploy to pre and run the QA tests before the release managers needed the environment. Was that useful? Is it something we want to repeat for next month?
B: The second thing I want to mention is that we do have a potential performance issue with the database in pre-prod, which may be causing a little bit of pain for some QA runs: GitLab returning 502s instead of either pass or fail on the actual test. So it's not that a test is failing; we're failing because GitLab is not responding appropriately. We have an open issue for that, on which I performed a very, very rough preliminary investigation.
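One way a QA harness can keep those two failure modes apart is sketched below, under assumptions: the readiness URL, retry counts, and backoff are illustrative, not the actual QA framework's behavior.

```python
"""Sketch: separate 'environment not responding' (e.g. 502s) from real test
failures, so a QA run can report the difference. The endpoint and retry
policy are illustrative placeholders."""
import time

import requests


class EnvironmentError502(Exception):
    """Raised when the target keeps returning gateway errors."""


def get_with_retries(url: str, attempts: int = 3, backoff: float = 5.0) -> requests.Response:
    last = None
    for attempt in range(attempts):
        resp = requests.get(url, timeout=30)
        if resp.status_code < 500:
            return resp            # a real answer: let the test assert on it
        last = resp
        time.sleep(backoff * (attempt + 1))
    raise EnvironmentError502(f"{url} kept returning {last.status_code}")


if __name__ == "__main__":
    try:
        # Hypothetical pre-prod readiness URL, purely for illustration.
        r = get_with_retries("https://pre.gitlab.example.com/-/readiness")
        assert r.status_code == 200, f"test failure: unexpected {r.status_code}"
    except EnvironmentError502 as exc:
        print(f"environment error, not a test failure: {exc}")
```

Surfacing environment errors as their own exception type lets dashboards count "502s from the environment" separately from genuine test failures.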
B: So we just need to dig into that. But along that same line, we do have issues that I'm trying to work on with Michaela, to potentially schedule for next quarter, where we're segregating pre-prod out from the current testing being done within our team, which was the root cause of one of the preliminary issues we had during the June release. So hopefully stuff like that gets resolved between those two groups of work.
D: Oh okay, all right. We'll get the issue ready to go for 16.3. And, as a collective, are you on board? You've got all the other issues in the right places?
A: About the migrations, the post-deployment migrations: I can actually see 14 post-deploy migrations, and those get repeated. So the job log shows there are 28 migrations, but there are actually 14. So I think the metric is correct.