From YouTube: 2022-04-04 Delivery team weekly EMEA/AMER
A
See, so we've got a few items; I'll just copy them up. But just be aware: there's a lot of time off coming up in the next month. So be aware of that. We will be planning to prioritize less work in the next month to accommodate the fewer people, so please don't feel like we need to be maintaining the same throughput we normally do. Particularly if things are lower priority and they can wait till May, when we have more people available for reviews, then please consider doing that. Skarbek, you have the first discussion item.
B
I was partially bored on Friday and had a lot of random thoughts buzzing around my head, in terms of various items related to release management that could potentially make small things easier. These are all tiny tasks; they're not really big. Anything that was created, it was not an April Fools' joke: I decided to create what I thought were legitimate ideas, and one item was QA related. So if anyone has time, feel free to go through the issues I created during that day and see if you have ideas as to what we should do with them, and whether they're worth the effort or not.
A
I know we've said, for the last few months, there's been quite a lot of hands-on stuff for release management, and now we're getting close to starting to pick goals for Q2. How do people feel about where we are? Would it be worth us actually trying to analyze release management pain points and gather all these things up, to make a plan to reduce that as a top priority?
A
This is exactly what I'm getting at, right: I'm not sure we can iterate our way out of that, and that's kind of the question. Should we actually... or, maybe let me reframe that, because we don't want to maintain that: is there a good reason not to spend a bunch of time in the next week or two and really try to find a few bigger-impact things that actually bring down release management toil?
A
Go ahead; let's see if we can get some stuff, because I know we've done some great stuff in the past on the Kubernetes migration, where we actually said: here are 20 billion things we're thinking of; let's review them and pick the most impactful ones. I think we need to do something similar for release management. So let's have a think about how we can put that together, and see if that throws up some stuff that's beyond...
A
Not just, like, you could save a minute here or a little bit there, but actually a real game-changing sort of shift for us.
A
I'm not sure it's that much, like 20 seconds; let's get to the point where 20 seconds is going to be a significant change, right. I know we've got a lot of stuff where it's easy to just say we've been doing pipeline changes, or we've had unexpected things happen, and those things will resolve themselves, for sure. However, we are also now deploying so much more frequently that we've kind of created some problems for ourselves: we're deploying so much more frequently, and our pipelines are super fast, which gives us some other problems.
A
It means we collide with more changes; it means you have more stuff in flight. So let's not neglect that. Let's see if we can actually review and pick some things that might ease that up, or change the way we do it, because we're very much running with what's probably been the release management model forever. So perhaps we actually review the work, like the expectations, of release management.
A
Yeah, I think that's what we need to do. Let me think about some structure. I think it might be useful for us to do some sort of analysis and quantify some stuff, but also to have a few smaller discussion groups, because, yeah, I agree with you, Maya: I don't think there's an obvious "if we just did this issue or this project, things would get better", which I think will make it a little harder to do async.
A
So in terms of picking up these issues, Skarbek: these are all on kind of planning. Is there one or two that you want to nominate as "if we could solve this problem, that would be great"? Then we can focus on figuring out a solution for that.
B
The failing one, or specifically for backports? Oh yes, I think that one would probably be the most impactful, but that's only because I've been working very closely with backports recently, and I'm probably the nicest person when it comes to backports compared to anyone else on the team. So I'm self-inflicting that wound.
A
Yeah, okay, great. That's a great idea; let's figure out what's going on.
B
Only because, depending on the test case scenario, some QA jobs get retried. Some specific QA tests get retried, and there's an exponential backoff. So if QA is starting to fail and we're hitting that exponential backoff, we're not going to get notified for three-plus hours that the environment failed its deploy.
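[Editor's note: the delay described here comes from retry waits compounding. A minimal sketch of the arithmetic, where the retry count and base delay are assumptions for illustration, not the actual QA pipeline settings:]

```python
# Illustrative sketch of how retries with exponential backoff delay the
# final failure notification. The retry count (4) and base delay
# (15 minutes) are assumptions, not the real QA pipeline configuration.

def cumulative_delay_minutes(retries: int, base_minutes: float = 15.0) -> float:
    """Total time spent waiting across all retries, with the wait
    doubling after each failed attempt: base, 2*base, 4*base, ..."""
    return sum(base_minutes * (2 ** attempt) for attempt in range(retries))

# With 4 retries starting at 15 minutes, the failure only surfaces after
# 15 + 30 + 60 + 120 = 225 minutes, i.e. well over three hours.
print(cumulative_delay_minutes(4))  # 225.0
```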
A
Cool, okay, that sounds good. I'll move those onto the board so that, if someone has some time, we can investigate.
A
So, question... great. Okay, I think we've got the answers, great. One thing we're discussing with reliability is whether a PCL needs to be in place over the Easter dates: there are a lot of public holidays around the 15th and 18th.
A
We didn't have one last year, and actually I'd like to avoid one this year if we can, because it is going to be very painful on the monthly release if we PCL over the 15th and 18th. So, okay: between the three of you, are you happy for us to put together some sort of release management coverage, so that we don't have to do full-scale deploys, but we can do some deploys and also run through any monthly release tasks?
D
Yep, thank you. In the same vein of making release manager activities easier: I noticed this incident was open last week, and it was an S1 incident that wasn't related to GitLab functionality; it was related to payments and transactions that are outside of GitLab. Now, for context, this incident was active for three days, and it was an S1. An S1 being open and active for three days is, in my head, kind of...
D
I don't know, it sounds weird, because it is an S1 and it shouldn't be open for three days, but that's probably another point. This incident being active meant that delivery activities were automatically blocked, like auto-deploys; feature flags were also blocked. And when it comes to auto-deploy, release managers have to deploy through ChatOps, which is kind of painful. Of course, it wasn't the only thing that happened last week: we also had the security thing and, I'm sure, another incident, so last week was kind of messy.
B
I think, as long as we meet our compliance goals, and if we have a label that we could apply, and the reason that label was applied is well documented on the originating issue, this should be doable. So I've opened a proposal; I've got Steve and Lloyd tagged into it, but I think it's just a matter of priority and timing as to whether or not this is doable, and when we can pull it in, I guess.
A
I literally just added a comment on that, Skarbek, which I actually meant to add last week but forgot to press enter. So I've added a comment there. On the one you mentioned in that issue, around the change request: I think that was a change request being slightly misused, but I'm not sure; the wording in the handbook's a little vague, but I think we could tighten that up. On the incident itself, yes, that would be great.
A
I need to do some work and put some stuff together, and then we can make a proposal to compliance.
A
I'm not sure how quickly that will happen; it depends a little bit on how this week plays out, but I'll try to do it as soon as I can.
A
In the meantime, Reuben has started work on trying to reduce the pain a little bit. Right now we're seeing that overriding is much harder than it used to be; well, twice as hard, because you have to override on staging as well as on production, and Reuben has already picked up the... yes, yeah.
A
Our process at the moment allows the EOC to bear kind of ultimate responsibility for all changes going to production, and we just need to be a bit careful that, if we introduced a label, anybody could add it, and would add it. At the moment, from chatting with Job, I don't think reliability want to have a process where they police that, but I'm not sure what compliance will think about that.
A
So actually this one changed, because Rehab questioned it and said: hey, you know, we're dealing with this; it's an S1, however it's been open for three days and it causes a heap of extra work, is that correct? And Steve decided to work around that. So yeah, release managers: I would encourage you to be more demanding.
A
I guess, as I'm seeing on the incidents as well, I don't think we're necessarily calling people out as much as we can: hey, this change that we did has led us to this pain. I don't think an individual would necessarily realize that unless we point it out to them. So I'm happy to help with wording if you need help, but do feel free to challenge people on severities and things like that.
C
I wanted to just add a reminder that we added the backstage label for non-customer-facing incidents, right? So this would be the reverse: we are looking for something which is affecting customers but shouldn't block deployments or feature flags.
A
Awesome, thanks for raising that, Mara. So, release managers: we've talked through quite a few things, but is there anything specific you wanted to call out on MTTP for last week?
A
And stuff, right; so that's the stuff which, I think, is going to make it harder. Like, I know this is a bad process. I wonder if we should also change this process before we go into Q2, because without data it's really hard to say to people: hey, we need you to invest in changing this thing because you're causing us pain. The pain is quite invisible.
D
Perhaps we should automate that process of adding labels. I know we have talked about this for a while, but we can try to think of a boring solution to adjust that label so we don't forget. For example, we could just add the closest label, based on the time the incident was active, with a webhook or something, and just have it there.
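[Editor's note: the "boring solution" proposed here could look roughly like the sketch below, which picks a label from an incident's active duration as a webhook handler might. The label names and duration thresholds are hypothetical, not an agreed scheme:]

```python
# Minimal sketch of the webhook idea: when an incident closes, derive a
# label from how long it was active. Label names and thresholds below
# are hypothetical placeholders, not an agreed labeling scheme.
from datetime import datetime, timedelta

def label_for_active_time(opened_at: datetime, closed_at: datetime) -> str:
    """Return the closest active-duration label for an incident."""
    active = closed_at - opened_at
    if active <= timedelta(hours=4):
        return "incident-active::under-4h"
    if active <= timedelta(days=1):
        return "incident-active::under-1d"
    return "incident-active::over-1d"

# An incident active for three days lands in the longest bucket.
opened = datetime(2022, 3, 29, 9, 0)
closed = datetime(2022, 4, 1, 9, 0)
print(label_for_active_time(opened, closed))  # incident-active::over-1d
```

A webhook receiver would call something like this on the incident-closed event and apply the returned label automatically, so nobody has to remember to do it by hand.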
A
Yeah, how about this: how about we get an issue, Skarbek? Would you and Reuben be okay to drop a list in an issue of all the things you can think of that held you up? Then we can figure it out. It doesn't have to be specifics, but, you know: master was broken, or we were dealing with something else, or there was an incident, and we could just start to look at that.