From YouTube: 2022-05-09 Delivery team weekly EMEA/AMER
A
Most importantly — I've got a few items on the agenda, but at the very top are the Q1 actions from our retro. I'm going to move these off this agenda, drop them onto the Q2 OKR and retro issue that we have, and just leave a short summary here. But if you haven't done your review, you have a few minutes. I actually know that I, like you, have until tomorrow, because I won't have a chance today to take a look through it.
B
Do we have to confirm or acknowledge that we reviewed it in some way, or do we just have to review it?
A
That's a good idea — that's okay, yeah! How about I put something on the retro issue, the Q2 OKR and retro issue, and we can just add a thumbs up on there once we've done it? Because, yeah, what we should have done is properly open an issue so we could track that. Good question. Awesome — so, announcements: a few announcements up there, and then discussion.
A
I've put this under discussions, so you can all disagree if you want to. 2a is a request, if possible, for us to try and get more questions into public channels. I mostly put this in because I noticed that the delivery channel is generally quite quiet, but also I'm aware that sometimes you'll get pinged individually by people asking for stuff via DM.
A
If
you
deem
it
to
be
something
that
could
be
public,
please
try
and
encourage
people
to
take
it
into
a
public
channel.
Mainly
it
just
relieves
kind
of
you
having
to
be
the
kind
of
point
of
contact
for
everything
like
other
people
can
join
in,
but
even
more
importantly,
is
other
people
can
then
learn
from
whatever
answer
or
link
that
you
provide.
So
if
you
know,
if
you
do
get
see
opportunities
for
this,
then
please
please
see
if
we
can
make
things
more
public.
A
So, 2b. I don't think this is a big change, but I am aware that we do have a lot of incidents. We already have it documented that there is the concept of a DRI in an incident, that the DRI is also the EOC, and that this is based on them being assigned to the incident. We also already have it in the release manager docs linked here that, for incidents we raise ourselves — like tests failing and things like that — we are effectively already the owners. So one change you may see happening is that the EOC may become more active in actually assigning incident issues to us, or asking you to assign an incident issue to yourself. I don't believe it's a change in ownership.
A
The intention behind it is to avoid an incident issue looking like it's assigned to an EOC while the EOC doesn't feel like they own the incident, because it's very much a test failure or something that we, or Quality, are working on. So hopefully that doesn't mean you're going to get more work, but it is one thing I'd recommend everyone take care on.
D
I wonder — because often the incidents we have are related to quality failures or to infrastructure problems outside of our team's reach, especially in the case of development issues — do we usually make devs owners of incident issues? So far they feel very much like they belong to infrastructure. But I'm not sure if this concept of an incident is known well enough across the engineering department.
D
Could it be owned by anybody in the engineering department? That's my main question here.
A
It's a great question — yeah, probably not. There are times when I think it does make sense; that would usually be something around the database, because we see the database team get very involved and quite often fully own an incident in those cases. But for, say, a random code thing, perhaps not. We should make use of the IMOCs as well, right? The IMOC is really there as a coordination person.
A
So if it's something where you feel the engineer on call has handed over to you and now there's a lot of coordination — and particularly if you have other release management tasks, say it's the day of the security release or you're trying to do a patch release — then it's not a one-way street. It's not like the engineer on call dumps an incident on you and you're now left holding everything.
B
I was also thinking that, looking back at the past weeks when Myra and I were release managers — I mean, I've opened several of those incidents. I was trying to do my best to at least add comments as we were discovering things, but it's really a lot of work, and especially in the past week we were just moving from one incident to the next. Or — that's not quite right: I was starting my day inheriting incidents from the night and continuing up until I handed my incidents over to Myra.
B
That's a more accurate description of the past week. And I feel that it's really hard to even look back at those things, because we usually say that if it's blocking a deployment, then it should be at least a severity 2 incident, which opens up a lot of, let's say, extra work, because then you need to fill in more stuff on it. And oftentimes you're just either retrying something, or hoping that someone from Quality understands where the failure is so they can fix it. It's a lot of extra work, and the problem is that when it's done and you've mitigated it so you can move on, you have at least three to six hours of accumulated work, because your actual task was to deploy something.
C
I share this concern. I do disagree that we should own something, especially if it's an issue in the developers' realm — like, if a merge request was merged and that's the root cause, I shouldn't own that incident; I'm just a person who participated in it. The team that introduced that particular bug should own the incident, in my opinion. I also agree with Alessio that working an incident sucks.
C
It takes a lot of administrative time and effort, and when I was release manager for the prior two months, filling out incidents and being a release manager was probably 90% of my day, and that was infuriating.
A
Yes, I agree — no, I don't think so in that case; I agree. I suppose there are the maybe more obvious cases where there's an investigation, but certainly I think we can do more to hand over to Quality, to dev escalation, and so on.
D
I mean, we have an IMOC role, right — an incident manager on call — and from what we're talking about right now, it feels like we would be taking over the incident manager on call role for all incidents that we open, being the DRI for them. Because if you are the DRI but can't solve it — because it's, you know, maybe a dev problem — then that means you just run around trying to ping people and organize all of the incident handling.
D
But I wish it would be more like fire and forget: you open the incident, and then at the point where you know where the problem lies and who could be the DRI, you should be able to hand it over and forget about it, even if you need to wait for it before you're able to deploy again. But I think it's really for the incident manager on call to then organize all of this. It will...
A
...be us? Henry, I actually think we should be. I think there are different types of incident, right? Say we have a degradation on production: in that case the engineer on call would be involved, because we have a reliability problem, so that's straightforward — they would be the DRI. In the case where we have tests failing, we would open the incident, but I think we engage Quality and then we hand off DRI to their on-call.
A
They have somebody on call, and then they can own it. In the case where it comes back and they say there's an application problem, we should use dev escalation. But once we're at the coordination point, it's not our role to do the coordination — that's the IMOC, and what we should be doing is escalating to the IMOC. So I think our responsibility is to make sure that the person assigned to the incident is the person who actually feels like they own it right now.
A
Sometimes that is us — say we've had a Kubernetes thing and we're upgrading, or we're otherwise very actively involved in resolving it, in which case we're the owners — but it isn't always us. What I would love to see is more discussion taking place in these incident channels with the EOC about who the DRI for this incident is right now, and that person being assigned. Because at the moment it's just auto-assigned, and that person isn't really trying to be the DRI; they sort of go,
A
Well,
it's
not
my
thing.
It's
quality
is
looking
at
this
and
they
just
sort
of
you
know,
ignore
the
incident.
So
I
think
that's
really
the
thing
is
it
doesn't.
It
certainly
doesn't
have
to
be,
as
I
don't
think
we
should
have
to
own
more
incidents,
but
I
do
think
the
incident
reports
need
to
be
more
assigned
to
more
people.
B
In that regard, I had a point, 3b, which I think goes in the same direction. As I said, this past week we had so many incidents, and the schedule for the quality engineer on call is fairly fixed over longer periods.
B
So I was always interacting with the same engineer every day, and we tried to figure out what was happening. What I realized is that they have no way of knowing whether a QA failure comes from their own scheduled runs or is blocking a deployment — they see everything together. So it sometimes happened that they fixed something that was failing on the regular QA they run on a schedule, while that same failure was also blocking a deployment.
B
But
they
just
picked
an
another
qa
task,
so
they
retried
the
wrong
one.
I
mean
they
said.
Okay,
qa
is
good,
because
now
it's
green,
but
they
didn't
retry,
the
wonder
was
blocking
the
deployment.
So
the
idea
here
is
trying
to
make
sure
if
someone
from
the
team
can
represent
delivery
in
that
conversation
and
make
sure
that
we
provide
them
some
stronger
signal
because
they
are
acting
quickly
if
something
is
broken,
but
they
don't
know
how
which
is
the
most
important
broken
part
to
fix.
B
So
if
we
can
give
them
more
visibility
in
what
blocked
a
deployment
compared
to
just
a
regular
qa
failure
and
then
from
this
building,
what's
the
next
step,
so
if
blocking
deployment,
let's
open
an
incident-
let's
do
this
this
and
that
maybe
this
is
gonna-
I
mean
they.
They
are
uncool.
So
we
we
have
someone
that
can
help
it's
just
a
matter
of
making
sure
they
are
focusing
on
the
right
thing.
B
Yeah
well,
if
someone
can
take
a
look
it
probably
better.
If
not,
I
can
I'm
not
really
sure
how
much
they
are.
I
mean
now.
Let
me
rephrase
yeah.
I
think
that,
probably
from
our
side,
we
just
need
to
code
something
so
that
the
triggers
is
more
visible
because
we
we
give
them
the
trigger.
We
give
them
the
failure.
B
So
if
someone
from
us
can
provide
some
extra
variable
when
we
are
sending
the
trigger
and
someone
from
quality
can
make
this
variable
surface
in
their
alert,
then
probably
this
is
gonna,
be
the
the
missing
link.
So
it's
more
of
a
coding
things
that
actually
a
real
coordination,
because
I
think
they
are.
They
agree
that
they
want
to
fix
the
right
thing
and
not
just
a
random
failure.
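A minimal sketch of the kind of signal being described here, assuming the QA run is kicked off via GitLab's pipeline trigger API; the project ID, token environment variable, and the `TRIGGERED_BY_DEPLOYMENT` variable name are hypothetical, and only help if Quality surfaces the same variable in their failure alerts:

```python
import os

import requests

GITLAB_API = "https://gitlab.com/api/v4"
QA_PROJECT_ID = "1234"  # hypothetical: the project that runs the QA pipelines


def trigger_qa_pipeline(ref: str, blocking_deployment: bool) -> dict:
    """Trigger the downstream QA pipeline, tagging whether it gates a deployment.

    The extra variable is the "stronger signal" discussed above: if Quality
    surfaces it in their alerting, a deployment-blocking failure can be told
    apart from a regularly scheduled QA failure.
    """
    response = requests.post(
        f"{GITLAB_API}/projects/{QA_PROJECT_ID}/trigger/pipeline",
        data={
            "token": os.environ["QA_TRIGGER_TOKEN"],  # hypothetical token variable
            "ref": ref,
            # Hypothetical variable name; must match whatever Quality reads.
            "variables[TRIGGERED_BY_DEPLOYMENT]": str(blocking_deployment).lower(),
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Example: a QA run that is gating a deployment.
    pipeline = trigger_qa_pipeline("master", blocking_deployment=True)
    print(pipeline["web_url"])
```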
A
I'm going to assume we don't have much capacity this week — unless, Robert, you want to pick this up — but otherwise Reuben is back next week, and if we had an issue with some guidelines in it, we could plan that.
A
Okay,
let's
see
how
we
go
on
this
incident
stuff,
like
I'm,
not
expecting
any
documentation
up
changes
on
this
stuff.
It's
just
could
be
that
you
may
get
engineers
on
call
asking
you
to
take
incidents
right,
but
do
feel
free
to
make
that
a
discussion
and
let's
see
what
sort
of
incidents
you
feel
like
come
in,
that
shouldn't
sit
with
release
managers
that
perhaps
the
engineers
on
call
are
asking
most
likely.
A
If you get an incident coming your way that you don't think should be yours, we can certainly have a look at that. Also, Scott, Alessio — you both raised the overhead of incidents. There is definitely value in us having incidents, but I do appreciate the overhead, and if there's a way we can reduce it, I'm very happy to hear suggestions and ideas on how we can keep the tracking without so much admin work.
B
So right now the Ruby gem updates are not working as expected, because the changes Infrastructure made around the bot removed Bundler, so it can't actually update the lock file. This is going to be fixed, but right now the issues are unassigned — I just wanted to flag that. We're testing how it works; by default they're unassigned, but on Monday you can take a look. They have labels — something like both "generated dependency upgrade" and "type maintenance". So take a look if you want.
A
Awesome
thanks
a
lot,
so
the
link's
all
updated
note
on
deployment
blockers
what's
very
visible
from
the
last
few
weeks
is
the
number
of
incidents
last
week
was
very
painful,
but
only
from
a
couple
of
incidents,
but
I
suspect,
that's
not
where
we're
feeling
most
of
pain
and
where
we're
feeling
most
of
pain
is
probably
sheer
numbers
of
incidents.
A
So
I
am
going
to
try
and
spend
some
time
later
this
week
and
try
and
pull
some
stuff
together
around
types
of
incidents
and
things
we
are
seeing.
But,
like
you
know
previously,
you
know
like
week.
Six
of
this
year
we
had
just
two
incidents
logged
week.
Seven
we
had
three,
whereas
now
it's
much
more
likely
we're
seeing
like
over
five
each
each
week.
So
you
know
I'm
not
surprised
the
overhead
of
raising
incidents
is
getting
higher,
although
the
numbers
of
which
time
we
have
them
recorded
for
is
very
low.
A
So
I
don't
know
if
we
are
more
optimistic
these
days
and
we're
much
kinder
and
say
something
was
only
blocked
for
an
hour
or
whether,
actually
we
are
genuinely
seeing
a
lot
more
incidents,
but
they
are
resolving
more
quickly
than
they
used
to.
So
I
need
to
do
a
little
bit
more
digging
in
there
but
question
for
release
managers.
I've
reworded
the
question
a
little
bit.
B
Well, I put down the important item that we discussed: the faster we can offload quality failures straight to the quality engineer on call, the better, because there's less on us. And another thing — I just saw that during the call we got pinged by Marin. There is another security thing; I'm not sure if it's something we'll have to handle outside of the process, like a critical release, or if it will fit into the regular one, but it's just something for release managers to be aware of.
A
This
at
the
moment
is
manual
and
the
reason
is
manual
is
because
the
labels
are
quite
inconsistent,
so
what
I
would
love
to
do
is
once
we
have
the
labeling
consistent.
This
can
just
come
off
the
api,
but
actually
what
I
find
at
the
moment
is
so
last
week
I
spent
about
two
hours
doing
this,
because
I
actually
had
to
find
the
incidents
mostly
from
the
handover
messages
and
then
go
through
the
incidents,
and
that
was
usually
that's
where
you
start
getting
pings
for
labels
and
write-ups
and
things.
So
that's
the
bit.
A
That
would
be
really
good
and
has
always
been
really
good
to
get
consistent,
because
as
soon
as
we
have
as
soon
as
we
know,
we
have
everything
labeled
with
the
numbers.
This
is
a
super
straightforward
bunch
of
incidents
to
gather
up
and
grab
the
numbers
from,
but
yeah
at
the
moment.
This
is
fully
manual.
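As a rough sketch of what "coming off the API" could look like once the labelling is consistent — the project ID, label name, and token below are placeholders for illustration, not the team's actual values:

```python
from datetime import datetime, timedelta, timezone

import requests

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT_ID = "1234"          # placeholder: the project where incident issues live
INCIDENT_LABEL = "incident"  # placeholder: whichever label ends up used consistently


def incidents_since(days: int = 7) -> list[dict]:
    """Return incident issues created in the last `days` days, filtered by label.

    With consistent labels this replaces trawling handover messages: the weekly
    numbers can be gathered straight from the issues API.
    """
    created_after = datetime.now(timezone.utc) - timedelta(days=days)
    issues, page = [], 1
    while True:
        response = requests.get(
            f"{GITLAB_API}/projects/{PROJECT_ID}/issues",
            params={
                "labels": INCIDENT_LABEL,
                "created_after": created_after.isoformat(),
                "per_page": 100,
                "page": page,
            },
            headers={"PRIVATE-TOKEN": "<read-only API token>"},  # placeholder
            timeout=30,
        )
        response.raise_for_status()
        batch = response.json()
        if not batch:
            return issues
        issues.extend(batch)
        page += 1


if __name__ == "__main__":
    weekly = incidents_since(7)
    print(f"{len(weekly)} incidents opened in the last week")
```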
A
So I think about it like this: if your expected production deployment got delayed because of a missing package — which usually it will have been — then yes. The only times where it doesn't is, say... ah, I'm a lot less generous now, because I think we're more optimistic than we used to be. But in theory, if you're just starting a deployment to production and staging broke, you may fix staging or staging-canary before you needed that package. But, just for simplicity...
B
Okay. Also, oftentimes these are S2 or S1 incidents, so even if they're not blocking — I mean, they are technically a broken deployment, because you can't deploy. Yeah.
E
Yeah, can you hear me now? We can, yeah — awesome. I don't know what happened to my mic, I'm using another one. I'm going to echo the same blocker that I raised last week, about moving QA from the deployer to the coordinated pipeline for staging-canary. As release manager I found this environment very, very painful.
E
It is very slow, the QA takes a lot of time, and since we trigger it from the deployer, if it fails it retriggers the whole pipeline instead of just the job. It sometimes fails because we have flaky failures, and it's annoying to find out that, okay, a new pipeline needs to start instead of me just retrying the job. That basically adds 30 minutes, or an hour if the pipeline fails again, and you kind of need to keep an eye on the pipeline.
E
So if it fails, you quickly retry the job before the timeout finishes. I found that the deployment to canary and staging is taking around two hours because of it. I can put all these details in an issue if it's easier for you to see it, Amy, but if we could prioritize that, it would be great. I can do it, but I just don't have the time.
E
So I guess it would have a high priority, yes.
A
Okay, all right, I'll take a look through and see if we can squeeze that in. Great, thanks a lot. Awesome — so we are nearly at time, so to warn you in advance, I'm going to drop just before half past, but if any of you are able to stay around, then you can carry on with the social stuff.