From YouTube: 2021-04-05 Delivery team weekly APAC/EMEA
C
So, the type of incident: it's called something like "incident type". Let me find out.
C
So there's an incident type of "deployment related", but don't worry too much about that. The reason is that we don't actually have millions of them, so it's not a huge pain when I add them. I'm also still thinking about it a little bit, because in a way it captures two types of incidents: it captures things like this, where deployments are blocked because of an incident, but it also captures incidents that were caused by a deployment, which is slightly different. At the moment the numbers aren't big enough for it to be too problematic.
C
That's the main one we are tracking with it. It doesn't matter if you add it and it's the other one, that's also fine, but the one that we tend to track less consistently, and the one that is quite important, is the first one: deployments being blocked. The reason it is incident-worthy is that at that stage we can't deploy anything. So if...
B
For the label "incident type: deployment related", is this meant as "caused by a deployment", or meant as "a deployment was impacted by something"?
C
Sort of both, yeah, that's what I mean at the moment. As I say, we don't really analyse this data as thoroughly as we should, so I haven't yet committed to which one, and I don't know whether it matters that it's kind of both. Anything that we raise in delivery, I tend to put it on, just so that I can filter on them.
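For reference, since the point of putting the label on everything delivery raises is to be able to filter on it later, here is a minimal sketch of pulling those incidents via the GitLab issues API. The group path and label names are assumptions for illustration, not the team's actual values.

    # Illustrative sketch: list incidents carrying an assumed deployment-blocker label.
    import os
    from urllib.parse import quote
    import requests

    GITLAB_API = "https://gitlab.com/api/v4"
    GROUP = "gitlab-com/gl-infra"            # assumed group path
    LABELS = "incident,Deployments blocked"  # assumed label names

    resp = requests.get(
        f"{GITLAB_API}/groups/{quote(GROUP, safe='')}/issues",
        params={"labels": LABELS, "state": "all", "per_page": 100},
        headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
        timeout=30,
    )
    resp.raise_for_status()
    for issue in resp.json():
        print(issue["created_at"], issue["title"])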
D
Yeah, usually we don't use it for that. Let's say we deployed something that is broken, but it only gets caught two hours after the deployment. Then it doesn't get the "deployment related" label; it just gets the attribution to the piece of software that was broken. But if, let's say, it happens during a deployment, or it prevents a deployment from being executed, then usually we put that label on.
C
Yeah, that's a good distinction. But don't worry too much about that; I tend to stick it on and then I filter by them. So, everyone, welcome to the first of our APAC/EMEA delivery weeklies. A particular welcome to Graham, is this your first one? At the moment we've got these every two weeks at this time, and I'm very happy to review that and see how we go, particularly for you, Graham.
C
I know it's evening time for you. I will use the same agenda so that we get overlap with the EMEA/Americas one, and then it's a little bit easier for people to sort of see things.
C
So what I do is I copy up the template. I usually do that on a Monday for the EMEA/Americas one, and I'll do it on Tuesday for APAC/EMEA. Feel free to add anything you want to the announcements or the discussion; it won't necessarily be the case that the two meetings are the same. They are recorded, so people can catch up on those things, so just add whatever sort of current stuff we have at the meeting time.

It's also a social call, so it's not intended to be all the updates and things like that. Usually we have some announcements and maybe a bit of discussion, but then we use the bulk of the time to just find out what we've been up to and share whatever you like with each other.
C
So
I
haven't
got
an
mttp.
I
usually
put
the
image
in.
I
haven't,
got
an
image
because
there's
no
data
for
may
so
I'll
follow
up
on
where
the
mttp
data
is
for.
May
I
haven't
got
any
announcements
in,
but
just
on
discussion
doesn't
necessarily
have
to
be
so
much
discussion,
but
just
to
give
a
little
bit
more
visibility.
So
man
and
I
have
been
tracking
deployment
blockers
over
the
last
few
weeks.
C
It's kind of in relation to MTTP moving from 12 hours up to 24 hours; we now have to make a decision about when's the right time to drop it back down to 12 hours, and we're still getting a decent amount of disruption. Last week we had quite a number of incidents, pretty much all on the same day, which was disruptive. But also, just to share, I created a follow-up issue to look into this: there are definitely some patterns.
C
We see patterns around pipelines failing in the morning: the first one of the day fails the most. I think I've shared my theory for this one with you: it's most likely that a lot of things get committed at the end of the Americas day, which is when a lot of people are working, and the next pipeline we run is this one. Kyle is helping us with that one. There definitely are flaky tests impacting us, and also master fails quite a lot, which will impact us as well, so we're trying to narrow down on those things. Then last week I attended the quality staff meeting and was trying to chat about deployment blockers, but also some of these flaky tests and things. There are quite a lot of things we don't have a good way of handling; for example, we don't have a good way of quarantining flaky tests quickly, and it's hard to...
C
If someone does fail the QA tests, and they've done it with, say, a front-end change like a UI change, often the tests running on staging are the first time those tests have run, so it's not surprising that they fail. But then, because we have to run the full suite, that's a two-to-three-hour test suite, which is why it then takes so long to recover. So quality are looking into that; they're putting together some stuff to try and get it so that those UI tests can run on...
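The meeting doesn't go into how quarantining would be implemented. Purely as an illustration of the general pattern (shown here in pytest terms, which is an assumption; GitLab's own end-to-end suite is not Python), quarantining usually means tagging the flaky test and excluding that tag from blocking pipelines rather than deleting the test:

    # conftest.py (sketch): register a "quarantine" marker and skip those tests
    # in blocking pipelines unless --run-quarantined is passed (e.g. in a
    # separate non-blocking job). Names here are made up for the example.
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--run-quarantined", action="store_true", default=False,
                         help="run tests marked as quarantined")

    def pytest_configure(config):
        config.addinivalue_line(
            "markers", "quarantine(reason): flaky test excluded from blocking pipelines")

    def pytest_collection_modifyitems(config, items):
        if config.getoption("--run-quarantined"):
            return
        skip = pytest.mark.skip(reason="quarantined flaky test")
        for item in items:
            if "quarantine" in item.keywords:
                item.add_marker(skip)

    # test_example.py (sketch): tag the flaky test instead of deleting it.
    @pytest.mark.quarantine(reason="intermittent timeout on staging")
    def test_checkout_flow():
        assert 1 + 1 == 2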
C
This is just one aspect. The other one that came out is that database migrations have been a big focus over the last few months, but there may be other things as well. As we go through, this is kind of why raising incidents and having that tracking is super important: so that we can actually start to spot patterns and be able to say "these sorts of changes have caused three incidents over the last few months, and here are the numbers", and we can pull people into that stuff.
B
Who could work on this? And also the workflow, and, you know, picking the right label for the revert MR, for instance. The developers who really needed to fix this in this case didn't really know the workflow, and I also needed to look it up, because it's not very often that I have to look for the right labels for MRs and stuff like that. And the dev escalation engineer was in dev escalation for the first time and also didn't know what to do.
D
Yeah, I think that QA, since they started having their on-call rotation, have really improved, because now they have knowledge of how to handle these things. So probably the biggest part is engaging with them. So the incident, I think it was Friday, right? We were... or if not Friday, I mean it was last week, yeah. No, because on Friday you were not working, so it was before Friday.
D
So the problem that I saw there is that it happened overnight for EMEA. For me it was overnight, and basically the engagement started around 9:00 a.m., right, so it took four hours before we started handling it, plus everything else: the pipelines, fixing it, finding dev escalation, and so on. So it basically gave us a blackout window of half a day, even more, yeah, 12 hours more or less.
D
Yeah, exactly, but I don't think they have APAC coverage, so that's probably the same situation we are in, right? If it happens overnight, basically everything is shifted to the EMEA morning. If they had APAC coverage, probably by the time we were online it would already have been fixed. Maybe we would have had to retry the deployment, because the environment gets locked, okay, but at least it would just have been a matter of retrying a job and setting a variable, instead of waiting for the fix and everything.
C
Yeah, absolutely, absolutely. I think there's a little bit of a disconnect between the tests failing and people fully knowing what this means in terms of, like, stuff being blocked. I think they do start looking, because they know the tests have failed; they often do start looking, but I don't think they necessarily know the urgency of getting the MR prepared and picked until everyone comes online. So there are a few extra bits we can tighten up there as well.

And then, Graham, once we get you trained up, this will be a really good one. It's a great example of one where it's pretty low touch from our side, but being able to kick off that process a little bit earlier will save us quite a lot of time, assuming there are APAC people around. I think the untested bit is actually that we've never tried to kick it off early. So with this one that we had this morning with the migration failing, it would actually have been quite... I don't know what APAC cover we have for those sorts of failures; we'll find out.
C
That's our current situation, so we can test that out. But yeah, on the developer dev escalation process, it's unfortunate there isn't a kind of shadowing period; it does seem that you get someone random, and it's their first time, and they kind of expect that the person requesting them understands the full process, and of course we often don't. So yeah, there are still improvements going on there.
C
I think for that one, as always with these things... I mean, as we go through Q2, we have an OKR to create a deployment SLO that Myra is driving, and the idea behind that really is to give us a little bit of separation from MTTP. Probably the issue with MTTP is that it tracks deployment frequency, but it also tracks, like, how many release managers are online.
C
Right now, I think it's going to be hard to get MTTP back below 24 hours, to be honest, which, given all the things we're doing and how hard we're working, is really difficult.
B
Talking about the MTTP metric, I was wondering a little bit how many other metrics we have for the deployment process. Like, do we have fine-grained metrics, or logs and timestamps, for certain parts of running pipelines and things like that?
C
No, but that will be part of the deployment SLO work. There are kind of three layers to that; one will be the overall deployment pipeline.
B
In a previous job we had a team who were instrumenting, I think we used Jenkins, with fine-grained metrics for every step in any kind of pipeline running there. That was great, because it enabled developers and the whole company to really look at how long it takes to run this kind of container, for instance, and really start to improve all these timings, which shaved off a lot of time.
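As an illustration of that kind of per-step instrumentation (the metric names and the StatsD endpoint here are assumptions, not anything from the meeting), a pipeline step can be wrapped in a timer that emits one measurement per step:

    # Illustrative sketch of per-step pipeline timing, assuming a StatsD-style
    # collector listening on localhost:8125; names and endpoint are made up.
    import socket
    import time
    from contextlib import contextmanager

    STATSD_ADDR = ("127.0.0.1", 8125)

    def emit_timing(metric: str, millis: float) -> None:
        """Send a StatsD timing datagram, e.g. 'pipeline.step.build:5120|ms'."""
        payload = f"{metric}:{millis:.0f}|ms".encode()
        socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(payload, STATSD_ADDR)

    @contextmanager
    def timed_step(name: str):
        start = time.monotonic()
        try:
            yield
        finally:
            emit_timing(f"pipeline.step.{name}", (time.monotonic() - start) * 1000)

    # Usage inside a pipeline script:
    with timed_step("pull_image"):
        time.sleep(0.1)  # placeholder for the real work, e.g. pulling a container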
D
Indeed, we have something in the product, because CI jobs have markers: you can send a string to initiate a marker within the output, and this will start counting time. Three years ago this was only stored in the database, which, if we want to get numbers into Periscope, can already be enough, but maybe now we are exposing this as an API as well.
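This sounds like the collapsible section markers in GitLab CI job logs, where a job prints section_start/section_end lines and the runner records the elapsed time between them. A minimal sketch follows; the section name is made up, and the exact escape-sequence framing is from my recollection of the documented format, so it should be checked against the GitLab CI docs:

    # Minimal sketch: emit section markers around a step so the job trace
    # records how long the step took.
    import subprocess
    import time

    def marker(kind: str, section: str, header: str = "") -> None:
        # kind is "section_start" or "section_end"; the \x1b[0K ... \r\x1b[0K
        # framing is an assumption to verify against the GitLab docs.
        print(f"\x1b[0K{kind}:{int(time.time())}:{section}\r\x1b[0K{header}", flush=True)

    marker("section_start", "provision_database", "Provisioning database")
    subprocess.run(["echo", "long-running step goes here"], check=True)
    marker("section_end", "provision_database")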
B
Yeah, I mean, because if you track these metrics you would catch things like, for instance, certain containers taking longer and longer to execute over time because they get bigger and bigger and take more time to download. There you see where performance degrades, where you lose time, and where you could shave off time and things.
C
Yeah, absolutely, I think you're absolutely right. The one graph that Robert put together did just trend up and up and up, right? It's no surprise that deployments feel long now, so certainly do take a look at that.
C
So, just linking the epic.
C
Andrew's also going to help us out on that one as well. But yeah, there's the epic; it's got the child issues for the different layers, so feel free to go ahead and add comments. Some of the comments on "Define deployment SLO" do have kind of like proposals for how we could track it and what tools we could use, so feel free to add thoughts to that.