From YouTube: 2023-05-17 Delivery team weekly APAC/EMEA
A: Yesterday was a super fun day: the internet was out, and they were like, "Oh, it's gonna be just a little while." I was nervous, though, because the very first thing they did was send the compensation terms, which is never a good sign. They were terrible terms, and it wasn't a confidence boost.
A: We don't normally get anything at all. Actually, this one was the first time ever we've been contacted to say "your internet's down". That probably means it wasn't the ISP's fault, and they were like, "It's fine. It's all broken."
D: Because we have ISP data: most of them are also mobile providers. Usually you get a SIM card connected to your home connection, so in case of extended downtime they give you unlimited data on the SIM. But I have no signal at my place, so I just can't.
A: Right, yeah, that was my second problem: our windows just don't let mobile signal through, so you don't get any mobile signal inside, so everyone was just outside for the holidays. I don't know. Hopefully today will be better.
A: Yeah, that's the better way to do it. Backup, cool. So, on the agenda: I've made a slight change, because what I know is tricky to see is who's actually out across the two teams. Not everyone has great visibility, so I'm going to see if it makes it easier to just have a section at the top of the agenda, rather than having them on specific meeting dates, so that we can see. Feel free to play around with that or give feedback.
A: Depending on how that works. Cool. I know this is the short one, so I don't want to spend ages on this, but I did just want to give you a quick overview. The release manager workload metrics are nearly done; I'm just finishing up documentation, and then that epic will close out.
A: Let me just give you a super quick walkthrough of what you can find in there. The biggest problem with these metrics is that they're hard to find, because they're in a spreadsheet, so I'll have a think about that. For now I'll just link the spreadsheet from the top of our Slack channel, so at least there's a jumping-in page, but I will figure out what we can do to make this more easily discoverable. So: the dashboard pulls in the averages that we have so far.
A: I haven't really figured out what the right amount of data to use for an average will be: whether we want to look at it quarterly, or all the data we have, or whatever the timeframe is. For now it's just everything, and then you get individual sheets to show you what's going on for the individual metrics. Deployments are tracked week by week, and that is to keep it in line with the deployment blockers data, so we can make use of that.
A: This is basically trying to figure out how much of our week we spent having to do stuff to deployments, which is surprisingly high, given that we have an almost fully automated process. You all know that it's because of the incidents, but people outside of Delivery definitely don't know that.
A: Yeah, exactly. So it's pulling in two bits of data. We've got the failed jobs from the new dashboards that Systems have got; when the data starts coming through for those, they come in. And then we've got the number of deployment failures, which is from deployment blockers, and the total time spent on deployment failures, which is the number from deployment blockers plus the admin that I know you all end up having to do every time we have incidents and things like that.
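The computation described here (total time on deployment failures equals the deployment-blockers figure plus a per-incident admin overhead) can be sketched as follows. The half-hour overhead and the function shape are illustrative assumptions, not the spreadsheet's actual formula:

```python
# Sketch of the "total time spent on deployment failures" metric: hours
# logged against each failure in the deployment-blockers data, plus a flat
# admin overhead charged per incident.
ADMIN_OVERHEAD_HOURS = 0.5  # assumed value for illustration

def total_failure_time(blocker_hours: list[float]) -> float:
    # blocker_hours: hours recorded against each deployment failure.
    return sum(h + ADMIN_OVERHEAD_HOURS for h in blocker_hours)

print(total_failure_time([2.0, 1.5, 0.75]))  # 4.25 logged + 1.5 admin = 5.75
```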
A: And then I've tried to do something similar (I won't spend too long on it) for post-deploy migrations: largely the same, but that one is a little bit more straightforward. That one was looking a little bit more at the deployments...
A: Sorry, the incidents, because the actual normal case is fine, but once we're blocked, we're so blocked, so that one's a little bit different. And then we have the releases. For the monthly release there's not an enormous amount of data, just because either we're in a monthly prep period or we're not, and when we're in it, everything you're doing is related to that.
A: So the main bit of information I've tried to capture here is just: what is the end-to-end prep time for a monthly release? The security release is largely the same, but I've broken this one down a bit more, which I may live to regret, because this is the hardest one to keep up to date. But at the same time, particularly from Orchestration, we will be using this data for our Q2 OKRs to try and make some workload reductions.
A: So at least for now it's definitely useful for us to be able to easily see that, for example, we spent quite a lot of hours on the first steps, which is just pure admin, but the merging-time steps are normally the really time-consuming bits. I think that's not surprising, based on what we've seen with patch releases. And then finally we have patch releases: again, very, very similar. This one has a little note on G, which is when we moved to the new patch release process.
A: So you can see there are some improvements to time, which is expected, but it's still a reasonable number of hours to get through a patch release, for various reasons, which is why we didn't go ahead and extend the maintenance policy.
A: So this will hopefully give us a place where we can decide on what the most time-consuming stuff is: which process, or which aspect of which process, do we need to go after. And also, hopefully, start to answer questions of the form "how long does it take to do X?" Like: how long does it take to do a patch release, or to merge in security fixes, or to publish things?
B: For the patch releases, the last few ones, 15.8.6 and the others: could it be that these took longer because they were for the Git security release? Because that's...
A: Yeah, okay, that's right, exactly. So, did I... I didn't have it in that one. On the security response, yes, I've tried to indicate whether there were special cases, because that definitely affects things, so I've left some notes on there where we saw something particularly interesting. For this one, for example, two of the fixes were reverted, which just generated a heap of extra work for us.
D: So when we talk about patch releases, these are not security releases, they're purely patch releases, correct? So, what Ruben said about 15.8.6, which has an outlier amount of time spent in preparation: is that because it was the Git upgrade, which was a public security vulnerability, and so it was not handled as a security fix but as a general patch, even though it was actually a security release? Is that the case?
B: I don't remember the actual release versions.
A: I think that one is separate; I think this was just a genuinely tricky patch release. No?
D: I'm not talking about this one, I'm talking about 15.8.6.
D: 15.8.6 has four hours of preparation time, which is unexpectedly long compared to the other ones after the change, the process change, right? Because we used to have a long preparation, and after the change it should be short, right? So I was trying to understand why.
A: I don't think there was anything special; I think this was just an indication of how our process can stretch out now. I don't think there was a particular problem with this one, because I didn't note anything on it, so I think it was a straightforward one. I'm going to guess it was just a case of: we started the patch, and then we weren't in a rush to finish it, so we steadily went through the prep and then moved through the...
A: It might be that this gets delayed by other things, like deployments, or we were doing other releases. But I think it's still useful: even though it might not be someone sat at a keyboard typing for four hours end-to-end, it's a good indication that, if that was fully automated, you wouldn't have to think about it for four hours. So I think look at it more like that. I actually don't think that patch release was a difficult one; looking at the other numbers, I think we just started slowly.
B: Counting QA failures has always been a problem for us. I think Myra, in her shift, had started creating a release-tasks issue for every flaky QA failure, and that is a lot of admin work. So I wonder if we can have one release-tasks issue, say, per week, and every time we have a flaky QA failure we add, like, half an hour to it, something like that.
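The suggestion above amounts to a weekly rollup: one bucket per ISO week, charged a flat half hour per flaky failure, instead of one tracking issue per failure. A minimal sketch; the half-hour figure comes from the suggestion, everything else is illustrative:

```python
from collections import defaultdict
from datetime import date

HOURS_PER_FLAKY_FAILURE = 0.5  # the "half an hour" from the suggestion

def weekly_rollup(failure_dates: list[date]) -> dict[str, float]:
    # Group flaky-failure occurrences into one bucket per ISO week,
    # accumulating a flat time charge per failure.
    weeks: dict[str, float] = defaultdict(float)
    for day in failure_dates:
        year, week, _ = day.isocalendar()
        weeks[f"{year}-W{week:02d}"] += HOURS_PER_FLAKY_FAILURE
    return dict(weeks)

print(weekly_rollup([date(2023, 5, 15), date(2023, 5, 16), date(2023, 5, 22)]))
# {'2023-W20': 1.0, '2023-W21': 0.5}
```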
B: We actually have a metric that counts the time spent retrying jobs, but the problem is that it uses the webhook feature of the product, and from what I've seen over the last couple of months it's not very reliable, so, like, if a...
A: Not necessarily, unfortunately. I think that's a great one, though: let's open an issue and explain what we need, and then I think someone can give us a hand on directing it to the right stage group.
D: I want to point out that, even though I'm not the biggest fan of the admin work related to creating those issues for tracking failures and things like that, they do give a sense of ownership of the problem. You don't want to create an incident if it's not an incident, but you want to make sure there is someone assigned to solving that problem, and having that issue kind of helps.
D: So if we can find some sort of middle ground where the thing can be generated by itself, so we don't have to do the admin work of writing up or describing what is failing, but we still have an issue, or somewhere that can be assigned, to say "okay, this QA engineer is looking at this" or "Distribution is looking at this", that kind of gives you a lightweight incident. That's the thing, right? Because that's what it is: a lightweight incident tracker.
A: Then these metrics, plus all of the delivery metrics that we have now, I think put us in a really good place to be able to produce a sort of monthly overview of how releases and deployments went, what the biggest problems were, and who we need to help us. So I've opened another epic. I didn't want to bundle it into this release manager workload one, because that's taken me long enough, but I've set up another epic so that, when someone gets some capacity, we can try and actually move forward on putting more of this information in front of people.
A: Awesome. I will share this stuff in Slack as well; feel free to have a poke through and ask questions about things that don't make sense, or that we should be adding or improving, so we can start using those.
A: Cool. There is a proposal up that will move the release date off the 22nd and onto the third Thursday of the month. I wanted to see if anyone had any thoughts, so, Ruben, do you want to verbalize that?
B: I just put some examples of what the dates would look like. So, this...
A: Okay. I think the main benefit is that we won't have random weekend releases, which are difficult. But, yeah, we'll need to adjust our processes and things a bit, and I'm hoping it will become a bit more predictable, because everything would adjust around this.
A: So stage groups would work off a different milestone, and if we go with the Thursday, which it looks like we probably will, they would have until the previous week to wrap up the milestone. Then we could, for example, just say that whatever the last deployment on the Friday is, is our candidate. Or, yeah, we can figure out our process and then just work around a predictable number of days.
A: So hopefully that makes things a little bit more straightforward, and then we could also set release manager handovers to be around that as well. It could be, like, the third Friday of every month as the handover date, or something like that.
C: Just a very quick point: while we're revisiting all this, can we also look at including times in a lot of these messages and notifications from the tooling? For example, there's "release candidate has been tagged", and on the 15th the release candidate is tagged, but in my time zone I'm always like, "it's the 15th now". Things often happen in the Europe or Americas time zones, so in APAC we're like... What I'm trying to say is just...
D: We just don't have strict times on any of those. The only time we have is for the release: the release has to start preparation at a given UTC time, because it has to be ready by the blog post publishing, which has a fixed time. Everything else we're in control of, so we can do what we want, but everything else is more about a number of days before the release. So we don't have a fixed time at all, right? We just decide; maybe it's the EMEA release manager that is doing it.
D: I think what we can do here is this: it used to be two working days before the release, if I remember, and the working days are the tricky part. I think the confusion comes from working days, because then you don't know when this will happen. So if we move to the third Wednesday of the month, then we can say the first Wednesday of the month is the release candidate, and we can say UTC morning, something like that.
A: I think the other thing that might be worth adding in those as well: a lot of our stuff does move around based on incidents and other things. So actually, what I think we should also be making sure people understand, and trying to help communicate, is that things haven't happened until they've been announced. I think people make assumptions, like...
C: Sure, and I think that's the key, too: because the messages are in Slack, when we post them people can mouse over the time, or whatever, and it converts it to their local time. But maybe we should make that clearer, as well as having the message itself indicate "as of this message, this has now happened", kind of thing.
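Besides the hover behaviour, Slack's message formatting has a `<!date^…>` token that clients render in each reader's local time zone, with a fixed fallback string for clients that can't. A small sketch of building such an announcement line; the message wording and function name are illustrative:

```python
def slack_local_timestamp(epoch: int, fallback: str) -> str:
    # Slack clients render <!date^...> tokens in the viewer's local time
    # zone; `fallback` is shown wherever the token can't be rendered.
    return f"<!date^{epoch}^{{date_short}} at {{time}}|{fallback}>"

# Hypothetical announcement text for illustration.
msg = "Release candidate tagged as of " + slack_local_timestamp(1684308600, "2023-05-17 08:10 UTC")
```

Posting a line like this would make "as of this message, this has now happened" unambiguous across APAC, EMEA, and the Americas.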
B: Go ahead; mine is just tangentially related to what we're discussing, so go ahead.
D: Yeah, I just wanted to point out that the 22nd is fixed in code inside release-tools, so there would be a need to update that: checking where we are relying on the date and making sure it's adapted to the new workflow.
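Since the 22nd is hard-coded, moving to the third Thursday means computing the date instead. A sketch of the date rule only, in Python here for illustration; this is not the actual release-tools code:

```python
from datetime import date, timedelta

def third_thursday(year: int, month: int) -> date:
    # Thursday is weekday() == 3: find the first Thursday of the month,
    # then step forward two weeks.
    first = date(year, month, 1)
    offset = (3 - first.weekday()) % 7
    return first + timedelta(days=offset + 14)

print(third_thursday(2023, 6))  # 2023-06-15
```

Anything else in the tooling keyed off "N days before the 22nd" would become "N days before the third Thursday" in the same way.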
B: I don't know, so: in Systems we've been discussing having a dashboard where people outside Delivery can have a look at, say, auto-deploy-related information or release-related information. I wonder if we can have a sort of announcements dashboard. Currently we announce things in Slack, but the problem is, you know, that goes: you get newer messages and that announcement is gone. Many people miss it.
A: It's totally different, okay, 100 percent; we can talk about it now. I think this is actually the stuff that will be a really interesting beginning of a path for us, because I think this is probably the easiest small bit around surfacing information about our stuff and what we're doing. I think announcements is a great first, sort of obvious, piece, and that's information that people already want.
A: But it's not the only information; I think stuff like the diffs on deployments is another. I think there's lots of information that we currently surface in Slack that would be great to have just on our landing page. So I think for now we should make this look like what we need it to do, in a dashboard or somewhere like that, and then figure out next steps.
A: Yeah, no, that makes sense. I agree, and I think that's a great first step. Then, probably a little bit further down, we can work out: is there a way we can actually put more data together and make this a bit more interactive, or, you know, what does that look like?
A: Maybe I should say, when I say "a little later", I'm actually thinking about our roadmap: quite a while later, based on capacity, not in the next couple of months, but it's not a million miles away either. Okay, so I'll keep you posted; you can follow along on the MR for that, and I'll let you know if any other stuff comes about. The tricky thing will be the transition, so we need to figure out what happens when we do change this.
A: Great. Should we jump to release management stuff?
B: Okay, so, it's been a long time since I've done this. I think we start with the package graph.
B: This issue was 14259, which was the, I don't know how to describe it, the Gitaly-related issue that we figured out was not caught in staging Canary and was only caught in staging. So even though the revert MR was merged and deployed...
B: We thought that the issue was still there, so the deployment didn't proceed, and it could have been stopped much earlier. But, yeah, I'll just address this: it's no major concern. It's just that, because of a bug in the pipeline code, it generated a lot of pipelines, pipelines that couldn't go forward.
B: Okay, so because of the large number of blockers the deployment frequency dropped: earlier it was at, like, eight, then seven; this time it's at five.
B: No, it doesn't look like it. I'll just try one more; if this one is not... okay, I think it is this one. Why is it not showing data for seven days?
B: Yeah, I think this is the latest one, today's. But that's the problem: it shows only for one particular version.
B: I think that's it, right? So these are all the... this is the blockers for this week.