From YouTube: 2022-02-23 Delivery team weekly APAC/EMEA
A
So let us begin. So I want to use some of the time today to set some actions from the retro. These are kind of going to be proposed actions, because we haven't got the entire team here, but I've deliberately chosen this version of delivery weekly since we already have comments from Myra and Skarbeck, so we kind of get a little bit of cover from the Americas side as well. So thank you for adding your comment as well.
A
Henry, firstly, was there any additional stuff that anyone wanted to call out, in terms of stuff they felt particularly happy about in Q4? In fact, let's start with that: was there anything anyone wants to call out, like additional things they liked, for Q4?
B
Maybe it sounds obvious, but I think it's really impressive that we deliver our releases and ship our software daily, with high frequency. I mean, this is really one of the main targets of our team, right, and I think we really are exceptional at this, despite all of the obstacles that we run into very often. So I think, even if it might feel like we are used to doing this all the time, it's really exceptional to have this.
A
Yeah, thanks for calling that out, that's absolutely true. And MTTP at the moment is looking phenomenal, and that's down to release managers, you know, setting up a schedule and reacting and doing things, and everyone who's contributed to the tooling, and also the dev migration, right. So it's a full team effort for sure.
A
So I don't want to set an action that's going to increase that burden, right, but at the same time I think it would be worth us taking one or maybe two reasonably sized actions to just try and make this quarter a little bit smoother, less stressful, or more enjoyable, or whatever it is, compared to the previous one.
A
So there are some ideas already on the issue, but I wonder: does anyone have any ideas of either an action you'd like to take, or something that you would like to see change in some way, either something we do more of or something we do less of this quarter?
C
Sorry, okay, I've got three, right. We can pick and choose if it's too much.
C
So this is similar to what I discussed with Amy in the one-on-one.
C
The second is to automate the posting of that table that we currently create manually in the security release issue, and the third is to make it possible for ChatOps to trigger a release-tools pipeline instead of a deployer pipeline.
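For reference, both of the automations Ruben proposes here map onto standard GitLab REST API calls. The sketch below is purely illustrative and hedged: the instance URL, project IDs, issue IIDs, token names, and helper functions are hypothetical placeholders, not the team's actual release-tools code.

import os
import requests

GITLAB_API = "https://gitlab.example.com/api/v4"  # placeholder instance URL
TOKEN = os.environ["GITLAB_TOKEN"]  # assumed access token with API scope

def post_table_to_security_issue(project_id, issue_iid, table_markdown):
    # Post the table (today built by hand) as a comment on the security release issue.
    resp = requests.post(
        f"{GITLAB_API}/projects/{project_id}/issues/{issue_iid}/notes",
        headers={"PRIVATE-TOKEN": TOKEN},
        data={"body": table_markdown},
    )
    resp.raise_for_status()

def trigger_release_tools_pipeline(project_id, ref, trigger_token):
    # Start a pipeline in the release-tools project via a pipeline trigger token,
    # which is what a ChatOps command could call instead of the deployer pipeline.
    resp = requests.post(
        f"{GITLAB_API}/projects/{project_id}/trigger/pipeline",
        data={"token": trigger_token, "ref": ref},
    )
    resp.raise_for_status()
    return resp.json()

A ChatOps command handler would then only need to collect the ref and call the second helper, rather than knowing anything about the deployer pipeline itself.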
A
Thanks, Ruben. And Henry, was it you who was talking at the same time? Did you have ideas?
B
Yeah, just one action, which probably isn't a lot of work, so we can probably just do it right away. One of my suggestions was that, as release managers, we don't too often end up, you know, managing incidents and then being the DRIs for them.
B
We just talked about this in the last little meeting also, and I think it would help if we could find the right places to update the process for how to deal with incidents and make it clear everywhere where we might be missing it. I need to check where it is, but I think we have a lot of documentation around incident management and stuff, but I'm not always aware of it, I'm not always looking at it. So I think maybe just some documentation updates, if needed, could help here.
A
I agree. So it is in the handbook, it's a little bit blurry, and I've now talked to some of you around this, but I think you're right, there is some action on our side.
B
Yeah, I could just open an issue for that, and then go through the documentation that we have, try to figure out what needs to be clarified, and then make a proposal in certain places, I guess.
A
I would like us to try and do a less hands-on version of that, just because I know we're super busy and we do have other things. So the handbook page I've just mentioned says that at the point the engineer on call is engaged in the incident, they are the DRI. Now, what we're seeing at the moment, for various reasons, and one of the biggest ones is Woodhouse not having the link to email addresses anymore, is that things are not getting assigned.
A
I think we need to be very active about saying, hey, engineer on call, you know, can you take this DRI, and actually be explicit about that and make sure we set it on the issue, right? So that's probably the one that would be good for us to document. We do have a runbook in release-tools, like a release manager guide, for instance, so maybe that would be a good place for us to do it, and at that point what we're doing is eliminating the ambiguity where nobody's sure who owns it.
A
What we're creating is a situation where we either get a yes or a no, and depending on whether they are yeses or noes, we can start to figure out what the problem is. Some of them will be noes, and I think there'll be noes for great reasons, like there are too many incidents, or I don't know how to deal with this, and then we can figure out how to solve those.
B
Then we could actually start with the EOC, and if the EOC doesn't know how the process works, which probably is true in most cases, they need to find someone who knows what to do, or look into it and then drive it forward, and that's where the problem starts, right. If they don't know, then who is taking over? And then we again have the problem of: how do we delegate? And this is, I think, I don't know, an organizational and also a knowledge problem.
A
Yeah, great, absolutely agreed. Yeah, let's review our guide and see where we go from there. And I think all of the people we need to use have rotations, so we don't need to go and find the right person so much. It's like we do for quality: you have to go and look up who's on call, but there is always a quality on-call, and if we can't find one we can escalate to the quality managers.
A
There's always an engineer on call; if we can't find one, we escalate to the IMOC. And there's a dev escalation, and if we can't find a dev escalation, that automatically escalates up to the engineering managers. So I think we need to, yeah, that's great actually, let's review our incident management guidelines on our side and make sure we all have a kind of shared understanding, because there definitely are going to be some improvements that need to come from here. There's quite a lot of things going on, yeah.
B
Yeah, I guess just getting used to delegating, instead of trying to cut things short because we feel like we can, because we know how it works, would help with that, right. That would often help, probably, and then we would also train other people at the same time, and that would improve it for everybody else.
D
I was going to say: we've had a lot of incidents too where it's delivery, like QA jobs are failing, which is the canary to, like, an infrastructure problem, right. And it's like you're logging incidents and QA tests are failing, and I understand, especially from an EOC side, it's hard to really jump in and feel like, that could just be tests failing. But you know, it just so happens that it's tests failing which holds us up, and it turns out it's an infrastructure issue.
A
I think we need to get better. We do, yeah. I intend to go through the delivery blockers stuff this morning and take a closer look at the numbers on recent stuff, because there are conversations going on around staging, and how we can improve, and ownership, and how we can make it easier to debug. But actually what I want to see is the root cause, because actually, I think recently, I'm not sure if I've seen any software change failures.
A
Failing those tests, I've seen a lot of config changes from various teams, I think including us, but you know, that's an interesting workflow: we're failing at deployment, we're asking quality to debug, and then it's coming right back around to infra. So I'm going to pull those things out this morning and see what we can do. But for now, let's take an actual small action, and then hopefully it's not super overwhelming. Henry, maybe review the... actually no, Pacman, as release manager.
A
Would you like to review the release manager incident guide and check it matches with your understanding of what we're sort of expecting? And then for the next quarter, I think we all try and hold each other accountable, so when you're a release manager, or when you're involved around incidents, let us refine our process so that we're not always left holding the incidents. And then that's maybe a manageable action for us all.
A
Okay, great. So how about we set two actions for this quarter then, based on the Q4 retro? One is to make...
A
Two, minimum, at least two improvements to release tooling to reduce toil. So Ruben, I know you've got some suggestions there; I think Robert's opened a couple as well, but we make sure we get at least two of those prioritized. Plus we review the release manager incident guide and start proactively, like, helping spread knowledge, so that we're not only incident resolvers.
A
Great, great. Unless anyone wants to say anything else about retros or actions or improvements... no? Great, over to you.
D
The current thinking is we will kick off the change with the 21:00 UTC auto-deploy on a specific day, simply because that'll be kind of at the start of Myra's day, and then I come online directly after her, and both of us have had the most involvement in the project, so we will be watching those first few pipelines the closest. So yeah, obviously if you have any questions and concerns, let us know. For those of you who are release...
D
Managers and following auto-deploys, just at least familiarize yourselves with the new layout. It's a lot simpler than it really looks; it's really just changing the order of things around, right, at the end of the day, that's what it is. There's no reason why deployments should fail or anything. We may have some teething problems if we haven't set up our GitLab CI file dependencies exactly right, but I'm pretty sure we'll iron them out quickly.
D
Thinking back and reflecting on this week, though, it has got a very interesting ramification, in that some, or a subset, of the incidents we have had were when we've been deploying to staging. And so in the new pipeline model, we would have gone to staging canary, which would have been fine; we would have gone to production canary, which would have been fine; we would have had baking time; and then there would have been a manual promote to deploy to both staging and production.
D
We would have gotten held up by these big incidents in production, and therefore been running production canary and staging canary on a newer version. To be clear, I don't think this is an issue. It just means that production canary, for example, is baking one release for like a day instead of, say, an hour before production is upgraded. So I don't think it's an issue, but.
D
Just another point we should also consider when reviewing all these incidents, and I'll do this as well personally: is there anything I can take away from them that would have made things worse somehow if this had been with the pipeline reorder? But I'm pretty sure it should be fine. The big thing with the pipeline reorder, once again thinking about incidents, is that the goal is to have staging and production basically almost one to one: whatever the one does, the other one needs to do also.
D
So that's probably the biggest takeaway and consideration. If you roll back production, you probably need to roll back staging, and vice versa, whereas before it didn't really matter as much; we could roll back production and then just start again and go through staging. Now, to really accurately test mixed deployments on the versions we should be testing, production and staging need to stay as one.
B
I have one question: why do we have a baking time in staging too, after staging canary?
D
So, to be clear, we go staging canary... sorry, yes: staging canary, production canary, baking time for that production canary, and then, after the baking time is done, we do staging and production with, like, a 10-minute kind of skew, just to catch any deployment problems, if that makes sense. So there's still the baking time for production canary, but there is no baking time necessarily for staging canary. So.
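To make the ordering Graham describes easier to follow, here is a minimal, purely illustrative Python sketch of the sequence. The stage names and the deploy() helper are hypothetical, not the real release-tools implementation, and the bake and skew durations are only the rough figures mentioned in the discussion.

import time

BAKE_TIME = 60 * 60      # bake production canary for roughly an hour (configurable in practice)
DEPLOY_SKEW = 10 * 60    # roughly a 10-minute skew between staging and production

def deploy(environment, version):
    # Placeholder for triggering a real deployment pipeline.
    print(f"deploying {version} to {environment}")

def run_auto_deploy(version):
    deploy("staging-canary", version)     # the only pre-production check before customer traffic
    deploy("production-canary", version)  # straight on to production canary
    time.sleep(BAKE_TIME)                 # bake the release on production canary
    # In reality a manual promote happens here; after it, staging and production
    # move together so the two environments stay almost one-to-one.
    deploy("staging", version)
    time.sleep(DEPLOY_SKEW)               # small skew to catch deployment problems early
    deploy("production", version)

The property the sketch tries to show is the point raised earlier: after the reorder, staging and production are promoted as a pair, so rolling one back implies rolling back the other.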
D
It's not... so, you know, mentally, before, we were like, you know: staging, canary, production. It was a very clean, simple model. I'd argue we've kind of misused, we're kind of misusing the term staging canary; it's not really a staging canary in how we're treating it and how we've set it up from the beginning.
D
And that's why failures in staging canary now are critical, because that is the only line of testing we have before something goes to production canary and customers. So, and this isn't entirely true, but I would sort of consider the importance flipped on its head. Before, it was like...
D
Staging canary was new, and we did have good, we've got, you know, tests on it, and it is blocking, but there was a lot of focus on staging, right, the confidence was in staging. We need to flip that on its head: we really need to be confident in staging canary, because after that is production canary, and then staging and production happen after that, so yeah.
D
But everything is blocking, so at the end of the day, if staging canary fails, it's not like you can force it, like, I don't even think we can force it through. At the end of the day, you know, everything is blocking now, besides staging ref, which is fine, and that should be so. You know, we should always treat every failure as a blocker.
A
Awesome, thanks for sharing that, Graham. Great. So, not necessarily one to discuss unless anyone has read it and already wants to add comments, but on C I just wanted to highlight that I have merged in the first iteration of our strategy for this year.
A
I will have some additional bits: you'll notice there's a gap around Q2, and there's a couple of things I mentioned on the MR that I've pulled out, just so that we can actually think about them and shape them a bit, and then word them to fit back in. So this is not a complete strategy for now, but hopefully it gives a little bit of guidance on, like, how much do I need to care about solving this thing. You can actually see some of these things do already fit on our strategy for this year.
A
So let me know if you think there's any gaps, or if you think there's anything else we should be bringing in or changing. You know, we could bring in other things and just, like, reduce some of the other stuff, but what I've tried to do is already think through all the suggestions from the platform direction discussion which we had, where we've kind of already raised these things. So don't feel like you have to change this up; just if you would like to, please do.
A
Awesome. Is there any other stuff people want to bring up in the discussion?