From YouTube: 2022-03-07 Delivery team weekly EMEA/AMER
A
Okay, welcome everyone, welcome to another Monday. So we have a couple of announcements, which I'll let you read, and then we have quite a few discussion items. So let's kick off.
Firstly, I want to say thanks, everyone, for all the work last week to get through the security release improvements super quickly. Much appreciated, and hopefully that sets us up for a much smoother security release next time around. On Thursday Scarbeck and I are attending a retro around that release, so I think we will cover the main points that got raised last week.
Awesome. So, item B: I want to see what you all think. I know there's been a lot of discussion going on around incidents and the severity of incidents, and also the fact that, I mean, Ahmad, I don't know how many times you've been asked recently, but every time we raise an incident people question: what's the impact of it being an S2?
A
The
reason
a
lot
of
that's
coming
about
is
because
of
just
the
number
of
incidents
happening
and
the
fact
that
s2s
are
blocking
for
future
flags
and
and
other
changes
now
the
reason
it
has
always
been
like
that
is
because
of
the
risk.
So
when
we
have
got
an
incident
running,
we
are
in
a
state
where
we
can't
deploy
code
and
therefore,
if
something
does
happen,
we're
not
in
a
very
good
position
to
respond
to
that.
So
I
wanted
to
hear
what
your
thoughts
are
around
a
proposal
to
to
alter
this
now.
A
My
kind
of
hope
is
that,
by
shifting
the
message
we
might
be
able
to
kind
of
get
better
support
on
the
incidents
that
really
matter
for
us.
So
my
proposal
is
perhaps
consider
that
we
make
deployment
related
incidents
in
s3
unless
they
affect
production,
and
that
includes
canary
or
they
affect
they
impact
in
some
way
like
hold
up
or
include
a
critical
commit
and
allow
a
critical
commit
to
be
defined
by
the
release
manager.
A
So
that
may
be
that
you
want
to
get
that
into
production
in
order
to
do
a
patch
release
or
the
monthly
release,
or
it
could
be
that
you
know
there's
a
long-running
post-deployment
migration
or
something
you
want
to
push
through
or
or
something
else
that
you
deem
to
be
needing
an
s2
and
the
pros
around
this
are
we
may
reduce
the
number
of
incidents
that
are
kind
of
going
to
the
engineering
call
and
therefore,
hopefully,
the
ones
that
really
matter.
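A minimal sketch of the rule being proposed here, in code form; the function and parameter names are hypothetical, not an existing tool:

```python
# Proposed rule: deployment-related incidents default to S3 unless they
# affect production (including canary) or hold up a critical commit, where
# a "critical commit" is whatever the release manager designates.

def proposed_severity(affects_production: bool,
                      affects_canary: bool,
                      holds_critical_commit: bool) -> str:
    """Severity a deployment-related incident would get under the proposal."""
    if affects_production or affects_canary or holds_critical_commit:
        return "S2"  # still blocking
    return "S3"      # staging-only breakage no longer blocks by default
```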
B
And the second thing is that I think these concepts of severity and of whether something should be blocking maybe need separating in many cases. It's good to have a severity, but severity can mean a lot of things, and not in every case does it need to be blocking, because an incident can be severe for other reasons, not because we need to block deployments or feature flag changes or something else.
So I would like to have something like a label, called, I don't know, non-blocking, or non-deployment-blocking, or non-feature-flag-blocking, which can be added to an incident with severity S2 or S1 just to signal: this is a high-severity incident, but it shouldn't block feature flag changes, or it shouldn't block deployments, for instance. I think that would make us more flexible, and it would be easy to use.
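A minimal sketch of that label idea, using the placeholder label names floated above; none of this is an existing mechanism:

```python
# An S1/S2 incident blocks an activity unless it carries an opt-out label.
BLOCKING_SEVERITIES = {"S1", "S2"}

def blocks_deployments(severity: str, labels: set[str]) -> bool:
    return severity in BLOCKING_SEVERITIES and "non-deployment-blocking" not in labels

def blocks_feature_flags(severity: str, labels: set[str]) -> bool:
    return severity in BLOCKING_SEVERITIES and "non-feature-flag-blocking" not in labels
```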
C
There are S2s for staging blocking production, when that doesn't make a lot of sense, so we could have a label for staging-only, as in: this is only blocking staging, but not blocking production. That would make it more flexible. And regarding your proposal, if I understood it correctly, we would then have S3 incidents that are associated with staging and staging canary only.
I think it could work. It would worry me a bit that we are not getting enough attention if we label it as S3, that we might not get Quality on board or Development on board, and that would leave the incident basically on us, which is something that...
D
The issue is, I don't want to say most, but unfortunately most of them are QA jobs failing. It's not frequent that we get Gitaly or Kubernetes issues, and even in those situations it's blocking, right; we need someone to basically jump in. But with QA failures I have no clue what to do.
So yeah, then I recommend keeping the current situation: marking it as S2, or something that basically blocks deployment, is better, because most of the issues are actually coming from QA jobs.
A
Okay. So there are two things in here, and I think Henry touched on the other aspect, which is certainly the other piece we need to do: having more flexibility, or giving the engineer on call more flexibility, so that when an incident is in action they actually have a bit more control over saying: am I okay with other changes taking place? Can feature flag changes take place?
Can deployments take place? And have some sort of mechanism, probably labels, to control that. Does anyone have any ideas around what, if anything, we can do to improve this? Because I definitely see there are quite a lot of times where we raise an S2 incident and get asked: why is this an S2? I guess I'm curious how frequently that is happening, and has anyone got any ideas on how we can reduce having to keep justifying these?
E
I don't think it's asked often, but normally when I create an incident, which I haven't done for a while, I usually just note, either on the Slack channel or quickly on the incident declaration, that I don't need EOC or CMOC assistance, and I'll start pinging the QA persons directly as necessary.
That helps eliminate some of that conversation pretty quickly, still with the side effect of, you know, blocking feature flags and such.
A
Okay, well, in which case that is super helpful, so thank you. I will not change this, so deployment-related incidents will all continue to remain as S2. What I will do is continue with an action I have from the infrastructure management meeting, to think about the other piece Henry mentioned, which is around how we separate these things out a little bit. And the piece that goes alongside that is also around:
How can we give engineers on call better visibility, so that deployments are not necessarily assumed to be the cause of new changes or new problems happening in production? How can we actually get better visibility of where we're up to on the deployments and how that's affecting things? Because, as we start to increase deployments over this year again, we are probably not that far away from more or less always running something, and at that point we don't necessarily want to be permanently blocking deployments to investigate things.
C
Yep, thank you, Amy. So, you have probably heard about the new order of the coordinated pipeline, and that we are doing this to account for mixed-deployment testing.
I just want you to be aware of what the new order is and how it is going to look. First of all, I think we all know that the coordinated pipeline goes: first staging canary, then staging, then post-deployment migrations in staging, then production canary, then production, and then post-deployment migrations in production.
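For reference, that ordering written out as a plain list; the stage names are informal descriptions from the explanation above, not actual job names:

```python
# Coordinated pipeline order as described in the meeting.
COORDINATED_PIPELINE_ORDER = [
    "staging canary",
    "staging",
    "post-deployment migrations (staging)",
    "production canary",
    "production",
    "post-deployment migrations (production)",
]
```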
This one doesn't block; it takes around one hour or one hour and a half. Then we execute the smoke suite and the main one; these are blocking. Then we are going to execute the baking time, which is going to last 30 minutes instead of one hour; that is going to change. Then we are going to move to the manual promotion.
Now, this part is the very interesting one, because once we click on the manual promotion, it is going to do two things. First of all, it is going to automatically deploy to staging, and then after 30 minutes it is going to execute the production checks, the ones we have right now that check production health and whether canary is up, and I think that's it, and then it is going to start the deployment to production.
C
So
that
means
that
deploying
to
staging
and
deployed
to
production
are
going
to
be
executed
roughly
at
the
same
time
after
those
these
two
deployments
are
finished,
then
the
post-deployment
migrations
on
the
stadium
are
going
to
be
executed.
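A minimal sketch of the promotion flow just described; every function here is a hypothetical stand-in for a pipeline job, not a real API:

```python
import time

def deploy(env: str) -> None:
    print(f"deploying to {env}")  # stand-in for the real deploy job

def production_checks_pass() -> bool:
    # Stand-in for the existing checks: production health and canary up.
    return True

def run_post_deployment_migrations(env: str) -> None:
    print(f"running post-deployment migrations on {env}")

def manual_promotion() -> None:
    deploy("staging")             # kicked off automatically on promotion
    time.sleep(30 * 60)           # ~30-minute wait before the checks
    if production_checks_pass():
        deploy("production")      # overlaps the still-running staging deploy
    run_post_deployment_migrations("staging")
```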
D
Is there a retry mechanism for it?
C
Yeah, yeah, we can; this job should have a retry button with which you can retry it.
D
Okay, thanks.
B
I have a question: after the deployment to g-staging, we are not waiting for the staging QA tests before we deploy to production, right?
B
Not
okay
got
it,
so
we
are
mainly
relying
on
a
staging
and
a
canary
like
like
qan
g-station
canary,
to
do
signals
problems
as
well
as
in
production.
Canarity
signals
problems,
and
not
so
much
in
the
main
stage
anymore.
The most important thing, then, maybe, is that we have enough traffic on canary to resemble the QA tests that we were doing in the main stages.
C
Yep, for sure. Okay, so this is the new order. I know that we are used to the old order, so because of that we are going to roll it out very safely, and that's why we have the testing and rollout issue, which is composed of three phases.
The first one is to do an initial test with a very simple package, and when I say simple package I mean the coordinated pipeline is probably only going to have a documentation change, like fixing a typo, or something that doesn't really matter when it is deployed; something where it doesn't matter if it fails, because it is not going to affect traffic or make any requests. This phase is basically just to test that the new order is actually working and that the new jobs are actually working.
It's just some sort of safety check. Then the second phase is to update the auto-deploy processes to use this new order, and the third one is basically to roll it out: we merge all the changes that are in a merge request into master, and that means we are going to use this new order from then on. When it comes to dates, we want to start with phase one today, possibly at around 21:00 UTC, so I need, of course, permission from the release manager.
Awesome, so that's all I got.
A
Awesome, Kristoff. And you have item D as well.
C
All right, I also have the next point. So, we are getting closer to 15.0, and for those that don't know, the major releases are the only kind of releases in which we allow breaking changes.
A
I suspect not. So, I mean, Product usually leads on gathering these together, so I can go and see if they've got that issue up first. It would be best if Product gave the initial guidelines in terms of how they want to handle it. I remember last year there was quite a lot of discussion around how they wanted to handle feature flag enablement through that month.
A
But
then
yeah
we
should
before
too
long
get
our
issue
prepared
as
well.
C
Yeah, I think it would be wise just to remind people that they should assume that every merge request merged is going to be deployed in the next 10 hours, and that they need to over-communicate, communicate with Support and whatever.
A
Yes. Does somebody want to...

Great, thanks for bringing that up, Myra. Henry, do you want to verbalize...?
B
I just wanted to quickly add a point here, because we know already that a prefix configuration will be deprecated with 15.0, and we need to do a config update in g-stage and production for that. We have an issue for that already, but in the same way we should maybe check if there's something else which is getting deprecated, and whether we need to fix something.
I think in our template for the monthly release there's a point which says to check for deprecations, and there's a link to the deprecation list in the GitLab project, where you really have a list of what is getting deprecated then. And the instruction there, I think, is to do something like grepping through Chef to see if you find something which has the name of a mentioned deprecated configuration. There's no easy recipe; you need to search a little bit to see if you find something in our configuration which might be getting deprecated.
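A minimal sketch of that grep-through-Chef step; the file pattern and the deprecated key are made-up examples, and the real names would come from the deprecation list:

```python
from pathlib import Path

# Hypothetical example entry; real names come from the deprecation list.
DEPRECATED_KEYS = ["example_prefix_setting"]

def find_deprecated_usages(repo_root: str) -> list[tuple[Path, str]]:
    """Scan a config repo for occurrences of deprecated setting names."""
    hits: list[tuple[Path, str]] = []
    for path in Path(repo_root).rglob("*.rb"):  # Chef cookbooks are Ruby files
        text = path.read_text(errors="ignore")
        for key in DEPRECATED_KEYS:
            if key in text:
                hits.append((path, key))
    return hits
```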
A
Awesome. Okay, so we are running quite close on time. So: there were a lot of deployment blockers last week.
E
It's still looking okay from my vantage point, and Myra has been super helpful during my time zone, being able to hit an extra deploy for the day, since I'm usually ending my day a little bit earlier than Myra does. So that's been super helpful as well. Awesome.
A
Are there any actions or changes that you'd like to propose we prioritize to make release management easier?

Agreed; we had a lot of blockers last week, so great work keeping MTTP down where it is. Awesome. So let me stop the recording.