From YouTube: 2021-09-06 Delivery team weekly EMEA/AMER
B
So I shall just wait a few seconds. Ruben is around; not sure about Robert. I would expect not, but we shall see. Scarbeck is not, as well.
B
So, let's begin. So, Henry — yeah, you can. This is a release manager role, so you're unfortunate that Robert's not here again, but would you — thank you for putting in the diagram with Jonah — just give us kind of like the highlights of MTTP this week?
C
Yeah, so the highlight of MTTP is that it's slowly going down all the time. So it's nothing special to pick out here, but —
C
Okay — I don't know what's up with my internet connection here. So it looks like, since we have APAC support with green, it's looking a little bit better with how many deploys we can get through each day, and we didn't have really long and big blockers — like something like a production change lock or something like that — preventing a lot of deploys per day.
C
I think in general we are doing pretty well. I think that's the trend: the more deploys we do per day, the better, and it doesn't really matter if we block some deployment forever, as long as we maybe get it unblocked before the next one is getting blocked, because on average we then still do a lot of deploys and get a good average on timing. So I think we should keep it like that and maybe try to deploy as often as we can.
B
Yeah, for sure — great to see it looking so good, and the number of deployments is super high, so that's great news. I guess I have to think about that, but that's kind of interesting, that the number of deployments is so high. Is that maybe — maybe that's Graham, right — so we're being a load more efficient on deployment, but perhaps they're taking a little bit longer? Like, I'm surprised MTTP perhaps isn't a touch lower, given how many more deployments we have, but maybe.
C
That's — I'm not sure about the numbers from previous months, but I think we often just skip the deployment if we felt like, okay, we spent some time fixing some issues and the next one is nearly here now, so let's skip this one and just take the next one.
A
I don't know if it's related, but I was looking at the number of packages that we tagged over the last week, and there is an increase there too, because usually we do five packages over the course of 24 hours, and we had a spike up to seven packages at the beginning of the week, and then we were trending around six packages a day. So we're kind of over-tagging every day compared to the regular auto-deploy schedules, so either we're picking stuff —
C
Two
or
three
cases
where
we
picked
something
into
auto,
deploy
because
of
an
urgent
fix,
and
then
we
tech
because
of
a
lot
of
patch
releases
and
stuff,
but
that
shouldn't
count
into
this
right
for
auto
deploys.
So.
B
Nice, great. Well, it's great to see MTTP looking so low — nice work. Is there anything that you can think of that would — like, would be something we should prioritize doing to improve MTTP or make release management easier?
C
Yeah, I think one tendency that I see is that jobs tend to fail, right — not picking up something, a job getting stuck in a network error when trying to pull the repository, or something like that. So that sometimes happens, and I have a feeling — I don't have numbers, but I have the feeling that this is occurring more often now. Maybe we also just have more deployments, more jobs running through, and this is annoying, because you can't really rely on our pipelines to be reliable.
C
So often you need to — you know, you wait for a deployment to finish, and then it takes time, it doesn't finish, and you look into the pipeline and see: okay, a job got stuck and you need to trigger it again, and you didn't see an error for that one, for instance. It's not very often, but I saw this a few times, and I think this is the general thing with gitlab.com and how reliable it is, right? I think our customers will see the same thing.
C
Not yet — like, if a job got stuck and I see I just need to cancel and restart it, then I normally don't open an issue for that, but I should maybe open one up, because I saw this more often recently. Would —
B
— be worth doing, because then I think we can work out — like, you know, even if it's like a specific error, and I think I've seen this a few times — we can then start to work out... because I don't think it's necessarily always things we can just fix within delivery.
B
But I know that we have seen cases recently of, like, jobs just never really getting started, and then you come back like an hour later and there's just no log output; you have to cancel it and restart it. So there are a few of those flaky ones, but yeah — they don't necessarily have to be tagged as incidents, but it'd be worth just getting an issue open for the failure, and then we can work out how to fix them, basically.
B
Great. So, announcements: I've just — I've moved the read-only — you probably all know, but the 360 feedback deadline has moved through to the 10th of September. So it's Friday.
B
What I need to find out — I will be able to find out tomorrow, when people are back from the U.S. holiday — is how that's going to impact the rest of the timeline. So the original timeline had kind of this week as a review week and then things being shared next week; I'm not sure if it will mean everything shifts out a week. So I'll confirm and let you know on that one, but hopefully — hopefully it doesn't push it out too far.
D
So, Myra — yep, thanks. There is construction next door, so there might be weird noises. I was reading the engineering week-in-review this morning and I noticed that there is a proposal for a feature — so, a feature PCL — which basically means that if a team causes an incident, whether it's an S1 or an S2, the next five days or so are going to be dedicated to developing tools or something that could have prevented this incident, and these tools or issues are going to be owned by — by the team.
B
And I would hope that we'll be able to get reliability involved in this. Like, you know, I would — I would think that at least some of the review would be with someone involved in the incident.
B
You
know
here's
our
thing,
so
it
hopefully
won't
be
only
only
on
us,
but
some
of
it
certainly
will.
A
I — I didn't have a chance to read through it, but it sounds like basically the team that is attributed the outage is prevented from merging stuff — new features — but because it's not — it's not a production change lock, right, so they can't — they are not allowed to merge new stuff in, and so they have to work on reliability and things like that. So it's kind of error budgets in a monolithic application, kind of.
B
I
think
that
makes
sense
like
there's
lots
of
development
going
on
with
error
budgets
at
the
moment,
so
it
you
know
if,
if
it's
in
that
sort
of
sense,
I
think
that
makes
sense,
and
I
guess
that
this
might
also
be
sort
of
related
to
the
fact
that,
over
the
last
few
months,
when
we've
had
pcls
they've
very
disproportionately
affected
infrastructure,
whereas
actually
development
really
don't
have
a
a
huge
part
in
a
p
in
a
regular
pcr,
they
could
kind
of
just
continue
with
their
lives.
B
They
just
we
don't
deploy
any
changes.
So
I
think
this
is
also
part
of
trying
to
bring
the
teams
closer
to
actual
pcl's
as
well.
B
But
thanks
for
sharing
my
I
haven't
had
a
chance
to
read
that
but
yeah
you're
right
that
we
should.
We
should
review
that
and
but
also
consider,
if
there's
anything,
we
want
to
add
in
like
as
an
edition
from
delivery,
because
you
know
the
whole
once
again
like
the
the
really
big
thing
that
delays
us
on
mttp
is
incidents.
So
actually,
if
there's
anything
additional
that
we
can
add
in
that
would
make
things
safer.
B
Cool. So, I was curious about — so we are now one month into this quarter.
B
Is
there
anything
that
anyone
feels
like
we
need
to
adjust
on,
like
you
know,
process
vote
goals
like
things
like
that,
like
there's
things,
it
feels
like
there's
lots
going
on.
Yes,
that's
kind
of
normal
at
get
lab,
but
just
you
know,
given
we
are
one
month
in
yeah,
unbelievable
right
without
just
over
one
month,
so
like
6th
of
november,
we'll
be
in
q4.
B
So
is
there
anything
that,
like
I,
just
bring
this
on
you
I'll
I'll,
open
a
proper
issue,
so
everyone
can
contribute
on
this
one,
but
just
kind
of
off
top
of
your
head.
Like
other
things,
we
should
be
adjusting
on.
B
One
thing
I
was
kind
of
wondering
if
whether
we
need
to
think
about
at
all
is
how
so
how
much
is
new
staging
impacting
on
our
okr
work.
So,
as
we
went
into
this
quarter,
we
set
up
an
okr
and
it's
all
around
rollbacks,
putting
in
place
robot
practice,
building
out
more
of
the
visibility
you've
started
on
seo,
but
also
doing
more
stuff
around
the
cape's
workloads
and
the
deployment
stuff.
B
But
you
know
as
a
team,
and
I
think
what
we've
seen
coming
into
this
quarter
is,
although
we're
not
doing
a
lot
of
the
hands-on
staging
canary
work,
it
is
impacting
us
it
does
take
up
space.
It's
also
sort
of
shifted
the
priorities
for
the
work
you're
doing
mira
around
the
single
pipeline,
because
we
we
will
depend
on
some
of
the,
whereas
for
rollbacks
we
perhaps
wouldn't
have
had
to
have
all
these
single
pipeline
pieces
in
place,
they'd
be
more
of
a
nice
to
have
for
this
new
staging.
We
probably
do
so.
A
So, I was — at the beginning of last week, I think, I don't remember — I was reviewing some of those changes for the new staging.
A
The thing that scares me is that, basically, it's happening outside of the team, so there may be things that get overlooked, or things like that. So I don't know — I mean, I'm not really, say, happy about how we are doing this, because basically it's something that is pushed on us and I'm not really sure about our involvement in this, let's say.
C
I can support this statement, because I was affected by this a lot of times last week during release management. Because when they worked on this — like, mostly Jarvan and Pierre — they of course developed some things, and I think they often worked during APAC times, like Pierre, for instance. So changes came in which I didn't know about, and then I saw pipelines failing for deployments, right, because we had two deployment nodes, for instance, which made problems.
C
They did often have these changes out for review by the team, but I didn't have the time to really look through all of them. So, for instance, we got the MR to include the staging canary into auto-deployments, and it was merged in, but I didn't have time to review it, and so I didn't really know that this was happening, and then this morning I saw: okay.
C
We
have
staging
generic
canary
stage
now,
which
is
an
auto
deploy
which
worked
but
surprised
me
a
little
bit
and
also
caused
some
trouble,
because
I
did
some
conflict
changes
and
forgot
that
I
need
to
put
this
into
canary
now
too,
because
it's
also
running
there
so
yeah.
I
think
this.
This
knowledge
of
what
change
is
coming
in
now
and
being
aware
of
that
could
have
been
improved.
Yeah.
A
This also reminds me that I was commenting on that merge request and, yeah, I didn't notice that it got merged. So I was looking, actually — I was looking now, while you were speaking — and yes, we are deploying to the new environment, and yeah, I need to figure out what comment I left and if it was important, because I don't remember —
C
— getting any answer. Yeah, actually it's working, and they did a good job reacting to problems. So Pierre was staying up very late this time — like up to midnight, I think, for him — to fix things and look into things, so they really did a good job, but there's still a lot of friction coming from this change, which is kind of expected.
B
Yes, so we're going to need to work out, I think, how we — like, if we have additional things. I haven't had time to think about whether it's better or not, but it wasn't like — I was expecting staging canary to be part of the staging deployment pipeline, because one thing about having it as a separate pipeline is that we can be deploying to staging canary and staging at the same time. But actually I need to now dig into what zephs are going to do with the test, to work out —
B
If
that's
going
to
be
an
issue
or
not,
it
isn't
for
production.
So
that
could
be
fine,
but
staging
is
a
little
bit
different,
so
I
was
expecting
those
to
be
linked
together,
so
it
could
be
fine,
but
yeah.
We
need
to
think
through
about
whether
it's
kind
of
additional
tooling,
so
just
from
a
kind
of
initial
gut
feeling
for
you
all
like
do
it
for
staging
canary.
Are
we
going
to
need
to
go
back
now
and
spend
some
time?
B
Did you add that back into the MR templates, Henry? Because, like, trying to have things that you don't have to keep in your mind all the time would be helpful — like we have in the K8s workloads kind of contribution docs. But, you know, for those sorts of changes, if there's somewhere we can write it, that would help just prompt people. Then, yeah.
A
Are
we
also
now
linking
the
gstg
canary
metrics
or
not?
Are
they
collected
by
because
there's
our
these
metrics
are
collected?
So
are
we
collecting
those
metrics?
Also,
no,
then
just
collect.
We
collect
the
metrics,
but
they
are
recording
rules.
So
are
we
recording
this
information
also
for
the
new
environment
or
not?
I
didn't
check
yet.
So
it's
like
all
good
questions.
B
Okay, yeah — if you've got ideas for things like that, then feel free to — let's open an issue and we can get those checks. So Pierre, Craig and Cindy are all working on this project, so, you know, like — they all know what they're doing, but we probably just need to give them some guidance on, like, the things we'd like them to check, or give us feedback on.
B
Cool, okay. Well, let's see how it goes, like, for this one. Let's say we're in the kind of painful early stages of solving the staging problem — there are still quite a lot of staging problems that we get.
B
The
benefit
of
the
main
one
is
that
we
use
staging
for
our
kind
of
deployment,
testing
and
rollback
testing
quality
user
staging
for
like
qa
tests,
but
we
also
have
all
the
developers
who
use
staging
for
actual,
like
testing
feature
flags
and
going
through
testing
of
things,
and
we
kind
of
disregard
all
of
that
stuff
at
the
moment,
and
we
just
roll
back
at
will-
and
you
know
we
do
what
we
need
to
do,
deploy
at
will,
but
hopefully
kind
of
longer
term.
B
Once
we've
solved
this
pain
of
like
doing
all
the
things
on,
one
staging
we'll
be
able
to
have
like
staging
on
demand
and
people
could
actually
have
the
environment
they
need
to
test
stuff,
including
including
us
and
our
conflict
changes.
So
hopefully,
things
will
get
better,
but
we're
in
the
kind
of
messy
first
stage.
Unfortunately,.
D
Just one big question: since Robert — I'm not sure if he's going to show up today — or Skarbeck, I know he's out, should I be covering release managers for AMER? That's —
B
But yes, if you wouldn't mind. Like, it doesn't — like, it will be quiet, because I think you'll be pretty much the only person online. So, you know — yeah, I said there's a lot; I had to go back and apologize when everything broke, like literally everything broke.
B
But, like, you know, if we could just get, like, one hour and then Graham can do another, then that — it doesn't have to be all the — all the deploys, like our regular cycle, if you have other things planned. But yeah — thanks, Myra. Good, good — very good, important point.