From YouTube: 2023-09-04 Delivery team weekly EMEA/AMER
B
So it's gonna be a quiet one today, because it is a US and Canadian holiday, so Myra is going to have a pretty quiet day. It might be all of us; I'm not sure if Ruben's gonna be around today. But anyway, as I say: Myra, you're gonna have a super quiet day today, because it's a US and Canadian holiday, so a very small group around for your day, but welcome everyone. This is the 4th of September and the start of the weekly EMEA/Americas. We have a few people out on holiday, and McKelly is off sick at the moment, so hopefully he will be back tomorrow.
B
So, do we have any suggestions for new things that should be added into the release docs changelog?
B
If not, I actually have a question you may be best placed to answer, about our current chart bump process. Am I right in thinking we have a manual process? Wait, sorry: the MR gets automatically created and we manually merge it every Monday to apply the latest chart bumps, but is it possible for that process to run more often?
E
That's true: we can run the pipelines during the week, that's okay, but usually it runs over the weekend and then Graham usually takes care of this. If he's away, Skarbek or someone else takes care of it, so yeah. This is the process right now, it's manual. We prefer it to be like this, to still be manual, so we can basically assess what's actually going on, because we review what happens in the charts; sometimes there's something we don't want, like...
B
Oh, I see, so we're basically checking that the new changes coming in look sensible, but also that we haven't lost any of our delivery-specific stuff. I get it, okay, cool. Okay, and would it be, I can ping you on this issue, but would it be reasonable for us to say to another team that they can do all the stuff and just ping us? Like, do they need to ping new MRs to Delivery specifically, or is it Infra specifically, or can another team basically trigger this pipeline?
E
Currently I think it's timed to be every weekend, but I think other teams could also trigger it, that's okay. The thing is, if you are talking about responsibility, we can ultimately give this to someone else, but right now it's just on us.
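For reference, triggering that pipeline on demand is a single API call. A minimal sketch, assuming a placeholder project ID, ref and trigger token; the actual project and branch that own the chart-bump pipeline would need to be confirmed:

```python
import os
import requests

# Minimal sketch: trigger the chart-bump pipeline on demand via the GitLab
# pipeline-trigger API. Project ID, ref and token are placeholders, not the
# real Delivery configuration.
GITLAB_URL = "https://gitlab.com"
PROJECT_ID = "12345"  # hypothetical project that owns the bump pipeline
TRIGGER_TOKEN = os.environ["CHART_BUMP_TRIGGER_TOKEN"]  # created under Settings > CI/CD

resp = requests.post(
    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/trigger/pipeline",
    data={"token": TRIGGER_TOKEN, "ref": "master"},
    timeout=30,
)
resp.raise_for_status()
print("Triggered pipeline:", resp.json()["web_url"])
```

Another team could run something like this from a scheduled job or a chatops command, which is roughly what "trigger it during the week" would amount to.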
B
I, I don't... So that would be a good one for me to get a proper answer from you on, like how they could actually trigger this more frequently. Sure. Oh, and then I guess a follow-up question on that one: not everybody is in the charts rollout process, right? So what do we need to do so that we can have, like, Dart, Vladimir, Jenny, have everybody inside Delivery? How do we onboard people into this charts rollout process?
E
It's easy, I think; we just assign the MRs to them, because currently only Graeme and Skarbek get these chart bump MRs from us. So we can assign them to these MRs as well, and they just need to go through reviewing it. And yeah, for the whole thing, maybe you can do a recording or some sort of session that actually shows what's happening there.
B
Oh, that would be super helpful, yeah. I'd like to see what needs to be checked and what you want to watch for, yeah. That would be really helpful, cool. Okay. In terms of the MRs being assigned, is that manual? Is that something that Graham manually does?
E
No, I... I don't actually remember. I think there's a bot that creates...
B
Cool, let me spin up an issue then and see if we can get that together, so we can get the bot, or whoever it is, to get the MRs assigned to every SRE, and then plan kind of how we onboard everybody onto this so that we can share the workload. Because if we are saying to other teams that they can trigger this more frequently, we may find we get like one a day or something along those lines. Sure, awesome, thanks for that.
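As a rough illustration of what that assignment step could look like once the bot creates the MRs, here is a minimal sketch using the GitLab REST API; the project ID, the "chart-bump" label and the list of SRE user IDs are assumptions, not the real Delivery setup:

```python
import itertools
import os
import requests

# Minimal sketch: spread open chart-bump MRs across the whole SRE group so no
# single person gets every bump MR. All identifiers below are placeholders.
GITLAB_URL = "https://gitlab.com"
PROJECT_ID = "12345"
TOKEN = os.environ["GITLAB_API_TOKEN"]
SRE_USER_IDS = [111, 222, 333, 444]  # everyone who should share the review load

headers = {"PRIVATE-TOKEN": TOKEN}
mrs = requests.get(
    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/merge_requests",
    params={"state": "opened", "labels": "chart-bump", "wip": "no"},
    headers=headers,
    timeout=30,
).json()

# Round-robin assignment over the open MRs.
for mr, assignee in zip(mrs, itertools.cycle(SRE_USER_IDS)):
    requests.put(
        f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/merge_requests/{mr['iid']}",
        json={"assignee_ids": [assignee]},
        headers=headers,
        timeout=30,
    ).raise_for_status()
    print(f"Assigned !{mr['iid']} to user {assignee}")
```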
A
Do the charts change so often that you would need to run multiple deployments of those every day? Because it will basically be a new deployment, so it takes the full six hours it takes to do. I mean, I think it's even more, because we do pre before, right? So we do pre and then, after evaluating on pre, we do the full rollout on all the environments.
B
Yeah, probably not, I think. I mean, even daily may be too much; it might be a couple of times a week, maybe. I think it's all quite manual even to get to the MR stage, so I'd be quite surprised if we get to daily.
A
No, because my underlying question is more about: I do understand that if a developer is adding a new configuration in one of the services, they need the new version of the charts to test it, to run it. That makes perfect sense, right, so I would expect having, call it a runbook or some process, where they can...
A
I don't think we want to take the risk of bundling it with a regular auto-deploy package, which means that, I mean, it's like with Omnibus, right: you get the risk of Omnibus, you get the risk of CNG, the risk of the Helm charts and the risk of the code changes all together. And today the Helm chart is the only thing that we don't take that risk for, because it was less stable when we started doing all of this, it was very new, and things like that. So yeah, I don't know... I do understand.
A
Why... I don't know if it makes sense to do it much more often than what we're doing, unless there's a specific need.
B
Yeah, I think it's a really good point. I think at the moment it's a bit of an unknown. It would definitely be interesting to see this go a bit more frequently and to see, like, do we have enough changes coming in and what sort of problems are we seeing. Because, I mean, if we could get this into auto-deploy, it would be lovely to get rid of quite a manual process, right, but...
A
Yeah, and also the way charts are released was so convoluted that we had to design a completely different way of running that build to have an auto-deploy equivalent, and then we never used it, basically, because it was just creating outages over outages.
C
I have a very naive question, but do we make changes to the chart that often, like weekly? Is there any reason we make some changes there every week?
A
You know, well, I think that one of the problems there is that we are using a very thin section of the charts.
A
Charts are designed to give you a full installation, which is not what we are doing. We are injecting a lot of configuration and things like that to disable almost everything and just say: I want you to deploy Sidekiq only in this cluster, assuming that there are three other Sidekiq clusters somewhere else, which is not what charts is designed for as a basic use case, right.
A
It's moving the configuration along, those things, and making sure that everything is configured in the same way, but it's not a real validation that our charts are still able to give you a deployable GitLab instance. And so we do update to take new changes, upgrades and new configuration options and things like that, and as well we run the release process, so that when we release a new version of GitLab there is a chart version that matches it and, I think, ships with the default GitLab versions in it.
A
I think there have always been changes, so no two different versions of the charts had no changes other than bumping the GitLab version itself. But I think this is one of the things that happens during the release: we say, we released GitLab version this, it's time to release chart version this, and there is a CI job that bakes these things together and makes a new version of the chart as well.
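To make the "thin slice" point above concrete, here is a minimal sketch of the kind of values override being described, written as a Python dict and dumped to YAML for a helm upgrade. The key names loosely follow the public gitlab/gitlab chart layout but are purely illustrative, not the actual .com configuration:

```python
import yaml  # pip install pyyaml

# Illustrative only: the kind of override that turns the full GitLab chart
# into "just Sidekiq in this cluster". Key names are based on the public
# gitlab/gitlab chart conventions, not the real production values.
values = {
    "gitlab": {
        "webservice": {"enabled": False},
        "sidekiq": {"enabled": True},   # the only workload wanted in this cluster
        "gitaly": {"enabled": False},
        "migrations": {"enabled": False},
    },
    # Use externally managed services instead of the bundled ones.
    "postgresql": {"install": False},
    "redis": {"install": False},
    "nginx-ingress": {"enabled": False},
    "certmanager": {"install": False},
}

with open("sidekiq-only.values.yaml", "w") as fh:
    yaml.safe_dump(values, fh, sort_keys=False)

# Then, roughly: helm upgrade --install gitlab gitlab/gitlab -f sidekiq-only.values.yaml
print(open("sidekiq-only.values.yaml").read())
```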
B
I just want to make a quick nudge reminder for you, Myra, as DRI of the epic to adjust for the monthly release date change: as we're getting quite close to the end of that epic, start having a think about what sort of demo you want to either do for this group, or do in a demo session and bring to this group, so that we can share some knowledge on what's coming up for those things.
B
Cool, thanks very much, great. If there's nothing else, then release managers, over to you.
D
So last week, I wouldn't say we had a very bad week, but I think the week wasn't very smooth, because we had a lot of deployment blockers, we had a lot of incidents, and a couple of times we had to pick fixes, like super hot fixes, into auto-deploy. So this one and this one; I don't remember which one exactly it was, but definitely the one that we had...
D
Things moved very fast, yeah, and now it's getting back to normal. We are kind of... The first day today was also not very smooth, I would say, because we had smoke tests failing a lot, and basically the fix for the failed smoke test was merged to master today, and we had to skip multiple versions to actually promote this.
D
...this smoke test fix to production. Then, what else? Yeah, as the result of this, a little bit lower cadence of deployments in the deployment frequency compared to the last couple of weeks. But if you compare it to, you know, to June, it's better than it was a few months ago; if you compare it to August, it's a little bit lower, but not critically, I would say. So, lead time to change.
D
So last week we had nine deployment blockers, which is more or less... yeah, it's not that bad actually, if you compare it to all the other weeks, but definitely not extremely smooth. And the total amount of hours we lost was about 32 hours, which is more than last week.
D
Some of the blockers were due to failing tests, some of them were due to incidents, and I don't recall any serious production incident last week, but there are definitely things that were either blocking the deployment or kind of postponing the deployment. So, for example, if we have something wrong with PDM: theoretically it's not blocking the deployment, but it's kind of better to wait and make it work. Yeah, that's it.
D
Today we had a problem with the ops cluster, so basically runners.
D
Runners were not ready, so we didn't have any active runner in the queue, and that's due to the maintenance work the Reliability team did on those workers: they just disabled the guardian runner coordinator, and that was fixed very quickly.
A
Let me move over my points here, so let me see. Okay. So, first of all, what are we looking at? This graph is showing the number of days or hours that a change which got merged into master took to get deployed to production, okay, so the lower the better, cool. So before we get into what we are seeing here: what we would like to see, what is normal, is kind of this behavior here, right. So you want to see this hook.
A
I mean, you don't even want to see the hook. You just want to see something that is higher on Monday, because you had the weekend, and then the day after... So it's one data point every 24 hours, okay, and the day after it must be below the one-day threshold, which means that you were able to process every single merge request in the past 24 hours.
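A rough sketch of how such a daily data point could be computed from merge and deploy timestamps; the input data is made up and the choice of the daily maximum as the aggregate is an assumption, since the real dashboard query may aggregate differently:

```python
import pandas as pd

# Hypothetical input: one row per change, with when it merged to master and
# when it reached production. Treat this purely as an illustration of the
# "one point per 24 hours, lower is better" idea described above.
changes = pd.DataFrame(
    {
        "merged_at": pd.to_datetime(
            ["2023-07-07 18:00", "2023-07-08 09:00", "2023-07-10 08:00"]
        ),
        "deployed_at": pd.to_datetime(
            ["2023-07-10 14:00", "2023-07-10 14:00", "2023-07-10 20:00"]
        ),
    }
)

changes["lead_time_hours"] = (
    changes["deployed_at"] - changes["merged_at"]
).dt.total_seconds() / 3600

# One data point per calendar day: the worst (max) lead time of everything
# deployed that day. Staying under 24h means the previous day's MRs all made it.
daily = changes.groupby(changes["deployed_at"].dt.date)["lead_time_hours"].max()
print(daily)
```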
A
Okay, so it's normal on Monday to have spikes, because you have potentially Friday evening and night, depending on your time zone, then you have Saturday, you have Sunday, and you have up until your ability to deploy on Monday morning. So it's a lot of days, a lot of hours; that's perfectly fine. But then let's look at this as an example: oh come on, yeah, four days, okay, it's a long one. Maybe we had a Family and Friends day or something like this, but then the day after it goes down to 16 hours.
A
So that's good, right. It means that, I don't know what happened, but July 11th... we in theory had a good number of deployments, or maybe just at the end of the day we were able to process everything, and so this set us up for success for the day after: say 16 hours, 11 hours, and then this goes to 1.1 days. Yeah, still good; maybe we missed some of the late evening packages, or the early morning packages the day after, and this gave us a bit more than a 24-hour gap.
A
So that's what we would like to see. Better if this stays below as well, but it's okay. Now, if you look at this: we had Family and Friends day on the 25th, if I'm not mistaken, right. So basically this means you go here, 26, 27, and this is the first day you came back, so it gives you almost four days, which... yeah, it really depends on when the last deployment was on the 24th, but it makes sense.
A
What is not great is this here, and this is probably something that, as release managers, we should try to think about. When we see something like this, where the day after we had 2.7 days, it means that something happened here, right. Either, I don't know, just to give an example, we deployed very early in the morning, I don't know, right. This is interesting: why is there something here that is taking more than one day? And so maybe here is where we want to dig in and try to remember what happened.
D
And I think that in this particular case we had the problems with PDM, if I recall correctly, and we had to wait for the MR to be picked into auto-deploy, if I'm not mistaken.
A
It's possible, I don't know. I was trying to see if we had some clue here in terms of the number of deployments between the 28th and the 29th, but, I mean, five is a decent number of deployments. So I don't know; I think this is more something for us to try to understand. I've never done it myself either; I usually look at this week by week. So maybe it's interesting to look at it as a release manager starting their day, because in theory we should already have the number for the day before, right.
A
So maybe thinking about yesterday is easier than thinking about one week ago, and so maybe try to take a look at those numbers in the morning and ask: does this number make sense to me compared to what happened yesterday? Just to try and get a bit more into the habit of understanding those numbers.
B
Yeah, I think that's a super suggestion. One additional thing: I think what we see on lead time for changes is that there are sort of two sides to our metrics as well. The stuff we control, so do we have our branch schedule in a good place and, you know, are we doing everything within our control for deployments; and then we have the things outside, so like the incidents or the unexpected stuff, and I think they...
B
This is just my impression, so it's sort of a finger in the air, but my impression is that when we see those sorts of clusters, there are perhaps two things. So if there was an incident, we will see one significant event, and it'd be good for us to make sure we understand what that significant event was.
B
But, for example, like last week, if we look at the last three days, we can see things trending down, so possibly we had a slightly healthier end to our week. Was it the stuff within our control at the beginning of the week, or was it all incidents and external?
B
So I think those are the sort of two questions which are worth asking, because all of these metrics we gather come with a "what can we do to improve?". So do we need to be going to Quality, for example, which is on my to-do list as to...
F
Yeah, and I liked Reuben's demo of the view without the weekends and Family and Friends days taken into account, because that was just a different view to help you think about, well, you know, what caused that problem, or whether anything irregular has happened, or does it look generally on track if you take out some of that non-working-day stuff, as a view to use in combination with the other one.
D
Well, I agree that we have a lot of different priorities now, but I think I already mentioned that we kind of dropped this observability work in the middle, and I totally agree with Femi that we need to understand these things better in order to make improvements. But in order to understand things better, we need to have better observability, and we kind of...
D
We invested a lot of time and effort in the first quarter and then we kind of switched the priorities, and I'm really thinking that at some point we need to come back and finish this.
B
I can find it, and yeah, it would be good first just to chat, because we do kind of have a bit of a backlog of epics that we try and pick up. But yeah, I totally agree; I think these are good things to raise as we discuss our KRs and things like that, that, you know, we should look at completing things.
C
May I ask one question? So yeah, it's really interesting to hear what Alessio said. From that, I understand that we expect two things, right: the first one is that there's only a peak after, of course, a weekend or holiday and so on, right, and secondly, that peak should only last for one day, right, because we should merge everything and then after that everything becomes normal for the whole week, right? I mean, I see that this is the graph from GitLab; I don't know anything about that.
C
So I don't know what we can do about that, but can we do some aggregation on Grafana or something, to have a better view? For example, a moving average for the last week, or the lowest value for the last two days, things like that, right, so we can easily see when something wrong happened or something we need to fix. Right now it's harder to see, because it just goes up and down; everything seems to be...
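As a sketch of the kind of aggregation being described, assuming the daily lead-time series could be exported from the dashboard; the numbers below are made up, and in practice this smoothing would live in Grafana or the underlying query rather than a script:

```python
import pandas as pd

# Made-up daily lead-time series (hours); in reality this would come from the
# dashboard's datasource, and the smoothing would be a panel transformation.
lead_time = pd.Series(
    [70, 16, 11, 26, 30, 65, 14, 20, 90, 18, 12, 15],
    index=pd.date_range("2023-07-10", periods=12, freq="D"),
    name="lead_time_hours",
)

smoothed = pd.DataFrame(
    {
        "raw": lead_time,
        "moving_avg_7d": lead_time.rolling("7D").mean(),  # weekly moving average
        "min_last_2d": lead_time.rolling("2D").min(),      # lowest value over 2 days
    }
)

# Anywhere even the smoothed views stay above ~24h is worth digging into.
print(smoothed.round(1))
```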
D
So that's exactly what happened with the last quarter, where we worked on, you know, traffic shifting and a service mesh and all these things, and then suddenly we pivoted to a completely different story as well.
B
And epics, because I think this stuff is all valuable. Is it more valuable than changing a release date ready for the 16.6 milestone? It probably isn't, and I think we're dedicated to that as well, right. So I think this stuff is valuable, but let's not... I think, Vladimir, to your kind of point, I think this is where those OKR discussions are so valuable, right, because that's the point where we need to all agree that this is the right direction. But I think let's be careful.
A
Yeah, I was going to say that this is what we started with when we were talking about the deployment SLO: we had two options. One was starting with MR-to-production; the other one was sticking with the end-to-end from package creation to package deployment. We chose the second one, not the wisest choice in the long term, because we realized that those numbers are so huge... it takes a lot of time, right, because it takes every environment into account.
A
So it's really hard to figure out where the problems are, because it's a big thing. But yeah, just to give you some context: we thought about using an SLO for this, but then we never completed that work on a number, let's say only lead time for changes or things like that.
B
Cool, okay. So, Reuben, can I leave you with the action to see if we have an epic, and whether it has the issues for what we would want to do if we had spent another quarter on the metrics? That's the work; you don't have to create anything new, just see if we have stuff already in existence.