From YouTube: 2021-09-20 Delivery team weekly EMEA/AMER
B
How about you put up the diagram?
B
I can't, I can't talk to it. I mean, it's the same trend that we saw over the last weeks.
B
MTTP is going lower and lower, and the main reason for that, I think, is that we are really good at deploying five times to production each day; we really never miss the five deployments per day. This is just very consistent, gives us a lot of deployments, and so MRs don't wait long before landing in production.
B
Even when we had a few blockers, not very long ones, but in most cases with the short deployment blockers, we were able to catch up on the same day anyway, because we're faster at deploying the piled-up deployments, since we only do five per day. We could do even more per day if we wanted to, but I don't see a reason for that yet, because it's nice to have this kind of buffer between deployments to see what's happening and maybe catch up if you missed something.
B
Here is a picture of how many deployments we are doing each day. In August we reached 76, and I'm not sure what the number for September is, but it's looking pretty good, like we are doing more and more deployments each day, each month. Nice.
F
E
Okay, so you mentioned having five deployments each day. I don't have numbers for this, so I would like to understand where this five comes from.
B
From the schedule, right? I mean, I didn't look at metrics, I just used the schedule. We have those five deployments scheduled for each day, and I know from my experience and gut feeling (I have to say, I never used metrics for this) that we nearly never missed doing all of them, at least when I was doing them.
E
Let's take a look together, because it's interesting. So here it is, okay. This one is tagging, always the same metrics that I showed last time. Just let me move you over there, because otherwise I have to look at it from the wrong angle, sorry. Okay, so this is tagging. We're looking at one week, and the sum is over 24 hours, so we're just bucketing by one day; it's kind of a rolling window.
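The bucketing being described here, counting tagging events in one-day buckets, can be sketched roughly as follows. The event timestamps are made up for illustration; the real chart is presumably driven by a metrics backend, not by code like this.

```python
from collections import Counter
from datetime import datetime

def deployments_per_day(timestamps):
    """Bucket ISO-8601 event timestamps into per-day counts."""
    counts = Counter(datetime.fromisoformat(ts).date() for ts in timestamps)
    return dict(sorted(counts.items()))

# Hypothetical tagging events: five scheduled tags on one day, one the next.
events = [
    "2021-09-14T08:00:00", "2021-09-14T11:00:00", "2021-09-14T13:30:00",
    "2021-09-14T16:00:00", "2021-09-14T19:00:00",
    "2021-09-15T09:00:00",
]
print(deployments_per_day(events))
```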
E
So, except for this probably very complex day when we ended up tagging nine packages, we usually stay on five. So this is good: when we say we have five deploys scheduled, these are the five. Here it drops down because of the weekend, so this is all expected. Now I'm looking here at when our deployments started.
E
So it's the first full week of data we have; we don't even know if it's actually working properly, and things like that, so it's interesting to take a look together. If I look here, I see that staging canary, which is the automated first stage, actually gets all the deployments. We have four... we have five, so it matches with this, okay.
E
What is not the same here: this is production, so we have three. Then there was a drop here around September 16, and then it was going...
E
It makes sense, yeah. Obviously, then we went back to three again, and then the weekend started, just ramping everything down. And here we can see what Amy was asking the first time I showed this graph: why the numbers for the environments were not aligned. If we can zoom in... I don't know. No, we have to do it this way, which is in the other direction, actually, yeah. So we see staging canary, then after a while we have just staging.
E
Then here should be production canary, yep, and from here to here we have production, so they are kind of delayed, right? So this is why I was asking about the five deployments a day, because here we counted three, ranging up and down. I mean, I think it's a good number, so I'm not complaining, but I wanted to check whether it was just your impression, or just a way of saying "I promoted everything in my shift."
B
I put that too strongly; I didn't want to say we always deployed five, but in most cases I think we managed to get all scheduled deployments through the pipeline. Of course, sometimes we didn't, because of timing or blockers, but I think in general we reached the five deployments each day, though of course not every single day.
E
Okay, okay, so yeah, thanks for this. I just wanted to double-check what we get from the metrics against what is actually happening. Thank you.
D
E
Do we really care? I mean, I think we should look at the deployment SLO and MTTP, because there's a trade-off with also doing things that are, say, safe for github.com, right? I don't want anyone in the team working late just to fit in the extra deployment, or starting a production deployment when they are absolutely tired and have no energy for following up on the consequences of the deployment. So if with three promotions we get a good MTTP, then I would not consider deploying everything a goal in itself.
A
Yeah, I think that's a good point. We could absolutely push MTTP down; if, you know, we worked twice as long, if we wanted to do that, MTTP could be like half. But it's not a good strategy, right? It's not what we want to do to scale up. So let's use the metrics as our guide and then focus on tooling, rollbacks, and things that actually adjust those numbers.
F
I have a question about the metrics you are showcasing: do we plan to push those into one of the delivery dashboards that we have?
E
Yeah, Reuben is working on that.
A
Awesome, thanks for going through that, Henry, and thanks for the questions, everyone. I am just going to skim through the announcements. The one I want to particularly highlight: please make sure you get your Family and Friends Day, or a substitute day, into Roots. And... oh, announcements are not numbered.
A
That's annoying. On the one I'm highlighting: I will hijack the delivery weekly invite on the 4th and put in a coffee chat, so we can all chat with Ahmad, just because I know that he didn't meet any of you lot in an interview, and obviously you all haven't met him either. It'll just be an informal chat; come along if you want to, it's not mandatory, of course, but I'll send you a separate invite so that we have the shared Zoom there.
A
Cool, so onto the discussion points. On Thursday we changed release managers, so well done, Henry and Robert, on completing your shift. What I wanted to check in on, as we roll out and switch rotas: are you all happy with this rough set of priorities for projects? Is there anything else in particular? I think so, because we've opened up so many new issues.
A
Recently we've got lots of suggestions for improving tooling, so are there any adjustments that people would like to make, to move us away from the rough goal of registry pages, rollbacks, and then recovering the Helm logs, which is what Graham is mostly working on?
F
I'm curious about the nginx ingress that recently popped into conversation.
B
If this is what you are talking about... okay, yeah. So I would try to tackle this one and come back to you with questions, probably.
A
Cool, okay, great. I will put a team update in Slack later on today, but it's roughly going to be around this sort of priority list. So if people have suggestions and things in the next few days, then please just give me a shout. Myra, over to you.
D
Thanks. I'm hoping you can hear me; I still haven't figured out the best combination of my microphone and headphones, so... yay. Okay, so new jobs and new stages have been added to the coordinated pipeline, so if you see some different behavior, please don't be scared; this is intentional.
D
We added two more jobs: one for sending a notification when the deployment starts, and another one when it ends. For testing purposes we are sending the notifications to other channels, and later on, since this is all part of phase 2 of the coordinated single pipeline, I think we are going to continue moving some items over.
B
D
E
There is also another difference, which I opened a follow-up issue for: basically, if something fails and then you retry, you don't get the success notification. It's something to be aware of, because it's the same job. In the old configuration we used to have two separate jobs, and basically you always got only one of each: only the first failure, and then the first success.
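The behavior being described can be modeled with a toy sketch; the names and structure here are illustrative, not the actual pipeline configuration. With one combined notification job, a retry that succeeds after a failure never emits the success notification, whereas the old two-job setup emitted the first failure and the first success.

```python
def notifications(attempts, combined=True):
    """Return which notifications fire for a sequence of job attempt
    outcomes ('fail' or 'ok').

    combined=True models the new single notification job: only the
    outcome of the first attempt is announced, so a retried success
    after a failure is silent.
    combined=False models the old setup with separate failure and
    success jobs: the first failure and the first success each fire.
    """
    if combined:
        return [attempts[0]] if attempts else []
    fired = []
    for outcome in attempts:
        if outcome not in fired:
            fired.append(outcome)
    return fired

print(notifications(["fail", "ok"], combined=True))   # only the failure is announced
print(notifications(["fail", "ok"], combined=False))  # first failure, then first success
```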
E
For success there usually isn't a second one, because you move on, but failures can be more than one, and we didn't get more than one failure notification before, and we don't now. But now we also don't get a success notification after a failure. It's very easy to fix, but yeah, I was delaying that merge request for too long, so I said let's just try this thing out and we can quickly fix it. So I don't know if you already started working on it.
C
E
I think we need something like a re-armable job: say, if something before this, like a dependency, restarts, then re-arm all the pending jobs, like we do for failures. So the tricky part... maybe, okay, I just had a light-bulb moment here: maybe if we fail the job, it will work.
E
A
Awesome, nice work getting that done, Myra; that's a huge milestone. We're gradually getting our single pipeline moving along, so yeah, nice work.
A
Cool, and you have the next one as well, Myra.
D
Yep, thanks. Just to talk a little bit about the outcome of the post-deployment migrations discussion: we concluded that in an ideal world it would be better simply not to have them associated with the deployer and the release process, and we have been thinking about how to achieve that. Removing them from the deployment process would be a very long-term solution, so we have identified three iterations, or phases.
D
The first one would be to simply remove the job from the coordinated pipeline and have a manual job to execute the pending post-deployment migrations. This job would be executed by the release managers, and we should aim to execute it at least once a day.
D
The second iteration would be to classify the post-deployment migrations, so we can execute them based on this classification. Some examples: execute the background migrations perhaps every three days or at the end of the week; execute the schema changes (adding columns, regrouping columns) every day; and perhaps also the data migrations, depending on their duration.
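The classification being proposed could look something like the following sketch. The class names and cadences are taken from the discussion, but the structure, names, and scheduling check are purely illustrative assumptions.

```python
# Hypothetical cadence (in days) per post-deployment migration class,
# following the discussion: schema changes daily, background migrations
# perhaps every three days, data migrations depending on duration.
CADENCE_DAYS = {
    "schema_change": 1,        # adding columns, regrouping columns
    "data_migration": 1,       # or end of week, depending on duration
    "background_migration": 3, # or at the end of the week
}

def due(migration_class, days_since_last_run):
    """Is a migration class due to be executed by the manual job?"""
    return days_since_last_run >= CADENCE_DAYS[migration_class]

print(due("schema_change", 1))         # True
print(due("background_migration", 2))  # False
```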
D
We can execute them every day, at the end of the week, or perhaps in longer batches, like at the end of the release. Then, once we have that information, we could work towards removing them. Well, "remove" is kind of a destructive word; it would be more like replacing them with a feature that is embedded into github.com, and removing them from the coordinated pipeline and the deployer process.
D
These are very high-level steps; I plan to post a comment with them early this week. And that's it. Your cat is distracting me, so... yeah.
E
Sure, yeah. I was adding a link on iteration number one: in order to help a release manager figure out whether there are pending migrations, we are adding a new metric. It's in review, and then basically, when it's in the pipeline, so when we deploy something, there will be a metric exported that tells us the number of pending migrations per environment.
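A minimal sketch of what such a per-environment metric could look like in Prometheus text exposition format. The metric name, environment labels, and counts here are assumptions for illustration, not the names used in the actual merge request.

```python
def render_pending_migrations(pending):
    """Render a per-environment gauge in Prometheus text exposition format."""
    name = "delivery_pending_post_deployment_migrations"  # hypothetical name
    lines = [
        f"# HELP {name} Number of pending post-deployment migrations.",
        f"# TYPE {name} gauge",
    ]
    for env, count in sorted(pending.items()):
        lines.append(f'{name}{{environment="{env}"}} {count}')
    return "\n".join(lines)

# Hypothetical environments and counts.
print(render_pending_migrations({"gstg": 0, "gprd": 3}))
```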
A
Nice, awesome. And yeah, thanks to everyone who's engaged in this, and especially Myra for putting it all together to get us here; lots of comments from across the company, which is super exciting stuff. We talked a little bit last week about the steps to unblock the immediate work on rollbacks, and then there is kind of all the rest to get us into the future, so we'll start bringing those in as well in the next few weeks.