From YouTube: 2021-09-13 Delivery team weekly EMEA/AMER
A
Can you hear me, Robert? Okay, funny thing: usually my mic was connected to my docking station and had some kind of static noise, which gave me the impression that everything was working. Now I've changed it to another hub and there's no static anymore, so I was thinking it was muted. It just works better.
C
All right, I'm just popping it up so you can see it now. Okay, nice, it's looking healthy, which is good. Is there anything, Robert, from your experiences last week? Are there any things that you want us to re-prioritize and bring in this week to help with MTTP?
B
I think we're supposed to do that big post-deploy migration today during my day, so we'll see how that goes. But other than that, I think we're good.
C
Awesome, nice. And to follow on from what Skarbek asked about last week: there's always the question around when we should change our MTTP target and what we want to do faster. For about six months or so we've had a target of 12 hours, saying that once we've got rollbacks we could probably cut MTTP to half of its current time.
C
If
you
all
worked
a
lot
longer
hours
and
promoted
more
builds
there's
only
so
far.
We
can
scale
that
before
we
burn
out
or
like
run
out
of
people
and
think
that
so
focusing
on
rollbacks
gets
us
way
closer
to
auto
deploys
and
that's
going
to
be
the
ultimate
big
big
jump
that
we
get.
So
we
should
keep
pushing
towards
mttp,
but
I
think
before
we
unless
we
go
under
12
hours,
which
we
we
may,
but
I
I
I
wouldn't
probably
expect
us
to,
but
rollbacks
should
get
us
closer
to
that
stuff.
C
Awesome. So there's a bunch of announcements in there. I was going to leave those as read-only unless anybody specifically wants to cover or ask anything about them.
C
Awesome, okay. And if you have questions after, of course, we can follow up on Slack. So, discussion: one of the big parts of our Q3 OKR is going to be around reducing the compliance risk of rollbacks, and a really big part of that is us having a documented, visible practice process. Skarbek's comment summed it up brilliantly, which is:
C
Are
people
fine
with
us
just
picking
some
dates
and
having
those
like?
I
would
think
like
one
a
week
for
staging
and
just
picking
some
dates
that
are
always
on
the
release,
issue,
template
and
then,
if,
for
whatever
reason,
we
can't
do
them
on
that
date
or
his
managers
could
just
move
them.
So
like
say
we
say
we
had
it
set
to
today
and
then
we
found
we
had
this
big
post
deployment.
Migration
thing
coming:
release
managers
could
just
update
the
release
issue
to
say:
actually
you
know
we'll
roll
back
wednesday.
Instead.
C
Yeah
does
that
sound
reasonable
for
everyone
cool?
What
I
was
going
to
suggest
as
well
is
we
have
that
gives
us
four
staging
rollbacks
each
month.
I
would
expect
we'll
easily
get
through
three
and
then
maybe
around
the
monthly
issue,
the
monthly
release.
It
gets
a
little
bit
more
tricky,
but
based
on
those
three,
we
can
roll
them
through
time
zones
right.
So
we
could
have
one
in
apac
and
then
like
the
next
week,
we
could
do
one
in
america's
the
next
week.
C
Awesome, okay, great stuff. I'm going to put a page in the release docs with our rollback practice process, which we can then point compliance to, and I'll update the release issue. I'll put one in for maybe Wednesday morning when I'm around, but Ruben, we could go through that together and you could do your first staging rollback and get hands-on there, tick this week off. Yeah, great stuff, excellent, cool. And then number B.
C
As
I
always
say,
surely
this
should
be
numbers,
not
letters,
but
number
b.
I
have
a
request.
Let's
see
it's
super
exciting,
but
could
you
teach?
Are
you
gonna
ask
a
load
of
questions?
I
was
like
I'll
just
ask
you
here
like
teach
us
to
read
this.
A
So here below are the graph and the query that Amy was asking me about. Here above is just another one, the old one, the first that we implemented, which is just the number of auto-deploy packages that we are tagging. Why am I looking at both of them together? Because they correlate. So I want to show you why certain things are happening and how to read them. Let's start with this one, which is easier.
A
This
one
is
just
okay
from
ql
here,
so
basically,
starting
from
here
inside
I
say
here:
okay,
so
basically
we
are
taking
the
tagging
total
packages
that
we
are
tagging
over
on
a
bucket
of
one
day.
So
what
happens
in
within
one
day
we
are
getting
the
increase,
so
just
getting
the
number.
How
many
happened-
and
this
sum
the
external
sum
is
just-
is
from
ql
thing,
because
if
we
restart
something,
then
we
get
multiple
metrics
this
one
it
just
takes.
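The query he's describing might look something like this sketch; the metric name delivery_packages_tagged_total is an assumption for illustration, not necessarily the real series:

```promql
# Inner increase(): how many packages were tagged over a one-day bucket.
# Outer sum(): merges the duplicate series that appear when the
# exporting application restarts.
sum(increase(delivery_packages_tagged_total[1d]))
```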
A
We
only
care
about
package
types,
so
some
of
them,
which
means
just
deleting
when,
when
the
application
restarts
or
things
they
just
disappear,
so
it
it
just
for
removing
noise
from
this.
So-
and
this
is
easy
to
understand-
oh
let
me
just
do,
oh
because
the
other
one
we
don't
have
data,
but
here
we
have
so
no.
I
was
going
the
opposite
direction
here,
so
this
is
one
week
right.
So
usually
we
we
are
around
five
package.
A
Each
day
last
week
was
more
of
an
exception,
a
lot
of
exceptions,
so
we
were
above,
but
we
have
five
house
deploy
schedules.
So
we
usually
we
just
tag
five
packages.
Then
we
have
the
weekend
where
it's
slowly
it
go.
It
gets
down
slowly
because
basically
it
covers
the
past
24
hours.
So
this
shows
us
that
when
we
enter
the
weekend
we
are
no
longer
tagging
and
again
the
thing
we
were
looking
at
is
monday.
So
this
today
we
start
building
up
again
up
until
five.
A
So
this
is
the
thing
let
me
go
back
to
and
no
this
is
not
just
drink
yeah
one
day,
so
I'm
going
back
to
one
day
so
that
we
can
see
the
values
how
they
align
with
the
deployment
started,
because
we
only
have
numbers
since
this
morning.
A
So
this
one
is
the
is
the
query:
there's
the
same
machinery
around
some
increase,
because
it's
just
cleaning
up
stuff.
We
only
want
to
see
the
number
of
packages
that
we
started,
deploying
over
course
of
24
hour.
This
machinery
here
is
just
because
we
had
there
was
a.
There
was
a
bug
in
the
metric.
I
mean
not
really
in
the
metric.
A
It
was
a
bug
in
the
way
we
scraped
the
metric,
and
so
we
merged
this
last
week,
but
basically
the
the
value
was
using
a
wrong
label
just
environment
which
is
overwritten
by
prometheus
when
it
scrapes
things
you
just
set
it
to
the
environment
where
the
application
is
running.
A
So
basically,
we
lost
those
information
and
we
renamed
it
to
target
amp,
and
this
trickery
here
is
just
give
me
something
that
has
a
target
amp,
because
if
I
remove
it,
we
just
have
an
extra
line
here,
which
is
the
previous
values
there
are
that
had
no
label.
So
that's
just
the
detail.
So
what
we
are?
What
are
we
looking
here?
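Putting those pieces together, the deployment-count query might look like this sketch; only the target_env label is named in the discussion, and the metric name is an assumption:

```promql
# target_env!="" drops the old unlabeled series left over from before
# the scrape fix; sum by (target_env) gives one line per environment
# (e.g. staging canary vs. production canary).
sum by (target_env) (
  increase(delivery_deployment_started_total{target_env!=""}[1d])
)
```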
A
So
the
yellow
line
is
production
deployment.
So,
let's,
okay,
so
I
want
to
show
both
things.
So
we
started
tagging
something
this
night
a
bit
before
4
a.m.
So
this
was
the
first
package,
the
second
one
was
after
six
and
then,
when
this
thing
started
counting
we
were
8
30
more.
No,
maybe
it's
almost
nine
yeah,
because
this
is
two
hour.
So
what
happened
here
is
that
at
this
point
in
time,
when
we
started
gathering
metrics,
we
already
have
two
packages
in
the
auto
deploy
in
the
azure
pipeline.
A
Then
it
starts
where
it
is
canary,
okay,
so
the
the
the
next
one
got
tracked
is
production
canary,
which
is
likely
the
package
that
got
promoted.
Here
I
mean
we
have
no
values.
This
is
just
counting
numbers,
so
we
don't.
We
can't
see
the
package,
but
if
we
look
at
how
the
number
increase
so
here
around
nine,
do
we
have
number?
Yes,
so
a
nine
utc,
the
other
canary
deployment
started,
and
here
show
me
this
at
11
yeah
it
it.
A
It
can
be
right
because
we
have
the
baking
time
plus
the
deployment
time
so
it
might.
It
may
be
that
package.
We
don't
know
we
don't
care,
because
this
is
just
for
knowing
how
many
packages
we
are
deploying
each
day,
but
it
kind
of
give
us
this
idea,
and
then
here,
where
is
1221
utc?
A
What we have here is that something got promoted to staging canary, the first environment that we have right now, and it's 12:21. So it's one hour after that, so it could be that package. And then this basically keeps counting. And then we have again the staging deployment here, which is 12:50.
C
Just pretty quick, that one? Yeah, a quick deployment. In terms of reading this graph, I was kind of expecting it to be the opposite way around. Why is production on two? What does this graph actually say? If we were explaining this to, say, Steve, what does two mean? That two packages...?
C

A
C
Cool
and
give
visibility,
I
think,
forever,
because
I
know,
unless
you
know,
I've
chatted
about
this
one-to-one,
but
maybe
just
everyone
else,
but
this
is
like
part,
one
of
us
being
able
to
determine
how
many
packages
are
suitable
for
rollback,
which
kind
of
answers.
The
question
of
like
how
big
a
problem
is:
a
post-deployment
migration.
A
Yeah
yeah,
given
that
we
touched
on
this
topic
I
have.
I
was
thinking
that
other
than
this
we
can
also
build
an
another
metric,
which
is,
I
think,
it's
interesting,
but
give
us
a
different
information,
which
is
how
many
pending
post
deployment
migration
we
have
so
in
the
in
we
have
you
know,
let
me
stop
this
sharing,
which
is
useless
at
the
moment.
A
So
if
we
go
with
the
option
of
delaying
post-deployment
migration
so
that
we
bucket
all
of
them
together,
then
there
would
be
an
important
question
to
be
asked
as
a
release
manager,
which
is,
is
now
a
good
time
to
run
post-deployment
migration,
and
so
what
I
was
thinking
yeah.
I
think
I
thought
this
to
myra
and
last
week
we
were
just
chatting
about
this
and
I
wanted
to
write
an
issue
about
that.
A
So,
basically
is
because
we
have
a
script
that
we
know
can
count
bending
positive
migration,
which
is
in
the
issue
about
detecting.
If
a
pipeline
has
positive
migration
or
not,
we
can
do
something
on
a
side
which
is
easier,
which
is
just
running
this
script.
On
the
on
each
deployment,
node
counting
for
the
number
of
pending
post
deployment,
migration
and
just
updating
a
metrics.
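A minimal sketch of that side approach, assuming a script that prints the pending count (the script path, metric name, and Pushgateway address are all hypothetical) and using only the standard library to push a Prometheus text-format metric:

```python
import subprocess
import urllib.request


def pending_migrations_metric(pending: int) -> bytes:
    """Render the count as a gauge in Prometheus text exposition format."""
    return (
        "# TYPE delivery_pending_post_deploy_migrations gauge\n"
        f"delivery_pending_post_deploy_migrations {pending}\n"
    ).encode()


def push_pending_migrations(gateway_url: str = "http://pushgateway:9091") -> int:
    """Count pending post-deployment migrations on this node and push a metric."""
    # Hypothetical counting script: prints a single integer.
    out = subprocess.run(
        ["bin/pending-post-deploy-migrations"],
        capture_output=True, text=True, check=True,
    ).stdout
    pending = int(out.strip())
    # PUT the metric to the Pushgateway's /metrics/job/<job> endpoint.
    req = urllib.request.Request(
        f"{gateway_url}/metrics/job/deploy-node",
        data=pending_migrations_metric(pending),
        method="PUT",
    )
    urllib.request.urlopen(req)
    return pending
```

Run from cron or at the end of each deployment, this would give a dashboard a per-node gauge that a release manager can check before deciding to run post-deployment migrations.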
A
Then we have a dashboard where, as a release manager, you can take a look and say: yeah, we have five pending post-deployment migrations, or none.
A
And
so,
if
there's
nothing
to
run,
we
don't
even
care
about
starting
the
process
of
running
post
deployment
migration
and
gives
also
us
better
information
about
how
many
positive
migration
we
have
each
day
that
things
like
each
other
things
like
that,
because
even
if
we
don't
delay
positive
migration,
but
we
still
count
depending
positive
migration.
We
will
see
spikes
in
this
metrics
because
we
roll
out
the
package
at
the
beginning
of
the
canary
deployment,
because
the
database
is
shared
between
the
two
environment
canary
and
main
stage.
A
...migrations in the deployment that we have today. So just looking at those bumps will give us information about how many packages each day have post-deployment migrations, and then, if we decide to delay them, we also have something to look at when deciding whether we want to run post-deployment migrations now or tomorrow.
C

E
No, I am not expecting a lot of opinions. I was expecting an opinion from a database maintainer, who actually posted something today, and I plan to answer. With that, I was thinking of just leaving the issue open for this week and taking a decision even Thursday or Friday: grab all the comments that we have, take a decision, and open up some issues, probably an epic, to start with these steps.
C

E
Yeah, I think we are mostly inclined toward option A. I saw Marin's comment about option D, but we cannot remove post-deployment migrations, because they are not only used for cleanups. They are also used for scheduling background migrations, which is the most painful point, and we cannot simply move the background migrations to regular migrations, because that basically moves the problem to another stage of the coordinated pipeline, and what we want is to remove that problem from the coordinated pipeline.
A
The biggest reason why we need them is on-premise installations; we, as GitLab.com, don't need them at all. We could just use regular migrations, but that's just one half of the business. Yeah, for sure.
B

C
And just to give context for the other people, because I know Myra and I chatted a bit about this: what we all need to trade off on is, again, the complexity. This kind of comes back to, I guess, release managers and how much hands-on work there is. So we should definitely try to factor in how much release-management overhead we add with each option, and how much extra value that gives us on other things.