From YouTube: 2022-06-13 Delivery team weekly EMEA/AMER
C
So, okay, we've got a bunch of announcements. Thank you, everyone, for adding things in there. Does anyone need any help on those, or were they just announcements? Cool, brilliant. So, discussions: Myra, over to you.
B
Yep, thank you. So, just for context: last month, when we were tagging the release candidate, that was very difficult. We faced a lot of failures, and it was mostly because that was the only time a QA was running on that environment.
B
I still think that has become some sort of mandate, because we are still only testing starting from the 15th. So I opened a proposal to update our environment on a more frequent basis: basically, instead of updating it in one day as we do at the moment, update it, I don't know, over one week each week. That would give us some time to update the environment and to test it, and I think it would be better for us to test it on a weekly basis instead of a daily basis.
D
Oh, if I remember correctly, this is a great idea, Myra. So let me start with this. If I remember correctly, we started thinking some time ago about running regular RCs for the past three stable branches. Even if there's nothing there, we'd just re-tag with RC numbers, so that we know the stable branches are working, are capable of running releases.
D
The release tools for creating a release are still working and everything, so that we don't get caught by surprise when we do a security release, which goes back three versions. So yeah, definitely a good idea.
D
Yeah, but if we go with the last three stable branches, we'd create something like, for 15 right now, 15.0.5 RC1 for instance. It's something that we can do, because we already have the stable branches. We can even create the package and install it on the pre-environment, because it will be before RC42 for the monthly release, and that environment is still open to installing new packages. So running an RC from the current stable release would just install there and work as expected.
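The "even if there's nothing there, re-tag with RC numbers" idea could be sketched as a small helper that picks the next RC number for a branch from its existing tags. The `vX.Y.Z-rcN` tag format here is an assumption for illustration, not the actual release tooling's:

```python
import re

def next_rc_tag(existing_tags, version):
    """Return the next RC tag for a stable version, e.g. 'v15.0.5-rc2'.
    Works even when no RC exists yet (starts at rc1), which is the
    point of re-tagging quiet stable branches on a schedule.
    NOTE: tag format is illustrative, not the real release tooling's."""
    pattern = re.compile(rf"^v{re.escape(version)}-rc(\d+)$")
    numbers = [int(m.group(1)) for t in existing_tags if (m := pattern.match(t))]
    return f"v{version}-rc{max(numbers, default=0) + 1}"

# A branch with one RC already gets rc2; an untouched branch gets rc1.
tag_existing = next_rc_tag(["v15.0.5-rc1"], "15.0.5")   # "v15.0.5-rc2"
tag_fresh = next_rc_tag([], "14.10.3")                  # "v14.10.3-rc1"
```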
C
Awesome, thanks for bringing that up. Go ahead.
A
Sorry, I just wanted to ask: wouldn't that count as a downgrade? Like, if you're installing an RC for, say, two versions back, the pre-environment will contain the last monthly release. Yeah.
B
Well, the status will be: right now, I just wanted to check that it was a good idea, something to move us forward to ease the pain for release management. I will add some comments there and build a stronger proposal of what it's actually going to look like, and probably ping all of you there.
A
I just want to add that I also agree it's a good idea to test earlier, before we actually tag the final release candidate.
C
And for context, for everyone who didn't get to experience the fun of the last monthly release: that was because new tests had been created that worked on all the environments except for the pre-environment. We only test on the pre-environment right up close to the release date, so it was a fun twist for the month, I think.
B
Do you want to jump to B? Yeah. This is something that came up last week, basically about who is the maintainer, or the official team maintainer, for the ChatOps project. They're using a workflow for that project where someone opens up a merge request, sees which of the maintainers are listed for merge request approval, and just picks one. Now, that project was mostly maintained by Robert, which makes me assume that the delivery team is the owner of that project.
B
But
if
you
are
the
owner,
there
are
some
other
engineers
that
are
at
listed
as
maintainers.
I
don't
know
if
there
is
an
historical
reason
for
that.
Some
of
them
are
even
outside
infra
and
I'm
I
don't
see
them
participating,
participating
enough
in
the
reviews
or
not
at
all.
So
I'm
not
sure
if
we
should
let
them
or
what
should
we
do
about
this
project
in
terms
of
reviews.
C
So I think other people should be, and we should expect people to be, contributing. Certainly we haven't historically said loudly in public, "yes, we are the owners", because that brings with it quite a lot of extra stuff that we don't necessarily have time to do. However, it is central to what we do, so I think we should assume that we probably are the owners, but let's try and avoid bringing in a heap of extra work.
C
So
I
think
if
it's
something
we
need
to
change-
and
you
know
it
really
affects
us,
then
absolutely
please,
let's
keep
on
top
of
it.
Get
things
bring
things
up.
Let's
prioritize
the
work.
We
need
to
do,
there's
a
lot
of
stuff.
I
know
I've
talked
to
like
java,
for
example,
quite
a
lot
in
the
past
about
feature,
flag
changes
and
those
sorts
of
things.
I
don't
think
we
have
to
be
the
people
who
implement
and
care
about
all
of
that
stuff.
C
There are lots of other people who care about whether feature flags get updated correctly or not. So for the question you had at the time, Myra: yes, I think that was the right thing to do. It certainly affects us, but let's not take on too much more on this right now if we don't have to.
D
Yeah, I was also thinking that maybe we may want to use the code owners feature, so that we make sure we take ownership, because it's well designed: every command is its own file. So we can say the auto-deploy command is something that the delivery team owns, and we want to be sure that our names show up when someone changes something like this. Feature flags, yeah, maybe not our thing, but still, so that we start creating some clear ownership of the parts that we care about.
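Since every ChatOps command lives in its own file, this per-path ownership could be expressed in a GitLab `CODEOWNERS` file, which is presumably what the code owners feature mentioned here refers to. This is only a sketch: the file paths and group handles below are illustrative, not the project's real ones:

```
# .gitlab/CODEOWNERS — sketch; paths and group handles are made up.
# Route auto-deploy command changes to the delivery team for review.
[Auto-deploy]
/lib/chatops/commands/auto_deploy.rb @example-org/delivery

# Feature-flag commands stay with their own owners.
[Feature flags]
/lib/chatops/commands/feature.rb @example-org/feature-flag-owners
```

Combined with an approval rule requiring code owner approval, this makes the owning team's names show up automatically on any merge request touching those files.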
A
Sorry, I think there are a lot of inherited maintainers, kind of, because it's under the gitlab-com group, right? So I think I saw non-engineers also listed as maintainers in ChatOps. I think that's from the inherited permissions.
C
There may also be... I think, in times of need, we've been quite limited on who was around, so the maintainer list could also be fairly broad to help us, in times of need, get things changed there. So, is there anything immediate we need to do on this, Myra? I didn't have the full context on your change, but do we need to actually plan to make some changes?
B
No. Well, just one announcement: I have added all of you as maintainers, so if you receive a ping like "hey, can you review my code?", it's because of that. And another thing I am going to do is add the code approvals on at least the auto-deploy namespace, so that any change to the auto-deploy command is approved by us, because we own it. That way it's a bit more organized than it is now.
D
So, there were a lot of not-promoted packages last week. I always look at the end of the week, because that's when things stabilize and we usually see a situation like this one where we deploy everything, and that didn't happen last week. We always had at least three or four packages not deployed, not promoted to production.
D
So
what
happened
so
yeah
usual
incident,
but
we
had
a
lot
of
contention
for
the
environment
because
of
the
ci
decomposition
work.
We
had
the
environment
being
blocked
for
two
hours
three
hours,
and
so
this
reflected
on
the
result
and
the
result.
I
have
to
say
also
I
don't
like
saying
this,
but
we
also
had
some
days
where
graeme
was
in
pto
last
week
and
this
reflected
as
well,
because,
basically,
that
one
two
packages
that
he
can
promote
extra,
they
definitely
change
the
the
end
result
right.
So
that's
something!
D
Oh,
let
me
go
back
to
the
last
seven
days,
because
I
want
to
show
oh
why
this
is
refreshing.
D
We
have
yeah
computers
yeah,
so
we
have
a
new
thing,
also,
which
is
this
graph
down
here,
which,
as
you
can
see,
dropped
down
significantly
because
it
was
a
bug
before
this
morning.
So
we
are
going
to
take
a
look
at
this
morning
data
only
so
let
the
thing
refresh.
So
this
is
tracking
active,
coordinated
pipelines.
D
So
this
gives
us.
This
is
the
the
first
point
in
time
where
we,
actually,
we
have
real
values.
So
this
is
checking
from
the
last
successful
coordinated
pipeline,
the
status
of
every
newer
pipeline
past.
Yet
so
you're
gonna
give
us
if
there
are
manual
running
or
scheduled,
so
it
gives
an
idea
of
how
much
we
are
lagging
behind
with
promotion
or
if
we
are
just
wasting
packages,
because
these
numbers
will
get
higher
if
we
don't
promote
because
then.
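The metric described here, counting the status of every pipeline newer than the last successful coordinated one, could be sketched as a pure function over pipeline records. The field names are illustrative, not the actual dashboard's schema:

```python
from collections import Counter

def lag_since_last_success(pipelines):
    """Given pipelines ordered oldest-to-newest (each a dict with a
    'status' key: 'success', 'manual', 'running', 'scheduled', ...),
    count the statuses of every pipeline after the last success.
    A growing 'manual' count means promotions are piling up;
    lingering 'scheduled' entries are usually delayed checks."""
    last_success = -1
    for i, pipeline in enumerate(pipelines):
        if pipeline["status"] == "success":
            last_success = i
    return Counter(p["status"] for p in pipelines[last_success + 1:])

# Example: two packages awaiting manual promotion, one still running.
counts = lag_since_last_success([
    {"status": "success"},
    {"status": "manual"},
    {"status": "running"},
    {"status": "manual"},
])
```

In practice the pipeline list would come from the GitLab pipelines API; the counting logic is the part the graph visualizes.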
D
Manual
mean
that
we
have
to
promote
manual
is
a
manual
job,
so
promotion
is
the
only
manual
job
we
have
right
now,
and
so,
if
you
see
something
in
scheduled
is
mostly
likely,
the
baking
time
can
even
be
the
delayed.
When
we
tag
the
packages,
we
are
delaying
the
check,
so
they
can
be
delayed
by
one
hour
30
minutes,
but
if
something
stays
scheduled
for
a
long
time,
that's
usually
a
promotion
and
yeah.
D
That's
that's
a
good
thing
to
see
how
I
mean
we
just
have
data
since
this
morning,
so
it's
a
bit
is
not
enough
to
make
some
extract
some
information
from
here,
but
we
will
see
next
week
and
finally,
those
are
the
graphs
within
the
product.
I
think
this
one
yeah
again
this
is
absolute.
The
struggle
we
were
seeing
are
definitely
reflected
with
the
deployment
frequency
that
we
can
see
from
the
gitlab
product
as
well,
because
yeah
we
were
peaking
at
four
when
we
usually
go
six.
D
This
is
last
month
go
five
six
deployment
a
day
at
the
end
of
the
week,
and
this
time
we
were
below.
C
Awesome. Can I show a graph as well that goes alongside this? I think I've just caught up on the blocking data that we have. It's not completely accurate, but it's better than zero. So we can see here: the red stuff is staging and the blue stuff is production. The staging one is way higher than it has been; that's going to be the CI decomposition. And production flakes a lot, but it's generally higher than some weeks we've had, so we can say...
C
That's
it
exactly
so
we
see
we
see
those
things
there,
so
I
think
we're
starting
to
be
able
to
hopefully
start
to
build
a
picture
which
is
super
useful,
so
the
cid
composition,
stuff's,
really
interesting.
C
It,
I
believe,
is
heading
to
production
this
weekend,
so
they're
in
kind
of
critical
test
stages,
but
what
I
think
we
will
start
to
be
able
to,
or
what
we'll
start
to
see
is
over
the
course
of
this
year
as
infrastructure
grows,
these
sorts
of
projects
will
be
more
common,
and
what
we'll
want
to
do
is
be
able
to
show
the
environment
contention
for
these
types
of
problems
coming
in
so
useful
to
be
able
to
see
it.