From YouTube: 2022-08-31 Delivery team weekly EMEA/AMER
A
Okay, welcome everyone. I hope you all had a good long weekend. We have a few announcements in the agenda, which I'll let you all read through.
A
Okay, that's totally fine. I will mention a couple of things which I probably would have got to if I hadn't had a family and friends' day yesterday. But since I was not working, they haven't happened; just so you know, we haven't forgotten about them.
A
We still have the retro issue; we'll pull some actions from that, so expect that to be coming up in the next week or so, and we can discuss it. Also, on the team split: Matt put up a sort of proposed view, I guess, of the team split, which is slightly different to the one we had up before on the issue. If you want to keep discussing things on the issue, please do feel free, go ahead.
A
We are still sort of refining other things, so those things are certainly in progress, even though they're perhaps not super visibly moving in the last week or so. But that's for really good reasons.
B
Do you want to do the honors, Michelle? [unclear] If you want to, as you wish, I can do it. If you want to, I don't know, should we roll a dice or something?
B
Hi. So we actually paused the auto-deploy because of the friends and family day, so there were no packages that were tagged and not promoted, and it started again last night, when green packages started to deploy again. Right now we are kind of going up. We had some problems where we had to retry some jobs, because they were timing out, taking longer than expected, but it looks like everything is more or less going.
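Since those retries came up, here is a minimal sketch, assuming a GitLab project and the standard pipelines/jobs REST endpoints, of how timed-out jobs in recent failed pipelines could be found and retried. The host, project ID, and token are placeholders, not anything from the meeting.

```python
import requests

GITLAB = "https://gitlab.example.com/api/v4"  # placeholder host
PROJECT = 12345                               # placeholder project ID
HEADERS = {"PRIVATE-TOKEN": "REDACTED"}       # placeholder token

# List the project's recent failed pipelines.
pipelines = requests.get(
    f"{GITLAB}/projects/{PROJECT}/pipelines",
    headers=HEADERS,
    params={"status": "failed", "per_page": 20},
).json()

for pipeline in pipelines:
    jobs = requests.get(
        f"{GITLAB}/projects/{PROJECT}/pipelines/{pipeline['id']}/jobs",
        headers=HEADERS,
        params={"scope[]": "failed"},
    ).json()
    for job in jobs:
        # GitLab reports timeouts under these documented failure reasons.
        if job.get("failure_reason") in (
            "job_execution_timeout",
            "stuck_or_timeout_failure",
        ):
            requests.post(
                f"{GITLAB}/projects/{PROJECT}/jobs/{job['id']}/retry",
                headers=HEADERS,
            )
```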
B
Well
yeah
there
was
a
a
apparently
qa
pipeline
last
week,
he's
already
been
tagged
and
everything,
and
mainly
the
problem
was
related
to,
if
I
remember
correctly,
to
the
token
being
rotated
and
after
a
while
it
just
like,
went
back
to
normal.
B
There was only one failure, I think because of the timeout of the trigger, but I didn't see anything very important so far. I'm open to getting more experience from a release manager point of view on this. Any questions around this before I move on to the other graphs, or anything to add? That would probably be even better.
B
Deployment frequency: we see the usual patterns. This gap is slightly bigger because we started with the friends and family this week, so I guess we can see that we had zero deployment frequency for an extra day compared to the other weeks, but more or less we are on trend with the previous weeks.
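As an illustration of the metric being described, here is a small sketch of how a per-day deployment frequency series with explicit zero days (the kind of gap a deploy pause leaves) might be computed. The dates are invented for the example.

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical production deploy dates; the gap models a pause
# like the friends-and-family day mentioned above.
deploys = [
    date(2022, 8, 25), date(2022, 8, 25),
    date(2022, 8, 26),
    date(2022, 8, 31),
]

counts = Counter(deploys)
day, end = min(deploys), max(deploys)
while day <= end:
    # Days with zero deploys still appear, which is what widens the gap.
    print(day, counts.get(day, 0))
    day += timedelta(days=1)
```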
B
We are still seeing the same trend. Probably, again, the trend that is plotted here is due to the friends and family, so we had no deployments at all, so the days from merge to deploy is actually increasing. I guess we can see, probably like here, that was probably another friends and family, or the previous one, which I think was end of June. I don't remember exactly, but it would be nice to double check them. Yeah, that's it. Anything to add?
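And for this second graph, a sketch of the days-from-merge-to-deploy calculation under the same assumptions; the merge and deploy timestamps are made up, with one pair deliberately spanning a deploy pause to show how the metric climbs.

```python
from datetime import datetime
from statistics import median

# Hypothetical (merged_at, deployed_at) pairs for a few merge requests.
pairs = [
    (datetime(2022, 8, 24, 10), datetime(2022, 8, 25, 9)),
    (datetime(2022, 8, 25, 15), datetime(2022, 8, 31, 8)),  # spans the pause
    (datetime(2022, 8, 26, 11), datetime(2022, 8, 31, 12)),
]

hours = [(dep - merged).total_seconds() / 3600 for merged, dep in pairs]
print(f"median days from merge to deploy: {median(hours) / 24:.1f}")
```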
A
I think just a question for you both, then: is there anything that we need to prioritize differently, or is there anything that, based on your experience so far in the last week, we should adjust as a team?
B
I
would
say
that
these
deployment
blockers
thingy
that
before
I
would,
I
used
to
do
a
couple
of
times
for
uamy
used
to
draw
a
couple
of
times,
maybe
something
where
I
can
integrate
problem
and
race
management
cycle.
I
would
say,
because
I
I
mean
to
for
the
full
story
when
I
get
to
go
back
to
look
at
that.
I
maybe
I'm
missing
context
I
miss
in
part.
So
it's
also
difficult
for
me
to
gather
this
information.
B
It
was
actually
pretty
pretty
easy
for
me
to
mark
on
the
graffana
annotate,
the
brand
I
add
and
generate
the
tickets,
so
it
was
actually
probably
something
that
I
don't
know.
I
see
it
that
is
kind
of
naturally
fitting
within
the
job
of
a
release
manager.
So
maybe
I'm
gonna,
I'm
gonna
open
an
issue
for
that
now
to
start
to
understand,
which,
which
would
be
like
the
best
way
to
do
so,
and
maybe
even
try
to
see
if
we
can
automate
it
in
some
ways.
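A rough sketch of what that automation could look like, assuming Grafana's annotations API and GitLab's issues API. The hosts, tokens, project ID, annotation tag, and issue label here are all hypothetical placeholders, not the team's actual setup.

```python
import requests

GRAFANA = "https://grafana.example.com"       # placeholder host
GITLAB = "https://gitlab.example.com/api/v4"  # placeholder host
PROJECT = 12345                               # placeholder project ID

# Pull annotations for a time window (Grafana expects epoch milliseconds).
annotations = requests.get(
    f"{GRAFANA}/api/annotations",
    headers={"Authorization": "Bearer REDACTED"},  # placeholder token
    params={
        "from": 1661817600000,
        "to": 1661904000000,
        "tags": "deploy-blocked",  # hypothetical annotation tag
    },
).json()

for ann in annotations:
    # One deployment-blocker issue per annotated range.
    requests.post(
        f"{GITLAB}/projects/{PROJECT}/issues",
        headers={"PRIVATE-TOKEN": "REDACTED"},  # placeholder token
        json={
            "title": f"Deployment blocker: {ann.get('text', 'unknown')}",
            "labels": "deployment-blocker",  # hypothetical label
        },
    )
```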
A
Yeah, that would be super awesome. It certainly feels like we're a lot closer to being able to automate it than when we first started; I think the labels are reasonably well applied across all of the issues these days. But yeah, it certainly feels like having that a bit closer to release managers would make it a better representation, because ultimately deployment blockers are there as a way of us creating some visibility around the things that cause pain.
C
Right, yeah, I was trying to take a look at the failed one, the question that you had about whether we can get more information about that failure. So I was quickly looking at the pipelines, and maybe there's something we can learn out of this. So maybe, can I share my screen? I want to see, yeah.
C
Yeah, so let me see if I can do it. So that's the same view we were looking at, but just for the last three hours. Okay, because we were looking at what happened here.
C
So basically, what we want to see is what happened before and after. So, for instance, here we moved from two running to one failed, one running and one scheduled. So the thing that we're looking at is: there was a valuable pipeline that actually failed, and when it goes out of this state, when the count goes down, what is actually happening? If it doesn't go down, it means that someone retried something and we are trying to recover from a broken situation. But if it goes down, it means that that failure is no longer relevant, so the pipeline was kind of lost forever; it was a missed opportunity to deploy something, because maybe it failed overnight or whatever, and by the time we found that it was a problem, it was a more recent pipeline that completed. So this is something we're taking a look at when it happens, to see if we actually lost an opportunity to deploy something or not. Yeah, sorry, I couldn't find the right moment, because there was another failure before that I was looking at, and it actually had that behavior, but yeah.
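A sketch of the same heuristic in code, assuming the GitLab pipelines API: for each failed pipeline, check whether a newer pipeline on the same ref has since succeeded (meaning the failure was superseded and "went down" without a retry, a missed deploy) or not (still recoverable). The host, project ID, and token are placeholders.

```python
import requests

GITLAB = "https://gitlab.example.com/api/v4"  # placeholder host
PROJECT = 12345                               # placeholder project ID
HEADERS = {"PRIVATE-TOKEN": "REDACTED"}       # placeholder token

failed = requests.get(
    f"{GITLAB}/projects/{PROJECT}/pipelines",
    headers=HEADERS,
    params={"status": "failed", "per_page": 10},
).json()

for pipeline in failed:
    # A retry would flip this pipeline back to running/success; if instead
    # a newer pipeline on the same ref succeeded, this one was superseded.
    newer = requests.get(
        f"{GITLAB}/projects/{PROJECT}/pipelines",
        headers=HEADERS,
        params={
            "status": "success",
            "ref": pipeline["ref"],
            "updated_after": pipeline["updated_at"],
            "per_page": 1,
        },
    ).json()
    verdict = "superseded (missed deploy)" if newer else "still recoverable"
    print(pipeline["id"], verdict)
```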
A
Awesome, sounds good, thanks for going through that. Is there anything else anyone wants to bring up or ask on the recording?