From YouTube: 2021-04-03 Delivery team weekly EMEA/AMER
A: Okay, hi everybody, happy Monday. There are some announcements we don't need to go through, and then I had a note for the discussion. I wanted to talk about something as we're moving more stuff from the deployer to release-tools. Right now we're doing this thing where we trigger a deployer pipeline, and then release-tools has a delayed job that waits whatever, 45 or 60 minutes, and then checks the status of the thing we triggered in the deployer to see if it finished or not. But this is going to create more and more issues as we transfer stuff over. For example, take the Slack announcements on successful deploys: if we trigger a deploy and then wait 55 or 60 minutes for it to be done, and it actually failed after five minutes, then we're only sending the failure notification 55 minutes later, whereas in the deployer right now it's immediate, because it knows its own status.

So I guess there are alternatives. The first alternative is to trigger the pipeline in the deployer and then have an active job in release-tools that's constantly polling it. At least then we're getting faster feedback, but that's, you know, wasting CI cycles or whatever. And then I think Myra linked an issue in the Rails project that might solve our problem, and it looks like you said it was just merged, so maybe we can talk about that.
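[Sketch, not from the meeting: the active-polling alternative A describes could look roughly like this. release-tools is actually Ruby, so this Python version is purely illustrative; the API base URL, project path and token are placeholders, and the only API call used is GET /projects/:id/pipelines/:pipeline_id.]

```python
import time
import requests

# Placeholder values; the real project path, token handling and timings
# would live in release-tools configuration.
GITLAB_API = "https://ops.gitlab.net/api/v4"
DEPLOYER_PROJECT = "gitlab-com%2Fgl-infra%2Fdeployer"  # URL-encoded project path
TOKEN = "glpat-..."                                     # read from config/env in reality

TERMINAL_STATUSES = {"success", "failed", "canceled", "skipped"}


def wait_for_deployer_pipeline(pipeline_id, poll_interval=60, timeout=3 * 3600):
    """Poll the triggered deployer pipeline until it reaches a terminal status,
    instead of sleeping a fixed 45-60 minutes and checking once."""
    url = f"{GITLAB_API}/projects/{DEPLOYER_PROJECT}/pipelines/{pipeline_id}"
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(url, headers={"PRIVATE-TOKEN": TOKEN})
        resp.raise_for_status()
        status = resp.json()["status"]
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"deployer pipeline {pipeline_id} did not finish in time")
```

[This gives feedback within one poll interval of a failure, at the cost of the polling loop occupying a worker, which is the "wasting CI cycles" trade-off A mentions.]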
B: Yeah, sure. I agree with you, the problem is a big one. In theory they merged this fix two days ago, and with it we can retry the child pipeline and the status of the child pipeline should be reflected on the parent one. Because what we had when we started working on this was: if we had a failure in the child pipeline and we retried the failed job of the child pipeline, then the status of the bridge job... I think the bridge would be okay, but the rest of the parent pipeline would still be not even cancelled. You're just left in that weird state where you can't really do anything. So in theory we should, I think, it's under a feature flag, but maybe we should take a look. It's a bit much because it's ops, so we need a package, but yeah, the fix is coming. So maybe we should play with it a bit on GitLab.com to figure out whether it really solves our needs, but the fix should be coming.
A: Yeah. Myra, did you look into this at all? What I remember from it is that we would depend a lot on passing variables to the downstream pipeline, but as far as I know the bridge job can only do that via the .env file, and I can't just do it via the CI configuration if they're not...
B: Yeah, if you want to create things dynamically then you need to do the .env. But you don't have to pass the... so, you can generate the .env file, and variables get generated from the .env file in the bridge job. You don't have to push the .env artifacts to the downstream pipeline; you can just refer to them in the bridge job, so that the trigger job has the variable.
B: So I don't know about the API... well, actually, no, I do know about the API, because I was looking at this the other day. Basically there is an API call for bridge jobs: you can call the pipeline /bridges endpoint and it gives you all the child pipelines triggered by that pipeline. But regardless of that, if this fix is working properly we can remove a lot of code from release-tools, because all these places where we actively wait for something and then retry and things like that can just go away. We just become a CI snippet: it triggers, and then it's just a job.
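[Sketch, not from the meeting: the endpoint B refers to is GET /projects/:id/pipelines/:pipeline_id/bridges. A minimal example of using it to find the downstream (child) pipelines a parent triggered; the project path and token are placeholders.]

```python
import requests

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT = "gitlab-org%2Frelease-tools"   # placeholder URL-encoded project path
TOKEN = "glpat-..."


def downstream_pipelines(pipeline_id):
    """List the bridge (trigger) jobs of a pipeline and return the pipelines
    they triggered, e.g. the deployer child pipeline."""
    url = f"{GITLAB_API}/projects/{PROJECT}/pipelines/{pipeline_id}/bridges"
    resp = requests.get(url, headers={"PRIVATE-TOKEN": TOKEN})
    resp.raise_for_status()
    return [
        bridge["downstream_pipeline"]          # contains id, status, web_url, ...
        for bridge in resp.json()
        if bridge.get("downstream_pipeline")   # bridge may not have triggered yet
    ]
```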
B: This was another aspect of the problem, because you couldn't retry the bridge job. I think jarv knows this better than me, because jarv was kind of advocating for it. I wanted to retry the bridge job, and I was playing with it, and I noticed that if I retry just a single job in the child pipeline and the child pipeline becomes green at the end, we already reflect this in the parent one.
D: So I think that's the current plan, and, yeah.
D: Great, oh great, so then maybe we can start using that. Kind of on the same topic, and sorry, I don't want to make this into a huge discussion, but are we considering how Slack notifications are going to work with the new release-tools pipeline?
A: It's just a problem inherent in our CI product, right? If we have a stage that triggers the deploy, that's a downstream pipeline; it does its thing, it fails, it comes back and says this job failed. Then it moves to the next stage, which says trigger the failure notification, right? That sends a Slack notification. Then we retry the deployer and it fails again. Maybe that updates the status of the bridge job again, but does it trigger... I don't think it triggers another notification.
B: Sorry, but this is a problem regardless of triggering pipelines or not, right? Because the on-failure job only runs once. But I think we could... maybe, I'm not sure about this, but if I remember correctly, by default jobs run only on success. You can say something runs only on failure, but I'm quite sure there is also an option to say run regardless of whether it failed or not. So it would be interesting to figure out, in the case of a failure, if we say run regardless of failing or not, whether it basically runs again when we retry something that failed.
D: I was thinking you could trigger a job that would just fire and forget: just trigger a job to run in the background for the life of your pipeline, constantly checking in on jobs to see if anything has failed and then sending Slack messages, like a Slack notifier job. But that sounds ridiculous; the product should just support running failure jobs more than once. I think there's an issue open for this. I can find it, because I think I opened one, but who knows how long it'll take for that to get looked at.
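[Sketch, not from the meeting: the "fire and forget" watcher D describes could look roughly like this. The jobs endpoint (GET /projects/:id/pipelines/:pipeline_id/jobs) and Slack incoming webhooks are real; the project path, token, webhook URL and interval are placeholders.]

```python
import time
import requests

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT = "gitlab-org%2Frelease-tools"        # placeholder project path
TOKEN = "glpat-..."
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXX"  # placeholder


def watch_pipeline(pipeline_id, interval=30):
    """Poll the pipeline's jobs and announce each failure in Slack as soon as
    it happens, instead of waiting for the whole pipeline to finish."""
    headers = {"PRIVATE-TOKEN": TOKEN}
    reported = set()
    while True:
        jobs_url = f"{GITLAB_API}/projects/{PROJECT}/pipelines/{pipeline_id}/jobs"
        for job in requests.get(jobs_url, headers=headers).json():
            if job["status"] == "failed" and job["id"] not in reported:
                reported.add(job["id"])
                requests.post(SLACK_WEBHOOK, json={
                    "text": f":red_circle: `{job['name']}` failed: {job['web_url']}"
                })
        # Stop once the pipeline itself reaches a terminal status.
        pipe_url = f"{GITLAB_API}/projects/{PROJECT}/pipelines/{pipeline_id}"
        if requests.get(pipe_url, headers=headers).json()["status"] in {
            "success", "failed", "canceled", "skipped"
        }:
            return
        time.sleep(interval)
```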
B: It's a manual job, it's a manual step. We can probably automate it: we could have something like "retry last pipeline" or whatever, and we do the right thing, assuming it can work.
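[Sketch, not from the meeting: a "retry last pipeline" helper like the one B mentions could use GET /projects/:id/pipelines to find the latest pipeline on a ref and POST /projects/:id/pipelines/:pipeline_id/retry to rerun its failed jobs. Project path and token are placeholders.]

```python
import requests

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT = "gitlab-org%2Frelease-tools"   # placeholder project path
HEADERS = {"PRIVATE-TOKEN": "glpat-..."}


def retry_last_pipeline(ref="master"):
    """Find the most recent pipeline on `ref` and retry its failed jobs."""
    pipelines = requests.get(
        f"{GITLAB_API}/projects/{PROJECT}/pipelines",
        params={"ref": ref, "per_page": 1, "order_by": "id", "sort": "desc"},
        headers=HEADERS,
    ).json()
    if not pipelines:
        raise RuntimeError(f"no pipelines found on {ref}")

    pipeline_id = pipelines[0]["id"]
    resp = requests.post(
        f"{GITLAB_API}/projects/{PROJECT}/pipelines/{pipeline_id}/retry",
        headers=HEADERS,
    )
    resp.raise_for_status()
    return resp.json()   # the retried pipeline, including its new status
```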
B: Yeah, but I would also test the behavior with the on-failure job. Let's say that in the parent, so in the triggering pipeline, some job runs only on failure. I would like to see whether it runs when the downstream pipeline fails, and what happens if we restart, because this is important for moving the notifications in general. Maybe we just say, okay, it doesn't work yet, so we don't move the notifications and we move, let's say, QA instead; I'm more interested in moving QA at the moment than moving the notifications, for example. And maybe we say: okay, we can open an issue because this doesn't work yet and doesn't solve our case, and we move other things around.
B: So, warming up right now is delayed, because we wait for the previous deployments to complete, while in the past it ran immediately, because we started warming up as soon as we completed the previous stage, the previous environment. So having something simple, like a triggerable standalone job that just warms up the environment you provide, could be valuable, because we could say: okay, we are waiting, please warm up the next one.
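[Sketch, not from the meeting: triggering the kind of standalone warm-up job B describes could go through the pipeline trigger API (POST /projects/:id/trigger/pipeline), passing the environment as a variable. The project path, trigger token and variable name are invented for illustration.]

```python
import requests

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT = "gitlab-org%2Frelease-tools"   # placeholder project path
TRIGGER_TOKEN = "glptt-..."              # pipeline trigger token, placeholder


def warm_up(environment, ref="master"):
    """Kick off a standalone pipeline that only warms up the given environment,
    without waiting for the previous deployment to complete."""
    resp = requests.post(
        f"{GITLAB_API}/projects/{PROJECT}/trigger/pipeline",
        data={
            "token": TRIGGER_TOKEN,
            "ref": ref,
            "variables[WARM_UP_ENVIRONMENT]": environment,  # invented variable name
        },
    )
    resp.raise_for_status()
    return resp.json()["web_url"]


# e.g. warm_up("gprd-cny") while the previous deployment is still finishing
```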
A: Okay, good, and thanks for that discussion, that was helpful. If there's nothing else, can we stop the recording, Scar?