From YouTube: Cross Project `triggered by` discussion
Description
The GitLab Verify team talks about the plans for adding `triggered-by` to the GitLab CI YAML to allow downstream projects to build automatically when upstream dependencies build or are released.
Spoiler alert: part of the discussion is around the name `triggered-by` changing 😃
Join the discussion here: https://gitlab.com/gitlab-org/gitlab-ee/issues/9045
A
I started recording to the cloud; let's cross our fingers that we'll be able to find the recording later. Thank you, everybody, for attending. I realize this is really difficult for some people because it's really early in the morning. Hello, Brendan. So, right off the bat, I want to talk about what this meeting is and what we need to do in it. We're here to discuss the minimum viable change for the cross-project `triggered-by` implementation.
B
The end goal of this meeting is having an approach defined for implementing this. That approach does not have to be a finalized proposal, but we need to know which approach we're going to take, because it's really late in the cycle. If we still don't have a general overview of where we want to go with this feature, and we don't have the approach figured out by the end of this meeting, then that puts this cross-project deliverable for 11.9 at risk, and it's basically a direction deliverable.
B
So we probably should focus on not missing it. I took a look at the other deliverables, and I think this one is medium profile, so there's some community attention, but not a lot. I have a link in the issue, and basically there are two approaches that we've talked about so far. Just so we're on the same page: this is basically about adding a section in the downstream pipeline YAML that defines that that pipeline is triggered by an upstream pipeline in a different project.
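As a rough illustration of the section being described, here is a hedged sketch of a downstream configuration. The keyword name and shape were still under discussion at this point, so everything below (the `triggered-by` key, its fields, the project path) is hypothetical, not shipped syntax:

```yaml
# Hypothetical downstream .gitlab-ci.yml; the `triggered-by` keyword and
# its fields are illustrative only, since the syntax was not yet settled.
triggered-by:
  project: my-group/upstream-library   # made-up upstream project path
  branch: master                       # upstream branch to react to

build:
  stage: build
  script:
    - make build
```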
C
So let's first define what is already inside the application right now. We've got `trigger`: not the functionality that this issue will implement, but the cross-project pipeline functionality is already in there. It currently uses the `trigger` keyword, and from that implementation it already displays both upstreams as well as downstreams, right? And that's how I thought of this, regardless of how many problems it might introduce.
D
The trigger, whatever we call it: I think that we are missing one important piece of the puzzle, and that is all the existing code we have. The two approaches we currently have are using the `trigger` keyword in the GitLab CI YAML, and triggering a pipeline using the CI job token through the API. Both approaches result in having a multi-project pipeline, but they are a little different from `triggered-by`, because in the former case it is always a one-to-one relationship, and what we are trying to solve here is one-to-many, and that's a little bit different.
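The two existing mechanisms mentioned here can be sketched as follows. The `trigger` keyword form matches the cross-project pipeline feature that had already shipped; the project paths, project ID, and host below are placeholders:

```yaml
# Approach 1: the `trigger` keyword in the upstream project's
# .gitlab-ci.yml, which creates a one-to-one multi-project pipeline.
staging:
  stage: deploy
  trigger:
    project: my-group/my-deployment-project
    branch: stable

# Approach 2: a job that calls the pipeline trigger API using the CI job
# token; the project ID (123) and host are placeholders.
trigger_downstream:
  stage: deploy
  script:
    - 'curl --request POST --form "token=$CI_JOB_TOKEN" --form "ref=master" "https://gitlab.example.com/api/v4/projects/123/trigger/pipeline"'
```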
B
We might have to do something different for that, which is why the approach actually defines the implementation and the implementation defines the approach, and which is why we have to discuss this beforehand. So if we go with relying on what we already have, we're kind of forced to put all of the trigger jobs at the end of the upstream pipeline, period. If we rely on what we already have, I don't think we can put them anywhere else in the pipeline.
D
I don't think that using pre-jobs or trigger jobs, however we call them, makes sense here, because it means that a downstream pipeline is going to modify the upstream pipeline and put some arbitrary job somewhere in it. Although it perhaps makes some sense, because then we have a connection between the job in the upstream pipeline and a job in the downstream pipeline.
D
Putting some job into the upstream pipeline means that we need to calculate the status of this job into the status of the pipeline itself. So, I mean, it's a little tricky, because we are putting some arbitrary job into someone else's pipeline, and we are having a state there. We've got a job and its status; in the backend code it's even called a commit status, right? So it means that we are putting some status into someone else's pipeline, and how do we treat that? Do we change the build status?
C
I understand, though. I want to make clear, you know, what the anchor points of triggering a downstream pipeline are, and I'm thinking here of an ideal situation, right, not something intermediary. When do you want to trigger your downstreams, or at least other projects that depend on yours, for example on your package, right, or your Docker image, etcetera? That is when you release that image: you send out a release, there's a new release, and that triggers all the downstream projects that rely on it. Well, yeah.
A
Yes, but it's the inverse case: this is "I want to trigger my pipeline." You might not even know about it, right, when you release. And so that's why we have this concept of `on`, right? So I might say: not on every pipeline; I might say only on tags, right, only when you tag your upstream thing. That's when I want it to go, right.
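A sketch of the tag-based filtering being described, assuming some `triggered-by`-style keyword existed with an `only`-like filter. None of these names were settled, so this is purely speculative:

```yaml
# Hypothetical: a downstream pipeline that reacts only when the upstream
# project tags a release, not on every upstream pipeline.
triggered-by:
  project: my-group/upstream-library   # made-up upstream project path
  only:
    - tags                             # speculative filter syntax
```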
C
The thing that I'm thinking here is that there's a difference between internal use and external use. For internal use it would make sense to trigger things based on the repository, right, but for an external use case it makes more sense, in my opinion, to trigger things based on release management. Am I completely wrong here?
D
That's very interesting, because in software development we do have this concept called inversion of control, and we usually invert the control when we want to decouple something. In this particular case it means that we don't necessarily want to tell the upstream pipeline owner who is using the downstream pipeline, because this is the decoupling we can have. In this particular example, it is actually quite similar to forks. When someone forks your project, you might not really be interested.
D
You know, especially when it's some popular open-source library with hundreds or thousands of forks. But sometimes you might want to see who actually forked a project, and this feature is usually quite hidden in GitHub or GitLab; you need to click some button to see what the forks are. And I think that perhaps the simplest solution here is to just forget about showing that to the upstream pipeline maintainer, or upstream project maintainer.
B
Another way of thinking about this, where we have some dependents that rely upon others: in shell scripting you have a utility called `watch`. That's how you depend on some change, and then, once that change happens, you can do something else. The reason that works is because it constantly monitors the thing that's supposed to change, and in our case we can't really do that. So that's the problem here.
B
It's really about where the triggering of the pipeline needs to originate. How do we know that when the upstream pipeline finishes it's going to trigger the downstream pipeline, really carefully, without changing the upstream pipeline? The only way we can do that is by constantly monitoring whether it happened, and that's not really something we can do.
D
I think that we currently do have bridge jobs implemented, right, and we currently have all the front-end implementation that we use to show the connection between a job and the pipeline it triggers. So it might make sense to actually use bridge jobs, indeed, but not in the form that's intuitive for us, because it's quite intuitive for us to place a bridge job in the upstream pipeline; this is how `trigger` works.
D
It's always going in this direction. I mean, the bridge job that triggers the downstream pipeline is inside your upstream pipeline, and it points from upstream to downstream, like its natural direction, something that we understand. However, in this case we are inverting the control, which means that perhaps the bridge job should appear in the downstream pipeline and point in the other direction.
D
This way we can also take the status of the upstream pipeline into account in our downstream pipeline, because we have the status attribution feature on our roadmap, and we want to make it possible to configure a bridge job in a way that status attribution works, right? Do you guys know what the issue about status attribution is? I think we do have that on the roadmap; it's about the downstream pipeline waiting for the upstream pipeline to finish. Anyway.
D
So we can use the bridge job for this: create it in the downstream pipeline when the upstream pipeline gets triggered, gets created, and we can also configure in that bridge job the behavior, whether it's going to have status attribution or not, and we can reuse the existing bridge job code. It means that we would need to change the syntax of our feature proposal, because it's not going to be `triggered-by`; it's going to be yet another job, very similar to the job that has the `trigger` keyword.
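Roughly, the proposal seems to be a bridge-style job defined in the downstream project that points at the upstream, rather than a top-level `triggered-by` section. The `upstream:` keyword and `strategy:` field below are invented to illustrate the idea:

```yaml
# Hypothetical downstream .gitlab-ci.yml: a job, not a top-level keyword,
# that declares its upstream, mirroring the shape of existing `trigger:` jobs.
upstream_bridge:
  # `upstream:` is an invented keyword pointing in the inverse direction
  # of `trigger:`; it could also carry the status attribution behavior.
  upstream:
    project: my-group/upstream-library
    strategy: depend   # e.g. mirror the upstream pipeline's status (made up)
```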
C
From my understanding, you intend, so first off: we're not going to show any downstreams in the upstream pipeline. We are going to show the upstream pipeline in the downstream pipeline, and the job that it was triggered from in that upstream, right? However, there can be multiple of those trigger jobs, you say, because it's the downstream, not the upstream, right? And what I get from you, I think, is that we'll only show the job that actually got triggered. Yes, and that makes sense to me as well.
C
I mean, we could even say the current situation, from an upstream pipeline that triggers the downstream, so the other thing that we're now implementing, is that the job can be in any stage of the pipeline, regardless. However, now that we're looking at the downstream, right, it should optimally be one of the first stages, because it always gets triggered.
A
You know, we won't make a major mistake, so I think we should be able to assume that all of this will eventually work out. And then I also updated the description; if everybody could go look at that real quick. I put the link back in Zoom in case you missed it, but yeah. So, sorry, that was two different things: one, we should take the path of least resistance on these smaller things so that we can ship; and then, two, go look at the description. I think this is what Grzegorz is proposing, and everybody's on board?
D
Also one additional concern; perhaps it's just something that appeared in my mind, and perhaps this problem is not clearly visible to us yet, but we do have pipelines on many different branches, right? So we are able to define the upstream branch, but on which downstream branches do we want to trigger a pipeline? Is that something that we have considered already?
D
Say that the bridge job would have `only` and `except` and all the features that we currently support with a bridge job. If our pipeline evaluation mechanism tells us that this job should not be created under these conditions, we are not creating the pipeline at all, right? If it gets created, then we skip all the jobs up to the bridge job in the pipeline stages and, you know, follow just normal pipeline processing.
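The evaluation rule described here, reusing `only`/`except` on the bridge job to decide whether the downstream pipeline is created at all, might look like this (again with the invented `upstream:` keyword):

```yaml
# Hypothetical: if `only` rules this bridge job out, no downstream
# pipeline is created at all; if it is created, jobs before the bridge
# job's stage are skipped and normal processing follows.
upstream_bridge:
  only:
    - master                            # speculative: only on downstream master
  upstream:
    project: my-group/upstream-library  # made-up upstream path
```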
A
This library is, in some way, completely independent from my build, right? So, say I don't need to rebuild my application; I just need to build my Dockerfile again with it and my application. That would be a use case for it, right: don't rebuild, just repackage, basically, or don't rebuild, just release. Yeah, I don't know that that's the more common use case, but it is one, maybe, yeah.
A
Maybe it was, but Mark Pundsack actually came back to me and said: hey, is `triggered-by` the right word? And I was like, you know what, I don't know, because I didn't write this issue and I've just been kind of going along with it, because it was `triggered` and then turned into `triggered-by`. But we should have expected problems; we did, because we already...
A
Yeah, I mean, I agree. That's another argument; maybe we've got a little confirmation bias, but we keep coming up with new arguments for why we should do it this way. And that's another argument to do it this way: the UI will just work, because it's a job, right? I think it'll work like the existing ones.