From YouTube: Feature Vision for Parent Child Pipelines
Description
Pipeline Authoring group and Continuous Integration group discussion about parent-child pipelines - what it is; which team owns what; how to improve it
A
Okay, thanks, and welcome everyone to our feature vision session for parent-child pipelines. I hope you had a chance to take a look at the issue that I sent to our team in Slack, as a head start on what the differences are between multi-project pipelines and parent-child pipelines. That is the topic for this session.
A
I added the agenda to the meeting invite this morning, so if you didn't get to see it, go ahead and grab it now. What I want to get out of this session is to start the discussion of how we want parent-child pipelines to evolve: the vision for the feature as a whole. I know we probably won't get through everything, so at the top of the agenda I put a link to the issue for async collaboration.
A
So we can continue some of this discussion there, and if, out of that async collaboration, it seems we want another live session to talk through more things, I can schedule that as a follow-up. I'll upload the recording from this to Unfiltered as well, and there's a link to the playlist where I'll add it. For today's topic we'll start with Fabio; you have the first agenda item to talk through.
B
Yeah, so I think it makes sense to start with the definition of parent-child pipelines, or at least what child pipelines are in comparison with normal standalone pipelines. Simply put, they are considered subcomponents of a parent pipeline. The purpose of child pipelines is to break down a complex pipeline into smaller, more manageable pieces.
B
The overall goal is to contribute to the goal of the parent pipeline, and the analogy there is breaking down a program into smaller functions or, as Shiragosh was saying, a process spawning sub-processes.
B
With multi-project pipelines, it just happens that one pipeline is triggered by another, but they don't have control over each other, except for maybe passing some variables downstream. With parent-child pipelines, the parent has control over the structure of the pipeline, because it's in the parent where we define the child pipelines: the structure of the child pipelines is actually defined in a file that is chosen by the parent.
A
Thank you, Fabio. By the way, I moved the agenda item that was about the differences in that issue to the top of the agenda, since it's just for reading. Jigosh, did you have an additional comment there?
A
Okay, thanks for that overview. It actually was not clear to me, until you framed it that way, Fabio, how to think of parent-child pipelines.
D
Sorry, do we have ideas for use cases for parent-child pipelines?
E
Yeah, monorepos would be a big one.
A
Yeah, when we delivered parent-child pipelines, that was the main use case, Miguel: teams that wanted to make their monorepo a little easier to understand, and to break it into smaller components. That was the main use case.
F
Yeah, something we added in 13.5, I think, came from an old issue, or a couple-of-months-old issue: we got a request from customers asking to be able to trigger a child that lives in a different repo. Up until 13.5 it wasn't possible, so we added it, and I think this is where we maybe got confused about what the differences are between multi-project pipelines and parent-child pipelines. At least for me it was a big source of confusion.
E
Yeah, they could; it would just mean they would need to change the architecture of their project. Instead of using a single repo, they would break their project, say front end and back end, or API and front-end client, or client and server, into two different repos to handle their pipelines that way. But a lot of organizations prefer the monorepo, so they handle their CI with parent-child pipelines instead.
B
Just a quick clarification on the feature we added in 13.5 about pipelines and other projects: the child pipeline is still going to be triggered in the same project. The difference is that we are allowing the configuration to be taken from another project; that configuration dictates the structure of the pipeline, and we use it to generate a pipeline within the same repo.
B
So the purpose of the parent-child pipeline still remains the same, breaking down a pipeline into smaller pieces, but rather than defining the structure of the child pipeline in the same repo, it can be defined in a different repo, which could also be, let's say, a set of recipes for different things. You can say, "I want to create a child pipeline for this thing," and actually take that configuration from a different repo, for example. It's just a matter of where the configuration is coming from.
B
Because we're using include, we inherit all the power of include, so we can, in theory, have a sort of template that we can spawn as a child pipeline by running whatever is in that template, or take the configuration from a remote URL, or whatever.
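As a rough sketch in GitLab CI YAML (the job names, file paths, and project path here are illustrative, not from the discussion), the two shapes being described look like:

```yaml
# Parent .gitlab-ci.yml: each trigger job spawns a child pipeline
# whose structure comes from the included configuration file.
child-from-same-repo:
  trigger:
    include: ci/child-pipeline.yml          # file in this repo

child-config-from-other-project:
  trigger:
    include:
      - project: my-group/ci-recipes        # hypothetical project path
        ref: main
        file: templates/child-pipeline.yml
```

In both cases the child pipeline itself still runs in the current project; only the source of its configuration differs.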
A
All right, I'm glad you clarified that, because, Dov, it sounds like the integrity and the original intent of parent-child pipelines is still preserved, even with that recent feature that was added in 13.5.
A
Yeah,
because
the
one
thing
that
dove-
and
I
wasn't
sure
is-
is
with
that-
recent
change
in
13.5
doesn't
mean
that
the
the
the
the
concept
of
multi-project
pipeline
or
parent
pipeline
has
kind
of
merged
into
one,
and
that's
not
the
case.
Okay.
A
Okay, are there any other comments from folks on agenda item one, as far as questions about what a parent-child pipeline is?
I
I read somewhere in one of the issues that in an include we cannot pass a variable, but in a parent-child pipeline that's possible. So is there any plan to enable passing variables in includes?
B
So today it's not possible to have whatever is specified in an include be taken from a variable, or to use variables there; that's not possible. I think we do have an issue about using variables within includes. But it is possible to pass variables to child pipelines from the parent, as well as for multi-project pipelines.
B
For both types of downstream pipelines, it's possible to say "trigger this pipeline" and have the upstream pipeline also pass some variables downstream. And this is a kind of interesting scenario, because by passing variables we could also use the concept of variables as a sort of input parameters, so the child pipeline can stay the same.
B
Let's imagine it's more like a function in a programming language: we pass in some variables, which would be the equivalent of parameters, and we would be able to run the same sort of structured pipeline by passing different parameters from different repositories, for example, and that will run as a child pipeline. So the blueprint is defined in the YAML file, but the running instance of that pipeline will be within the same project.
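The function-with-parameters idea maps onto YAML roughly like this (the job names, file path, and the COMPONENT variable are invented for illustration):

```yaml
# Parent pipeline: variables on the trigger job are passed down to the
# child pipeline, like arguments to a function call. The same child
# "blueprint" runs twice with different parameters.
build-docs:
  variables:
    COMPONENT: docs
  trigger:
    include: ci/component-pipeline.yml

build-api:
  variables:
    COMPONENT: api
  trigger:
    include: ci/component-pipeline.yml
```

Inside `ci/component-pipeline.yml`, jobs would then read `$COMPONENT` the way a function reads its parameter.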
A
Okay, we don't have an issue for that yet. Yeah, one of the things I do want to come back to, and you may not be aware of this, is that we have an issue to standardize the terms we use, the nomenclature for pipelines, upstream, downstream; but that's another topic too. I'll find that issue and share it with you.
D
I have another question: who is the parent and who is the child? I just want to understand: when we have a pipeline defined and we include it into another pipeline, does that make the included pipeline the child pipeline, or...?
E
So imagine you have your repo and you have your root CI YAML file. That's going to be your parent, and then anything you include, most likely, will spawn your child.
B
I think it's not quite that. We use the include feature for two different purposes, and I think it's important to highlight that. The one we've been using since before parent-child pipelines is: include a snippet of YAML configuration in the current YAML, and that will cause the whole configuration to be merged.
B
But we are using the same feature for a different purpose. What we are saying with parent-child pipelines is: trigger another pipeline, like spawning another sub-process of this pipeline. When we spawn a sub-process, we need to tell it what configuration it should use, what this pipeline should look like, and in this case we use the include to say: use this YAML file, or use a combination of these YAML files, and whatever is there.
B
But when we spawn a child pipeline, we need to tell it where its configuration file is, because it can't be the same `.gitlab-ci.yml`; otherwise we would just create another identical pipeline. We need to create another pipeline with a different structure, so we basically tell the child pipeline which structure to use, and we pass that through the include keyword. That's why we have two different types of include: one is to merge.
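The two uses of `include` being described here can be sketched side by side (the file paths are illustrative):

```yaml
# Use 1: top-level include. The external YAML is merged into THIS
# pipeline's configuration, producing one combined pipeline.
include:
  - local: ci/shared-jobs.yml

# Use 2: include under trigger. The external YAML becomes the
# configuration of a SEPARATE child pipeline spawned by this job.
spawn-child:
  trigger:
    include: ci/child-pipeline.yml
```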
E
Yeah, that's the key difference. I could definitely see how that's confusing. I left a link in the agenda with a test repo that I use personally for parent-child pipelines; it's public, so if you want to go look at that and dig into it a little bit, it's there.
J
Okay, and I have a question on that subject, because we're saying the include goes with the trigger. From what I've used personally with parent-child pipelines, I've always used the trigger keyword, which links to a different CI config file, even if it's in the same repo, right? And then, instead of using the include keyword, I can link directly to a YAML file. Is that technically still a parent-child pipeline, because it's still run in the same repository, or is it different?
A
Thanks. Hey, Miguel, back to your question about the use cases: one of the reasons users wanted to be able to break down the monorepo was also performance. When the monorepo is broken down into smaller child pipelines, those child pipelines can run concurrently, which helps with the execution time of the pipeline. And then for visualization as well: when you can visualize parts of the pipeline as a child, as opposed to the entire thing, it made it easier on our system too.
J
And it's also good for namespacing, because you can then use the same name in a different pipeline, right? The same stages. You could say that for this part of the application it's still build, test, and deploy, and you could have these three stages broken down for each part of your application, and that makes sense.
A
Okay, so, Dov, I might come back to your number two; we might talk about that async in the issue. I do want to get to some of the questions about what our vision is for this feature, what we basically want it to be, or not want it to be. Thanks for adding those linked epics, Dov. So, skipping two and three, because those are a repeat of what we'll cover in the async issue, we'll come to number four. Peyton?
E
Sure, yeah. I brought this up in our internal CI meeting, but now we have some PA members here with us. Initially, back before the split, Fabio and I worked on this feature, and now that I'm on CI, as Fabio still is, I believe back end will still support that feature.
E
But now, since this feature is tightly coupled with visualization, and Pipeline Authoring is currently handling that, I just want to clarify: will PA still support this on the front end? We'll have to, you know, still coordinate with our back end.
C
I think this is a very similar problem to bridge pipelines, because we can have a lot of work needed on the visualization side, how we visualize and model pipelines using the syntax in the CI YAML file, and then we can have a lot of work around processing, how we read the content of the child pipeline and things like that.
J
I think we will collaborate on this feature, just because it makes sense: we're both going to touch it. I think syncing is going to be the most important thing, and we might find a formula just by experience.
J
Like: this kind of issue is on the front end, so it might make sense to send it to PA, or this might be in CI. I know that the goal of how we're refactoring the graph right now is so that it can be owned separately, because the graph is going to be the same for visualization as it is for the CI run. But the difference is that they will each have their own subcomponents, which they can own.
J
So the subcomponent for downstream and upstream in there could technically be owned purely by CI at some point, just because it's not going to be visualized the same way that it will be for execution, I think. But I do think we'll have to sync up and see where that makes sense.
J
Yeah, that was the idea. It wasn't to cut it off entirely; it was more that when it's purely visualizing your YAML, in terms of what the structure looks like, that would be purely PA. But there's going to be overlap in the graph, because sometimes CI might have a very specific meaning, like how it executes, or an action you have to perform on the graph, and that's why it's important that we have a separation there, where we can each own our subcomponents.
E
I get what you're saying, and that makes 100 percent sense, but I guess it's more of a question for product and the EMs: how we're going to handle this so the split isn't confusing.
A
Don't
I
don't
have
a
good
answer
peyton
to
be
honest,
because
we're
gonna
have
to
do
it
by
feel
at
first
I'll
work
with
cheryl,
sam
and
and
dove,
and
between
the
four
of
us,
we'll
probably
have
to
figure
out
on
a
case-by-case
basis.
A
Yeah, it's a tough one. I don't have an answer today for that. Okay, so that's something we'll have to work out.
A
No, that's not what we're saying. Peyton's question was specific to the front-end work.
A
If there's something the CI team is changing because of the processing of parent-child pipelines, or any bridge pipeline, and there's some front-end work that has to change, what happens there? Does it get delayed and passed over to the PA team, which would then have to prioritize that work? My preference is that whatever team is working on that feature or change owns it and gets some consulting from the DRI team, but I don't know.
A
Yeah, the good news is there's always going to be some cross-training and shared knowledge across the two teams. I think of us as sibling teams; it seems a little more PC than calling us sister teams. We're sibling teams, and I think we'll always be joined at the hip, some way or another. Okay.
F
The source of some of the confusion is mainly around the overlap. For instance, artifact handling in parent-child pipelines: artifacts are CI, and parent-child is PA. So we have issues where we've been asked to do something around artifacts in parent-child pipelines, for bugs, and this is where we cannot find the right owner, because if it's about artifacts, then it's one team, and if it's about child pipelines, it's the other.
A
I
I
agree
with
cheryl's
comment
in
the
zoom
chat.
What
you're
saying
is
another
reason
we
need
to
get
do
away
with
those
scope,
labels,
which
is
a
big
discussion
for
another
session.
I
don't
want
to
hijack
this
one.
Let's
come
back
to
that
one,
but
there's
an
issue
for
that
too.
Okay,
so,
agenda
item,
three
fred:
let's
go
through
your
thoughts.
There.
J
So
it's
just
a
quick
thinking.
I
know
that
and
I
I'm
kind
of
anticipating,
because
I've
seen
fabio
answer,
but
would
it
make
sense
in
the
graph
to
differentiate
visually
when
you've
triggered
like
your
downstream
is
a
child
parent
child
pipeline
or
when
it's
a
multi-project
pipeline
and
with
fabulous
you
can
voice
your
thoughts.
B
Yeah, I think, well, it really depends on how users are using it, but in my opinion it makes sense to distinguish whether a pipeline is running within the same project, so also in the same scope, or is a trigger against a different pipeline, where we don't really have control.
B
So,
for
example,
in
in
our
merge
request
workflow
when
we
develop
on
gitlab
work
code,
we
might
sometimes
trigger
a
downstream
pipeline
on
the
the
cube
to
run
the
qa
like
end-to-end
system
tests,
and
now
arguably
we
might
not
have
control
of
what
is
in
that
in
a
different
project.
So
again
we
might
want
to
distinguish
aware
or
something
I'm
running
this
pipeline
or
something
else
is
triggering
somewhere
else
where
it's
just
because
the
pipeline
is
configuring
certain
certain
way,
but
it's
something
outside,
rather
than
being
something
like
a
child
pipeline.
B
That
is
actually
an
important
part
of
the
pipeline.
So
it's
actually
a
soap
part
of
the
pipeline.
So
I
think
these
do.
Two
distinction
are
important
right
now
we
actually
putting
the
screenshot.
We
are
putting
everything
under
the
same
downstream
column
and
and
they
all
look
the
same
while
reality,
there
should
be
two
different
things,
so
I
think
we
were
thinking
about
having
a
way
where
you
can
expand
by
clicking
on
the
job
that
triggers
a
child
pipeline,
expand
the
pipeline
from
there.
J
So you keep navigating, and you don't know if you're looking at a different project or a sub-part of your pipeline. I think the way to think about how we could break it down is that upstream and downstream are the general terms, inclusive of both parent-child and multi-project, if I'm not mistaken. So we can have common behavior for downstream and upstream, and then specific behavior for parent-child and for multi-project pipelines.
E
I would just like to note that it was kind of a very small MVC that we got out there; we just stuck some labels on there. Our plan was to keep iterating on it and make it better, but it got put on hold, so all these ideas to improve it are great.
J
Yeah, for sure. I didn't want my comment to come off as criticizing what we already have; I know how it is. It's more that I think it would be a very interesting next step, just starting visually and making sure people can see the difference. It can even trigger you to associate what you're seeing with your YAML, and then you can start noticing: when I wrote this differently, there was this visual difference.
I
So I think, by doing that, we can also figure out whether users prefer to know that the child pipeline belongs to a different project or not, and whether those kinds of small things matter to them and should be represented visually as well.
A
I
I
moved
yeah,
I
moved
dove's
question
earlier
to
number
four
there.
I
don't
the
answer.
Your
question
is,
I
don't
think
we've
done
a
validation
with
our
existing
users.
Around
parent
shop
pipeline.
A
Yeah
is
that
something
you
and
nadia
want
to
do
dove
or
you
want
vitica,
and
I.
A
Okay, so, folks, the outcome of some of that validation research with users will probably also feed into whatever vision we have for this feature, but that's to be determined. Okay, anything else? By the way, the screenshots that you pasted in agenda item 3e: I don't think that's only parent-child pipelines, right? Because that illustration shows, under the downstream column, that one of them doesn't have a child label. So it's just a downstream project, external to it; it's not a child.
E
Yeah, I took that screenshot, so really what's going on is just what Fred and everyone were talking about earlier, some of the confusion: it's not clearly separated between parent-child and downstream. So right now, in that column, you can have both, because in your repo you can trigger parent-child pipelines and multi-project pipelines from the same repo.
E
If you navigate from the child to the next pipeline, that will be your parent pipeline, or, sorry, it'll lead back to your parent pipeline. It's super confusing, sorry.
B
It's just that at the moment the two things are not well correlated, so there are some improvements to make, and we have an issue where we can improve that. Basically, when you look at a child pipeline, you want to see immediately where it is coming from, which job triggered it in the parent pipeline, and it could also take the name directly from the triggering job, because the trigger job doesn't do anything else than trigger a pipeline.
E
And
I
think
in
a
previous
milestone,
we
did
where,
if
you
hover
over
the
trigger
job
so
like
microservice
a
there,
if
you
hover
over
it,
it'll
highlight
their
relative
downstream
pipeline
as
well,
so
that
that's-
and
that
is
the
issue
for
this.
N
It's dependent, but not directly dependent on it. I mean, we have two options for passing variables to downstream pipelines, but for this we had a conversation with Marius about the dependencies keyword. So right now every job can have the needs keyword and the dependencies keyword, but bridge jobs cannot have the dependencies keyword.
C
However, bridge jobs are a little bit different, because bridge jobs create a new pipeline in a slightly different context, so the dependencies context in the downstream or child pipeline is going to be a little bit different. So it might actually be quite difficult to implement, but if we ever implement this, it would be quite useful, I guess.
N
Dependencies in bridge jobs do not use artifacts; artifacts are used for getting data and environment variables. For bridge jobs, because bridge jobs can pass variables to the downstream pipelines, they can use artifacts for just that.
C
I
think
if
we
ever
implement
dependencies
for
big
jobs,
the
way
it
should
behave
is
that
when
you
define
a
dependencies
keyword
on
a
bridge
job,
the
jobs
in
a
downstream
pipeline
receive
artifacts
from
the
jobs
that
were
mentioned
mentioned
in
the
dependencies
in
the
bridge
right.
So
this
would
be
this
kind
of
mechanism
that
is
actually
passing
variable
artifacts
to
downstream
pipeline
to
every
job
in
that
pipeline.
It
could
actually
be
interesting,
but
I
think
we
should
actually
create
an
issue
and
discuss
this
asynchronously.
A
Async, okay. So, actually moving on to agenda item six: this is something that you, Fabio, had mentioned in the CI weekly meeting last week that I was really curious to have a discussion on. Do you want to talk through your thoughts on item six?
B
Yeah, so it's kind of linked to how we handle artifacts with parent-child pipelines in general. When we first implemented the MVC for parent-child pipelines, we actually reused the `strategy: depend` feature that was already implemented in multi-project pipelines, and which by default is not enabled.
B
So
it
means
that
when
you
run
a
pipeline
either
multi-project
pipeline
or
child
pipeline
without
specifying
strategy
depend,
it
will
run
as
a
synchronous.
So
it
will
be
more
like
a
sort
of
detached
pipeline
that
runs
and
finishes,
but
the
parent
pipeline
will
not
wait
for
it.
I
will
not
even
try
to
it
will
not
be
affected
by
the
the
result
of
the
child
pipeline,
and
so
this
is
kind
of
the
default
feature
a
default
setting.
B
So
if
you
want
to
wait
for
the
child
pipeline,
where
the
parent
pipeline
spawns
something
and
then
waits
for
it
and
also
gets
the
final
status
from
the
parent
pipeline
is
reported
back
into
the
trigger
job,
you
have
to
use
strategy
depend,
while
this
might
make
sense
for
a
multi-project
pipeline
when
working
when
working
on
new
features
that
we
are
implementing
about
handling
artifacts
with
parent
child
pipeline.
I
realized
that
it's
it's
a
it
should
be
highly
recommended
to
have
strategy
depend
for
parent
child
pipeline.
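In YAML terms, the opt-in behavior being discussed looks roughly like this (the job and file names are illustrative):

```yaml
# Without `strategy: depend`, the trigger job succeeds as soon as the
# child pipeline is created; the parent does not wait for its result.
# With it, the trigger job waits for the child pipeline to finish and
# mirrors its final status.
run-child:
  trigger:
    include: ci/child-pipeline.yml
    strategy: depend
```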
B
I know we can't just change this to be enabled by default for parent-child pipelines, because that would probably be confusing, but I think we should move towards making it the recommended behavior, where, when you create a child pipeline, by default the parent waits for the child pipeline unless you want it to run asynchronously.
C
It's the same old problem of introducing changes that are not really backwards compatible and allowing users to opt in to the new syntax or new behavior. There's this issue about CI YAML versioning, or behavior versioning, or any kind of mechanism that would allow us to break compatibility in a way where the user opts in to the new behavior, and we never break behavior for users that do not know about the change. So users should opt in to a new behavior.
C
So
how
to
achieve
that?
I
think
it's
important,
because
we
are
seeing
more
and
more
problems
like
that.
Then
we
still
do
support
syntax
that
have
been
abandoned
like
six
or
seven
years
ago,
but
we
still
need
to
support
all
the
code
around
that
because
there
is
no
way
to
actually
properly
deprecate
anything
or
measure
the
the
usage
in
an
efficient
way.
But
in
this
case
I
do
agree,
it
should
be
default.
One
day,
perhaps.
B
Yeah,
so
I
I
think
there
might
be
a
kind
of
a
halfway
solution
we
could
use
where
so
right
now,
like
the
strategy,
depend,
means
wait
for
the
downstream
pipeline,
but
also
mirror
this
the
downstream
status
when
that
pipeline
finishes
and
so
with
the
handling
of
artifacts.
In
reality,
what
we
need
is
to
solve
the
waiting
part.
We
don't
really
need
to
care
about
the
whether
the
downstream
pipeline
passes
or
fails,
and
so
and
I'm
not
sure,
maybe
maybe
there's
some
kind
of
a
simple
discussion
we
can
have
about
that.
B
This
is
something
we
could
do
to
move
forward,
but
basically
the
main
problem
is
that
we
want
to
make
a
available
all
the
artifacts
that
are
generated
within
the
pair
and
then
child
pipeline
entire
hierarchy.
B
We
want
to
make
these
reports
artifacts
available
for
the
merger
question
to
be
visible,
and
so
because
the
merge
request
asks
the
pipeline,
the
head
pipeline
for
all
the
reports
that
can
be
displayed
in
the
merge
request.
The
parent
pipeline
should
look
itself
if
it
contains
any
reports
and
all
the
child
pipelines
so
kind
of
cascade
this.
This
search
of
art
of
artifacts
and
reports
and
then
whatever
is
being
collected,
can
be
displayed
in
the
merge
requests.
So
this
means
the
parent
pipeline
can
have
visibility
across
the
entire
hierarchy
for
reports
they
are
generated.
B
But
for
this
to
happen
effectively
and
we
have
to
ensure
that
all
the
child
pipelines
complete
before
the
parent
pipeline
completes,
because
otherwise
that
can
lead
to
inconsistent
results.
So
this
is
kind
of
the
problem
we
we
have
and
it's
not
something
we
have
to
discuss
now
in
detail,
so
we
have
to
solve.
B
But
there
is
this
need
of
trying
at
least
to
move
towards
us
struck
default
strategy,
depend
or
try
to
make
something
like
a
maybe
a
default
waiting
for
the
pipelines
and
and
medically
it
could
be
maybe
introducing
a
different
type
of
strategy.
But
this
is
kind
of
the
problem.
L
Yeah, I just wanted to jump in and say that if we use the feature ourselves, it's going to give us another data point, but it's also just going to be our opinion. The problem validation effort that we're planning to run between PM and UX, I think, will also help us a lot. So I think we should consider everything, including what customers are requesting.
L
If
we
can
dock
footed,
that
would
be
great
to
get
that
kind
of
feedback,
and
you
know
firsthand
experience
with
how
it
should
work
and
like
what
what
is
annoying
and
then
we
should
also
just
talk
to
our
users
and
maybe
even
run
a
survey
to
get
some
quantitative
data
as
well.
That
will
help
us
understand
exactly
how
it
should
behave
and
how
how
needed?
Is
it
really
by
like
what
what
portion
of
users
and
so
on?
C
Yeah,
I
agree.
I
think
that
feedback
from
users
is
very
valuable.
It's
almost
priceless,
but
it's
even
more
important
and
valuable
when
you
understand
why
users
do
need
and
want
something
when
you
just
you
know,
collect
feedback
and
read
about
it
and
don't
really
understand
why
this
is
something
that
users
need
or
why
that's
something
that
you
want.
C
The handbook, the www-gitlab-com project, would be perfect, because I know that we are trying to separate pipelines for parts of the handbook: the blog and the handbook are going to be separate things. So having separate pipelines for the blog and the handbook could be a nice use case for parent-child pipelines. In the case of the GitLab project itself, I know that we have separate pipelines for docs.
C
We
have
separate
pipelines
for
a
few
different
things
actually
right
and
we
could
use
child
parent
pipelines
to
separate
it
even
better
and
actually
talk
food.
You
know
so
that
that
might
be
interesting
that
that's
going
to
be
difficult.
I
think
that
dog
fooding
is
always
related
to
some
kind
of
pain
that
we
need
to,
like
you
know,
use
our
own
features
then
discover
all
the
short
comings
like
problems
fix
them
and
it's
never
smooth.
C
But
in
my
opinion,
it's
worth
it
how
to
deal
without
a
negative
impact
on
developers,
productivity,
that's
a
different
story,
but
something
we
could
actually
work
to
get
done.
B
On the next item: we have an issue about CI recipes somewhere, I just need to find it, but basically, when we think about what could be the next use of parent-child pipelines, what other uses there can be aside from breaking down a pipeline into smaller pieces, I think, with the concept of CI recipes, I imagine parent-child pipelines being used more like the equivalent of GitHub Actions.
B
We
have
a
defined
structure
of
a
piece
of
ci
that
does
something
specifically
is
self
self
contained,
and
but
then
we
can
call
that
from
any
pipeline
and
say
I
I
need
to
deploy
something
on
aws,
but
I
know
we
have
a
template
for
deploying
on
the
aws,
for
example,
a
yaml
file
to
the
problem.
Aws.
It
just
requires
some
input
data
and
then
can
do
that
for
you
so
rather
than
you
try
to
understand
how
to
include
that
into
your
pipeline.
B
You
can
possibly
just
call
that
as
a
child
pipeline
pass
the
necessary
data
and
treat
that
as
a
black
box
and
and
and
use
that
that
way,
so
that
could
be
another
kind
of
interesting
solution.
I
I
it
happened
like
in
in
the
past.
While
we
were
working
with
some
customers
that
they
were
very
interested
in
parent
child
pipelines,
they
were
also
interested
into
how
they
can
have
like
a
repository
where
they
only
manage
ci
files,
and
that
can
is
actually
developers
don't
have
access
to
that.
B
So
this,
I
think,
is
an
interesting
idea
of
how
we
can
have
a
repository
where
you,
you
have
all
your
recipes,
that
you
need
for
the
entire
group
or
or
for
instance,
and
and
then
you
have
different
projects
just
referring
to
through
snippets
of
it
to
actually
run
independently
as
a
child
pipeline
right.
So
that
could
be
another
kind
of
interesting
ideas
to.
B
Yeah, because today, if you want to use a template, you still have to read inside the template and understand exactly what it does, and then arrange the jobs that are coming from that template within your pipeline, because you have to see where they fit within the stages of your pipeline. Whereas if, instead of having a template that you include, you have CI files that you just run independently as child pipelines...
B
Then
that
will
run
in
the
stage
where
you
decide
to
trigger
it
and
and
it
I
think
it
removes
a
lot
of
the
learning
curve
and
trying
to
understand
how
to
use
certain
templates
or
how
to
do
a
specific
thing,
and
it's
treated
more
like
a
black
box
or
a
building
block
where
you
can
simply
call
it
like
a
function
that
is
there
available
it.
Just
me,
we
would
need
to
document
for
every
sort
of
every
recipe
that
we
will
make
available.
F
Where are we going to host this? I mean, maybe we can work on that async, but I just want to understand: when we trigger a child pipeline, we need to mention the included YAML file. So are you suggesting to bundle this YAML file? But if we cannot touch that project, if we cannot use the project, how can you do it?
B
So, I think a boring solution would be to make a public recipes project on GitLab, where we host a lot of YAML files, per category, in different folders, whatever, and then we could...
B
Each
jump
of
each
folder
will
have
like
a
readme
and
a
yaml
file,
and
then,
if
you
understand
how
to
use
that,
you,
you
read
the
readme,
you
understand
what
input
parameters
are
there,
what
that
that
snippet
of
pipeline
does
and
then,
if
you
want
to
run
from
your
project,
you
could
be
on
a
different
group,
because
somewhere
else
completely
within
the
gitlab
instance,
you
would
do
you
will
use
the
trigger
include,
and
then
you
specify
the
project
and
the
file
that
you
want.
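Under this idea, a consuming project might reference a recipe roughly like this (the recipes project path, recipe file, and input variable are hypothetical, sketched from the discussion):

```yaml
# Consumer project: run the "deploy to AWS" recipe as a black-box
# child pipeline, passing its inputs as trigger-job variables.
deploy:
  variables:
    AWS_REGION: eu-west-1              # hypothetical recipe input
  trigger:
    include:
      - project: our-group/ci-recipes  # hypothetical recipes project
        ref: main
        file: aws/deploy.yml
    strategy: depend                   # wait for the recipe to finish
```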
B
So what we would do is spawn a child pipeline, take that file, evaluate it, and that becomes the structure of the child pipeline that runs. We pass in the variables, and you don't have to do anything else. You don't even need to maintain it; you simply have to reference it. For sure there are problems, maybe with versioning, that we would want to look at: if somebody changes the recipe upstream, they can break everything else.
B
There
might
be
some
kind
of
problems
like
versioning
and
security
things
like
that,
but
this
is
like
the
general
idea
of
how
what
that
would
look
like.
Then,
if
we
don't
want
to
make
it
like
a
public
project,
we
want
to
make
part
of
our
the
way
we
store
templates
within
any
gitlab
instance.
By
default,
there
could
be
maybe
instead
of
templates,
could
be
another
another
category
called
recipes
which
would
be
the
same,
and
then
you
can
include
that
recipe
like
like
a
template
and
the
same
thing
we
still
by
leverage.
A
So what I'll do, from the notes from the agenda, is add the ones that we want to take further action on into our issue for async collaboration, the issue for the future vision for parent-child pipelines, and we can continue the discussion there. We're pretty much out of time here. I'll upload the video for anyone who wants to go back over this discussion and examine it. Let me know; at some point I think we should regroup after the problem validation...
A
Research
is
done
to
to
go
over
that,
because
I
think
that
is
probably
going
to
impact
what
we
want
to
work
on
next
as
well,
and
then
on
the
question
of
are:
are
we
dog
fooding
it?
I
made
a
note.
Nadia
will
include
in
our
research
our
internal
teams
to
find
out
who's
who's
dog,
fooding
it
and
find
out
what
their
their
feelings
are
about.
The
future
sounds
good
thanks.
Folks,
super
interesting
discussion.