A
All right, we should be recording, so the purpose of this meeting is to talk about those pipeline issues. We have the issue of triggering pipelines when another project is rebuilt, and there's been some confusion around what we're building. I think everybody understands the issue; the question is just whether we are building it in an extensible way or not, and whether it's worth it. That seems to be what the conversation is coming down to.
A
So it's better from their perspective if the client — project B in this case — can just say, "I'm dependent on it, so trigger me when that's run," without having to coordinate anything with that team. The original way we were going to build this was that there would be a top-level needs keyword that would just let you specify the project and branch that are relevant. That would then trigger the pipeline. But the way it looks like it's being implemented is that there is a job inside of pipeline B that does this.
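As a rough sketch, the two shapes being contrasted might look like this in project B's .gitlab-ci.yml — note that the top-level keyword is hypothetical (it never shipped in this form) and all names and fields here are illustrative only:

```yaml
# Variant 1 (hypothetical): a top-level `needs` declaring the upstream
# project/branch directly, outside of any job.
needs:
  project: group/project-a
  branch: master

# Variant 2: a dedicated job inside pipeline B that owns the subscription.
# The `subscribe` keyword is illustrative, not a real GitLab CI keyword.
watch_upstream:
  subscribe:
    project: group/project-a
    branch: master
```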
A
It's like a chicken-and-egg problem, where you need the job to be running in order to know that the subscription exists. Maybe that's an implementation detail, but it does seem kind of strange that a specific job within the pipeline would be responsible for triggering the pipeline that contains it.
A
The other thing we were just talking about was: there are, I guess, extensibility benefits to doing it in a more general way where, in the future, the pipelines could be running at the same time, or different status attributes could be available. Is any of that necessary? Does everything I've said so far make sense and sound correct?
B
That's not what I'm asking about. In order to set up a relationship — a subscription; let's call it a subscription, and let's use that single term to refer to all the things we are talking about. So, in order to set up a subscription, we have to do that at some point, right? We're doing that whenever someone modifies the .gitlab-ci.yml and pushes a new configuration, and the simplest way to actually create the subscription is when the pipeline gets created and we evaluate all of the jobs we have, right?
A
Help me understand the other way that you could build this feature. Maybe it's a bad idea, but just from an understanding perspective — humor me. The other way you could do this is to have some kind of event in the system: when a pipeline finishes, things could happen, and a project could subscribe to that event — basically a pipeline success event that it could subscribe to.
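A minimal sketch of what such an event subscription might look like, assuming a hypothetical `on:` keyword — nothing like this exists in GitLab CI; it only illustrates the idea being floated:

```yaml
# Hypothetical event-based syntax: run this project's pipeline whenever
# a pipeline-success event fires in the upstream project.
on:
  pipeline_success:
    project: group/project-a
    branch: master
```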
D
Yeah, it could be that — I don't know, is that correct? It could be that we are using the wrong tool to define a subscription, because the model of the .gitlab-ci.yml is runtime configuration. It means we have to run a pipeline beforehand, before discovering that the pipeline actually wants to subscribe to something else. That might cause a pipeline — the downstream pipeline — to start and wait there forever until, eventually, some event happens in the upstream pipeline.
D
What we are trying to do is to create a subscription, which is more some sort of project-level setting that says: when something happens in a different pipeline, then always trigger a pipeline in this project. Maybe you can then define some rules on top of that. Another idea would be that every time someone pushes changes to the .gitlab-ci.yml, we evaluate the subscription and persist it as configuration for the project, so you know it will always run and will always be created.
B
That's a very interesting idea, but then it means you would need to create completely different code for handling pushes to the .gitlab-ci.yml without actually creating a pipeline — some way to detect that a commit contains any version of the .gitlab-ci.yml and just process it to persist some configuration without creating a pipeline. That's really difficult, and probably more complex than what we are doing right now plus the iteration that was proposed to actually make it useful for the pipeline.
C
Actually, let me chart these different approaches, because it seems there are different approaches to the solution, and they imply slightly different outcomes in what we can actually achieve. Sorry — Jason, you posted a message showing two different syntaxes, right? Yeah.
C
I think the best way to start looking at these approaches is to think about the question of when we trigger the downstream pipeline: how we persist the data for the triggering and on what conditions — how we track where to trigger a downstream pipeline, and for what branches or tags we trigger it — because that applies to whatever solution we choose, whether it's top-level needs or something else.
C
We
need
to
have
a
solution
that
can
work
on
the
scale,
so
that
can
like,
because,
like
not
going
to
write
every
project
for
every
pipeline
finish
to
figure
out
whether
this
practice
configuration
to
like
to
trigger
downstream
pipeline.
So
we
have
to
like
have
some
constraints
that
define
that
on
this
particular
cases,
we
evaluate
these
like
needs
conditions,
and
we
may
be
trigger
this
pipeline
because
of
these
nice
conditions,
she's
a
correct,
yeah,
I
think
so.
No
yeah
go
ahead.
C
So one way of defining that is what you, Fabio, proposed. There are two different ways to solve it: one is building an interface for it — having a UI with an associated database-backed model, which is basically what the design describes. And it's actually not per unique pipeline; it's that project A is connected with project B, maybe on this branch, maybe on some other conditions, right?
C
This
is
one
of
the
cases,
but
right
what
happens
if
we
have
this
configuration
as
part
of
the
autopsy
on
mode,
we
basically
have
the
Kitab
say:
I
will
on
every
commit
on
every
branch
will
never
talk.
We
cannot
really
track
like
every
branch.
I,
don't
think
so
like
we
know
they
don't
not
figure
like
1000.
C
So we're not going to do that, and it seems that the best approach is that we maybe track only master. So maybe we create the associated database model when we evaluate the .gitlab-ci.yml on master — I think regardless of whether it's part of the create-pipeline service or not.
C
Maybe
we
just
update
inspiration
when
you
push
to
the
master,
which
means
that,
like
we
have
a
new
configuration
and
just
because,
like
we
have
these
config
changes
additive
the
soon
as
we
start
pushing
these
standards
because
start
tracking
these
changes
in
the
database,
so
so
there
if
I,
continue
et
so
it
kind
of
takes
on
associates.
You
can
egg
problem
because
there
is
not
to
connect
problem
because
the
first
you
start
by
creating
these
pipeline
trigger
on
the
obscene
prey
cannot
happen
before
you
post
a
CIA
mole
to
the
downstream
project.
C
When
you
push
to
the
dancing
project,
which
was
not
my
greatest
creation
and
then
the
sister,
then,
when
I've
seen
bright
triggers
again,
it
would
trigger
the
dancing
project
because
you'll
have
that
variation
already
created.
So
it's
more
like
the
ordering
of
their
of
the
operations
that
happens
right,
yeah.
C
All of that depends on the project, right? We depend on an upstream project because we always want to automatically bump our project to use the latest released version, and we want to do that by testing our whole application whenever the release gets updated. So this is one of the usages of upstream dependencies, and in the big ecosystem of GitLab.com it makes it really convenient that you have, maybe, some open-source library providers.
C
But
then
you
have
maybe
private
consumers
of
these
libraries,
and
we
want
to
have
as
much
optimizing
as
possible
to
update
always
to
third
like
to
the
latest
version
and
basically
like
how
it
would
work
in
this
case,
because
this
would
be
like
one
of
the
patterns
that
people
would
be
using.
Is
it
something
that
likely
be
it
only
for
like
private,
private
or
like
private,
maybe
210
another
projects?
Or
is
it
really
like
unlimited?
So
like
unlimited
in
terms
like
everyone
can
trigger
everyone?
C
Am
going
to
turn
this
on
a
point
but,
like
it
kind
of
like
I,
guess,
defines
exactly
how
these
features
should
work,
because
we
have
to
design
that
to
work
on
the
github.com
scale
yeah,
and
we
to
be
if
the
the
the
confinement,
the
would
have
this
case.
It
like
we
have
a
public
project
that
is
being
consumed
by
a
number
of
the
private
projects
of
public
other
public
projects.
That
may
be
triggered,
and
it's
gonna
happen
with
the
current
syntax,
because.
C
Example
would
be
grace
consumed
by
people
of
race
and
I'd,
like
the
desire
is
like
we
consume,
because
we
want
to
automatically
update
to
like
this
function
after
running
suit.
So
so
this
would
be
like
one
of
the
reasons,
and
probably
this
is
like
the
most
of
the
reasons
why
we
want
to
have
these
dams
in
upstream
dependencies,
because,
like
one
of
the
users
for
me
for
dancing
and
upstream
dependencies
is
right,
you
have
maybe
like
this
set
of
the
of
the
dependencies
which
is
like,
if
have
like
one
right.
This
is
API
backgrounds.
C
So
project
free
depends
on
the
project,
1
and
project
tool
when
project
for
depends
on
the
parade
free.
So
basically
we
depend
on
the
on
the
front.
End
depends
on
the
back
end,
but
back
and
depends
on
the
right
on
two
sides
of
the
api's
and
after
that,
maybe
like.
Let's
say
that
this
are
like
separate
projects,
project
a.
C
It's all about how you define the relationship, because you can have multiple projects that perform deployments to different types of environments without requiring you to define that as part of the front-end project. You might have multiple separate repositories that perform deployment for different types of servers, based on different conditions for when you want to deploy.
C
For
example,
let's
say
that
if
you
push
a
tag
with
a
variety,
we
just
gonna
perform
a
deployment
on
some
other
project
which
is
built
as
I've,
seen
as
a
dependency
from
this
project
to
this
upstream
project,
based
on
the
dark
that
the
new
type
that
is
being
pushed
and
we
just
wanna
perform
like
semi-automated,
maybe
canary
deployment
to
the
environment,
but
on
the
searching
deployment
we
always
gonna
use,
for
example,
like
master
of
the
priority,
because
this
is
something
that
we
want
to
do
and
another
use
case
of
that.
Like
you.
C
Just
always,
let's
say
that
my
means,
like
it's
beginning,
triggered
from
the
bottom.
You
push
a
new
version
of
the
project,
a
the
API
back-end
and
it
triggers
proxy
rerun,
so
the
test
against
a
new
pocket
and
it
triggers
my
ad
we're
answer
or
the
best
on
the
new
backhand
and
maybe,
like
you,
have
another
project
that
performs
like
this
kind
of
review
option
deployment,
because
your
application
is
are
complex
to
them
like
what
you
can
put
in
like
in
varied,
like
single
report.
So
this
is
another.
C
Is this actually a very simple triggering mechanism, or is it a mechanism that will feed a dependency graph? Let's consider the two examples of the syntax. I think that we started using needs in the context of the directed acyclic graph, so it kind of tells you that you depend on some other project. But is it really a dependency on another project, or is it more like:
C
Was
just
triggered
by
so
because,
like
I
think
it's
it's
it's
to
figure
out
if
he's
like
dependency
or
is
it
like
only
the
trigger
event
because
needs
kind
of
implies
that
this
is
dependency,
and
it
also
implies
that
you
have
some
ability
to
consume
variables.
You
have
some
ability
to
consume,
maybe
artifacts
from
this
another
project
that
would
allow
you
to
use
that
in
the
subsequent
stages
or
tops
of
that
project.
It's
just
triggered
by.
C
So
it's
not
very
nice,
it's
more
like
the
trigger
by
I!
Guess,
I!
Guess,
because
it
has
it
with
you,
I
think
it
does
in
different
names,
because
because
because
it
needs
kind
of
implies
that
it's
way
more
than
like
only
simple,
simple
event
system.
You
want
to
pass
some
data
across
these
different
pipelines,
so
those
trigger
by
allow
you
to
pass
some
details
about
who
triggered
you
or
like
how
trigger
at
you
and.
B
I wanted to comment on the dependency versus a simple trigger. We built the triggers feature, and the very first feedback that we received from users is that they need a way to pass variables there. So it's not only a simple trigger: it becomes a dependency, because the downstream pipeline now depends on the variables being passed from the upstream. So it's no different, I mean.
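With GitLab's multi-project trigger jobs, that variable passing looks roughly like this (project names are illustrative):

```yaml
# Upstream project's .gitlab-ci.yml: trigger the downstream project's
# pipeline and pass it a variable — the downstream now depends on it.
trigger_downstream:
  stage: deploy
  variables:
    UPSTREAM_SHA: $CI_COMMIT_SHA
  trigger:
    project: group/downstream-project
    branch: master
```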
C
So I guess it would not differ in the syntax — it would still be a relation between two different pipelines that are being triggered, and it would just pass some contextual information about the other pipeline that fired it. That makes sense to me.
C
Let's say that our project "staging" deploys different components — it deploys the API part and the payment part; sorry, let's maybe say for simplicity that we have a backend and a frontend, two components that we are depending on and that I want to deploy. So that's one way. Another way of looking at that is a more dependency-oriented approach, where we say that this project needs something from another project, which in our case is:
C
This
this
approach
looks
about
at
like
the
direct
looking
at
independent
projects
as
a
dependency
of
your
project,
instead
of
like
the
triggers.
My
kind
is
because,
like
beside
that,
you
need
some
very
specific
versions
of
the
front
end
and
back
end
to
be
created
because
they
they
only
like
that,
can
write
work
together.
So
you
may
say
that,
like
I
want
to
deploy
this
staging
every
time
when
someone
else
like
bums
their
dependencies,
for
example,
let's
say
that
I'd
run
upon
that
on
any
Tarkus
kind
of
at
work.
C
Maybe
branch
master
so
like
if
they're
going
to
be
like
a
new
trench
on
the
branch
master
or
the
new
trends
of
the
branch
the
front
end.
We
were
just
gonna,
deploy
the
latest
version
of
this
depth
of
these
dependencies,
but
in
a
way
that
like
we
could,
for
example,
trigger
discipline
and
pipeline
as
soon
as
like
this
part
and
pipeline
starts
running
and
you'd,
maybe
could
wait
for
these
dependent
pipeline
to
finish
and
succeed
before
like
we
would
start
to
trigger
our
settings?
Oh
so
like,
then,
we
could
take
a
look
at
that.
A
How would it refer to which particular instance it cares about, if it didn't trigger it? Can you ask the question again? Yeah: how do you define — because there are always pipelines running in these systems — how do you know which instance of a pipeline in the other project to use, if there are multiple running in parallel? I can see now why you said this is the most complicated feature we've ever built.
B
We
do
it
this
way
yeah,
but
I
just
wanted
to
mention
that
all
the
use
cases
we
talked
about
it
actually,
never
the
simple
trigger
mechanisms,
it's
almost
always
the
dependency
oriented
approach
and
that
that
simple,
triggering
mechanism
seems
quite
simple
and
it's
easier
to
understand
how
it
works.
But
there
are
so
many
hidden.
A
And I guess the second point would be that this feature is interesting, but it's also not one of our most important features — not one we would want to spend this much effort building. It was sort of meant to almost be a quick win. For this kind of feature, the ROI isn't there if it's this big.
C
What do you mean? What questions? I don't have any questions about the different approaches — like where a given approach would fail or where it would succeed.
C
Consider that — I think on the backend side it's not that complicated. It's really about thinking about the problem differently; it's more a different way of describing exactly how things behave, because we have bridge jobs that are basically suitable for either a dependency-oriented mechanism or a trigger system.
B
It's
something
that
we
don't
have
at
all
in
any
form,
and
it
actually
does
not
like
it's
more
simple,
but
the
simplicity
might
be
false.
Oh
you
know
it.
It
actually
has
almost
the
same
set
of
problems
that
the
dependency
oriented
approach
right,
but
these
problems
are
more
hidden
and
are
not
visible.
That
easy.
C
So
like
there
is,
there
is
a
set
of
the
problems
that
is
like
common
how
we
like
prevent
explosion
of
the
pipe
noise
like
we
cannot
really
like
trigger
one
median
of
the
pipelines,
because
one
project
it
finish
on
the
task
independence.
It's
like
it's
not
realistic.
In
any
case-
and
this
is
this-
is
one
of
the
expectations
right
in
what
cases
like
the
trigger
mechanism,
a
necessary
entered.
Mekinese
is
working
on
because.
B
A
very
interesting
problem,
so,
let's,
let's
suppose
that
we
have
the
top-level
needs
and
we
can
define
there
that
we
depend
on
the
upstream
project
or
perhaps
depend
we
need
the
upstream
project
and
we
want
to
trigger
the
downstream
pipeline
when
the
pipeline
succeeds
on
master
in
the
Africa
right
and
we
commit
that
to
the
github
CRM
in
our
and
our
up
street
downstream
project.
The
project
that
we
are
working
on
has
1,000
branches,
the
gate,
laps,
Here
I
am
who
gets
propagated
to
all
the
1,000
purchase,
and
do
you
this
this
way?
B
Then, if you want to configure the relationship in a way that depends on some kind of status — perhaps you want to run something when the upstream pipeline failed, because that's the event that's interesting for you — this way you can't really use that information in the pipeline. When we create that relationship and define it outside of the .gitlab-ci.yml, can we consume artifacts, can we consume variables? You know, the relationship is not really visible to the .gitlab-ci.yml, yeah.
C
So actually, I'm starting to like this UI approach, because you have to explicitly define the dependency, so there are no surprises: there is no dependency on an upstream project unless you defined it. And maybe it's much better for us to define what you are depending on — it could be a given branch or something — so that it would maybe work like the trigger mechanism we have today, but in the other direction, for the other project.
C
That
would
maybe
inject
some
set
of
the
variables
so
like.
We
would
have,
for
example,
a
little
like
a
pipeline
schedule,
but
more
like
a
pipeline
I
I
playing
that
like
more
like
praying
dependency
for
the
pipeline's
where
you
define
I,
depend
on
this
project
under
turn.
This
on
this
branch
and
I
inject
this
CI
variables
into
into
my
pipeline.
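As a hypothetical sketch, such a project-level "pipeline dependency" setting — stored outside the .gitlab-ci.yml, edited via the UI, and not an existing GitLab feature — might hold something like this (all field names are illustrative):

```yaml
# Hypothetical project-level settings record, not .gitlab-ci.yml content.
pipeline_dependencies:
  - upstream_project: group/backend
    upstream_branch: master
    trigger_on: pipeline_success
    inject_variables:
      BACKEND_SHA: $UPSTREAM_COMMIT_SHA
```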
C
It's
not
part
of
the
configuration,
but
maybe
this
is
the
only
way
to
200
scalability
factor
and
usually
like.
We
assume
that
people
just
cannot
have
like
one
two
three.
This
can
do
dependencies.
So
maybe
it's
just
enough
because
it
will
not
configure
them
on
basically
on
the
on
the
parent
project.
I
mean
like
if
use
parent
fork
relationship,
but
it's
still
gonna
be
fine,
isn't
gonna
propagate
to
Forks.
C
Someone used that flow before: you would basically define that this pipeline depends on another project, but it was dependency-only, if I remember correctly — in the sense that you depend on a number of other projects and you wait for those projects to finish before you would run your own. So it's not a trigger mechanism.
D
Think
the
dependence
approaches
it's
a
lot
nicer
for
maintainability
of
projects,
because
the
project
knows
what
depends
on
and
not
the
upstream
pipeline
doesn't
know
where
even
a
library
is
being
used,
it
can't
keep
kondal
and
anywhere
what
that
might,
but
it's
been
used.
So
it
makes
more
sense
to
the
kind
of
dependency.
B
So one other idea is that you can basically solve the scalability problems with only/except, the same way we are excluding jobs from being run in forks right now. We do have some jobs in GitLab CE and EE that can only run in the parent project and can't run in the forks, but this should actually be the default in that setting.
C
Yes, but the question is how we have sensible defaults, because I think people may start using this in the future without knowing the implications of it. We should figure out a way that makes this feature safe for us to run at scale, but easy for people to use. And I think a trigger-based, event-based system is much clearer in that respect: basically, your pipeline is going to be triggered by some other pipeline. It's not really that this pipeline needs another pipeline; it's basically a trigger system.
C
It's not very true, because at the top level, needs would mean "I need another job to be run before mine" — maybe; this was one of the proposals for the directed acyclic graph — and then you have a kind of semantic collision: needs at the top level means something completely different than needs for the directed acyclic graph. And I think that needs is very clear for the usage of the directed acyclic graph.
C
So the semantics would be false. A triggered-by actually makes it maybe clearer that you kind of want your project to be triggered by someone else, and maybe as part of that we could say it doesn't work on forks. But it also means that for triggered-by we have to replicate a lot of features, like only/except or rules, which we are working on right now, because the new trigger doesn't have these extensions to the syntax.
C
So,
even
even
in
the
first
iteration.
We
cannot
really
like
provide
some
simple
defaults
or,
like
we
don't
kind
of
provide
like
the
like
configuration
for
that
underpin
necessary
entity
already
have
only
accept
and
force,
but
the
default
configuration
of
onion
except
he's
like
run,
always
not
run
in
the
specific
condition,
and
we
comes
now.
We
cannot
really
make
if
the.
C
I'm not sure it can be acceptable if there are no sensible defaults, because this feature is very easy to get out of control, and I'm very afraid for system stability, and of the burden on the team if we rush on shipping this feature and break things. So I think that sensible defaults are kind of a requirement for us to ship this feature.
B
My laptop just disconnected from power for a moment. I think that the use case is that you can basically check, inside your pipeline, what's the status of the latest pipeline in a given project that you depend on. I think we talked about that; it's probably a very narrow use case, but it kind of advocates for finishing this in this way.
C
Right
because
my
perception
is
like,
if
it's,
if
it's
almost
done,
I
would
refrain
of,
like
not
sleeping
that
I
would
shoot
that
I
would
write
in
mine
deficit
better,
and
let
people
really
ask
if
this
is
something
that
is
useful
for
them,
and
documents
like
different
outreach
approaches
to
the
the
upstream
downstream
dependencies,
because,
like
this
problem
is
like
it's,
it's
very
complex
to
figure
out,
and
this
actually
is
like
very
close
for
us
to
getting
the
dependency
oriented
approach.
But.
B
So that was originally proposed to be, you know, just the intermediate step towards implementing the dependency-oriented approach, and at this moment it just mirrors the status of the latest upstream pipeline whenever we are running the downstream pipeline. So now you can have a job that has the status of some other pipeline.
C
So basically, this approach was really a stopgap for the more dependency-oriented one, because technically, if we trigger downstream projects, it would allow us to hook to the currently running pipeline. But there is a usage pattern for that, right: if we hook not to the latest running but to the latest successful pipeline, we could basically pass artifacts and the details about the latest successful pipeline in a stable manner.
C
As part of the pipeline you are actually depending on it — it renders a dependency on another project — but this dependency is now very hidden in the script that we're running; it's not visible in the system, because it takes whichever pipeline happened to be running. But that was the first iteration: let's hook to the latest. The next iteration would be: let's hook to the latest success, yeah.
C
You'd be able to fetch the package for the given pipeline, for the given SHA. Because when you have, for example, a pipeline ID, when you have a SHA, you kind of have a completely detached model; then, on another project, if you triggered it and passed these variables, you can post the status back to maybe the original merge request, by going through the pipeline ID, and post information there.
C
Triggers could be a solution, but is it constraining us later on for bigger use cases? I'm worried that the trigger mechanism is actually limiting us in the possibilities of how this feature could be used — yeah, both of them — because it seems that with the trigger mechanism we actually lose information about dependencies: we don't know if this is an actual dependency or not; it's more like we are watching someone else's repo.
D
I think both problems — the trigger and the triggered-by, like the dependencies — could possibly be solved by using the subscription approach in both cases; I'm not sure about the details. Because in one case, the trigger is the upstream creating a subscription for a downstream, and the other way around is: I opt in to subscribe to an upstream.
D
So it's like: okay, in the trigger case the relationship has been created by the upstream; in the other one the relationship is created by the downstream, and that can actually trigger very different behavior. For example, we can pass all the variables and all the information to the downstream pipeline when we actually own both projects, the upstream and the downstream — that's a key difference.
C
So the underlying architecture, the automation, is basically the same — it's a different way to express what you want. And in fact, the way that you express what you want to do defines the future extensibility of the feature and the potential use cases, and this is basically the biggest problem to solve: whatever approach we choose, the underlying scalability problems stay the same, but how useful this feature is and how we can evolve it — those are going to be the key issues.
B
We should perhaps create an issue about next steps for the needs dependency, or however we call it, because we probably need to document our findings somewhere, and we need to involve people like Mattia and Brendan as well, perhaps, and decide whether we should just revert what we have already done — it's just one merge request, not that big; I think we can easily revert it — or whether we should actually merge the one that he is currently working on.
A
First
emerging
it,
but
if,
if
we're
going
to
talk
about
it
in
document,
we
need
to
be
able
to
explain
in
some
reasonable
way
like
what
problem
amid
solving
and
maybe
changing
it
to
be
able
to
access
the
last
Tyson.
Is
it
helped
in
some
way,
but
even
then
I
am
scratching
my
head
a
little
bit.
What
like,
who
would
you
know
if
you,
if
you
put
like
what
would
the
section
of
the
documentation,
be
like
here's,
how
you
can
solve
problems
related
to
you
are.
B
Making
a
deployment
pipeline
in
a
separate
project
so
in
order
to
get
variables
from
another
one
to
get
versions,
I
think
that
the
core
component
that
you
would
pass
in
that
story
is
the
Shah
of
latest
successful
pipeline
in
all
the
dependencies.
You
have
that
one
question
is
when
you
actually
want
to
deploy,
because
this
mechanic
does
not
allow
you
to
people
are
continuously.
B
You know, I'd prefer to create an issue and discuss that asynchronously from this point on; it's easier to think about possibilities and what we can do in writing, so you have more time to think about them, yeah.
A
Yeah,
okay,
that
works
I'm
gonna
set
up
a
meeting
with
a
couple
customers
to
talk
about
this
particular
issue
and
what
the
real
requirements
are
and
if
there's
a
way
to
solve
it,
without
even
kind
of
touching
on
the
scalability
issues
or
without
build
complex
features
or
build
a
new
event
subscription
model.
Like
avoid
all
of
that
stuff.