Description
Delivery team discusses an approach for removing complexity from deployment pipelines, and considers improvements to Changelogs
A
Okay, hello. So, this meeting is some time for us to have a look through these two OKRs in particular: coordinated deploys and the changelog feature. I think for both of them we have a similar sort of goal, which is that they both could be quite big, so our next step is to work out how much of them, or what the focus is, for this quarter. This meeting can be general discussion and capturing of questions and things.
A
A little bit of background from me on number one, coordinated deploys, and then Alessio, dive in if you want to add anything. I have based this on the number of epics we have in the delivery backlog that cover essentially the same problem, which is the complexity of pipelines. I think we're at the stage where removing some of that complexity will help us with other things, such as rollbacks and moving towards independently deploying things.
A
So it seems like a good time to try to remove some of that complexity. I think there's a diagram that captures something like that, too. That was where I was coming from with this one: we have lots of ideas, and we have lots of epics that, say, focus specifically on independent releasing of services, or that deep dive into coordinating release rules in release-tools, for example, or something along those lines. Rather than necessarily doing one of those deep dives, I was thinking that it would be nice to try to do a horizontal slice: something that gives some value, but also allows us to bring in some of the testing. How we can actually make changes to our pipelines and validate them would also be a nice addition, if we could find a small enough piece to fit into this quarter.
A
Beyond that, though, I am very open to suggestions and ideas that you all have around, basically, how we can make things easier for ourselves in the future. Alessio, is there any other context that's worth sharing at this stage?
B
Yeah, I do agree that making it testable is something that we definitely need, but I'm not sure it's something we can do while moving it. What I'm thinking here is that if we take the current situation and try to just implement testing around it, it will be, I think, a quarter of effort just doing that, which would be completely wasted if we are thinking about reorganizing things around it.
B
So what I'm thinking is that maybe we should start tackling removing complexity, keeping in mind that we also want to be able to test the release process, so that we can jot down issues or, if something is easy, start changing it, but without explicitly planning for the testing, because I think it will be a lot of work, really a lot of work. I don't know if others have...
C
...ideas around this, but I think one of the challenges is that I'm sort of assuming we're not going to be messing with the Omnibus CI configuration and their pipeline, and that's a big part of the current release process. I assume that, for our first iteration, we're going to continue to use tags to create packages and to release CNG. All of this is baked into those projects' CI config, and I don't know if we want to take on refactoring and changing that. I think that makes it very difficult for us to come up with a testing scheme.
B
Yeah, I was thinking that Skarbek's proposal, which is linked somewhere, I will find it, is very nice, because it breaks down the connection points. It pinpoints where Omnibus triggers the deployer, things like that, with code references for what is happening. The proposal there is something like: we've wrapped this in feature flags and environment variables forever, so we don't remove it. We migrate, we start playing with the new architecture, and eventually, later, we remove it.
B
So I think that, while we are in this mixed state, we should just cut the connections where those connections are triggers, and move all of those triggers back to release-tools. Once we have something that is less deep in terms of the chain of triggers, then we can pause a bit, think "okay, there's something we can improve here", and move on. Because something else I was thinking is that, as Amy said, this is really a multi-quarter effort if we want to go from beginning to end.
A
One thing I will just say on the testing side: I fully agree that it's a huge thing. As we go through and change pipelines, we should definitely keep in mind how we will know our changes are working. I think Yorick has probably thought the most about the challenge of changing things, and about the length of that feedback cycle, since at the moment we have to test changes by basically doing deploys.
B
Well, there's also another thing that came to my mind when I was validating some information with Yorick earlier: we keep discussing this pipeline as one general thing, but, at least at the deployer level, there are two big distinctions between tagging (monthly, patch, security, whatever, RC releases) and auto deploy, which is otherwise simpler. So, I don't know, thinking about the future here.
B
What we are doing now is creating the stable branches out of a package that we already built, which is our previous auto deploy build. So it may be worth thinking, and I think the code is already doing this, that the only process we care about is the auto deploy one, and that shipping packages to customers is just repackaging something we have already built.
B
You would say: this is a package, deployed in this environment, which is what the deployer is doing. So maybe, in this regard, the changes between auto deploys and regular releases are minimal, and we can still visualize everything at the release-tools level. But I think it's worth mentioning and thinking about, because these are two different processes, and the feedback loop on auto deploy is really short.
D
I think it's worth highlighting that if we're going to talk about deployments, it's best if we pretend for a moment that we don't have any self-hosted packages, that that whole process doesn't exist. Because regardless of what we do to deploy to gitlab.com, that packaging process will more or less remain; maybe not the same, but quite similar. We'll always have to tag, always have to create changelogs, and have to build packages for like 200 different distributions.
D
So
I
think
they're
also
having
a
single
pipeline,
for
example,
would
be
nice,
but
it's
less
of
a
issue.
I
think,
because
we
do
it
less
often
like
we
don't
do
this
every
single,
multiple
times
a
day,
because
I
think
if
we,
if
you
try
to
build
something
right
now
that
works
for
both,
I
don't
think
we're
gonna
get
anywhere
because
we're
gonna
get
sort
of
stuck
because
we
have
these
two
different
process
with
their
own
needs
that
we're
trying
to
sort
of
shoehorn
into
a
single
thing.
C
I tend to agree with that. I would rather fix auto deploy first and then maybe take another look at regular releases, because there we have something that we know works. Also, I just think it's so much easier to iterate on the auto deploy stuff than on regular releases. I'm afraid to touch regular releases because, like we said, it's hard to test, impossible to test really, and it's so infrequent.
C
For example, with the API code changes, that was really hard, because we didn't really know they worked until we actually did the release, and I don't want to do a major refactor and have to deal with that again if we can avoid it.
C
Yeah, I put this together because I wanted to give you all an idea of what I'm thinking about, and also to hear whether anyone thinks we shouldn't be doing something here, whether this is maybe too much for our first iteration, or whether you have other ideas.
C
I'm not sure if that's a good idea, and that's illustrated here at the top with GitLab. So Gitaly, maybe, would just do a version update to GitLab when it has a green commit on master, and then the last CI job would trigger release-tools, and then it would go through the full auto deploy. And then, in the box over here on the left, we have more and more projects that are deployed independently of Rails; more than you'd think, we have five right now, and that might grow.
C
So I think we do need a scheme, because this is quite painful for Registry and KAS and these other projects (and Pages will be one of them soon), where you have to open multiple MRs to do version bumps. So I was thinking that maybe we just have a generic way of doing that.
C
You can see if anyone left comments here... yeah, "talking only about auto deploys"; yeah, "dedicated service pipeline"; "mutex", yeah. I don't know, but is there anything here that anybody thinks we shouldn't be doing, or that is maybe too much, or is there a smaller iteration we could take?
B
No, no, my point here is that I have some follow-up questions. For instance, here we are talking about, I think, Kubernetes only, so I'm looking at Pages.
C
Yeah, so we could do Pages like we do with Gitaly, I guess, but I was thinking Pages is more similar to GitLab Shell, Mailroom, and Registry, because it's going to be running in pods in the Kubernetes cluster and it's sort of a standalone service.
B
Well, I'm just thinking: how far are we from this? It may not be actionable yet, because Pages is not running in Kubernetes. And also, at the very beginning, triggering release-tools from the latest merge on GitLab: thinking about the current lead time, the time it takes us to deploy something... Basically, there is a point zero, a first requirement, which is implementing some level of global mutex; we need a lock.
B
Whereas if we forget about this, so we don't change the entry point of the release, we still tag, we tag with a scheduled pipeline, and we decide what we tag and when. Then maybe it's easier to basically build only the release-tools line, the one that is on the right.
C
Versus just having a job that triggers release-tools when there's a green commit: other than having a little bit more control over the frequency of the runs, like being able to set the scheduled job to only run so often, in that sense it's kind of the same to me. I couldn't really think of a difference between the two. You still have to have locking, but maybe that locking would be done at the release-tools pipeline anyway, right? What's the difference?
B
Yeah, the difference... well, what I'm thinking here is that, if you do have control over the frequency, you can just set a frequency which is long enough to not overlap.
C
Yeah, I was actually thinking that you could just say: okay, there's a deployment in progress, sorry, I'll just skip. Which would be essentially the same, right? You would have a trigger that would basically be a no-op if there's a deployment in progress. But yeah, maybe a scheduled... I think, yeah, maybe a scheduled pipeline makes more sense.
D
So I think, if I look at this graph, for example, probably the first step would just be to get things into release-tools. Instead of having a separate deployer and so on, we push that into one pipeline, because we need that anyway, regardless of what approach we take for triggering it.
D
I
do
agree
that
eventually
we
want
to
have
a
system
where
you
know.
If
you
push
to
workhorse,
we
can
deploy
just
workhorse
and
not
everything
else.
You
know
which
you
could
do
by
passing.
It.
D
The deploy process that we have today is going to take significantly longer than it takes to push a whole bunch of commits, so you just end up piling stuff up. So I think the order will basically be: get everything into one pipeline, but keep the current system where we decide when to deploy.

Then I think we need some sort of official mutex-like feature, where we can say: hey, if a pipeline has this field somewhere in its CI config, only run one of them at a time. We have this functionality where, if you have multiple pipelines on the same branch, you can skip redundant pipelines, but I've never quite figured out what "redundant" actually means in that context, and I haven't seen it work reliably.
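For reference, GitLab CI does have one primitive along these lines: the resource_group keyword, which allows only one job holding a given resource group to run at a time within a project. A minimal sketch, where the job name, script, and group name are illustrative rather than the team's actual config:

```yaml
# Sketch: resource_group as a per-project mutex for deployment jobs.
deploy-production:
  stage: deploy
  resource_group: production   # only one job holding 'production' runs at a time
  script:
    - ./bin/deploy gprd        # hypothetical deploy script
```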
D
There's a third problem: depending on your runner, when you cancel a pipeline, it won't actually be cancelled immediately. The best example is if you use VirtualBox, which we don't, but if you do, cancelling a pipeline that uses VirtualBox can take 30 to 60 seconds before the job actually gets cancelled, because VirtualBox isn't always the fastest.
D
So ideally you don't cancel pipelines; you just make sure they never start to begin with. I think that's a feature that's probably useful to multiple people or multiple organizations. It's one that we currently have implemented sort of ad hoc in the deployer pipeline, I believe using an environment variable. And I think once we have those two, then we can look at how we are going to feed release-tools the data to determine what to deploy, and when.
B
The other option is just waiting. The problem here is... so the point is that you want to make sure that you don't have more than one production deployment running, but it really depends on how you build the job. If you have a way of saying "this is a production deployment", maybe it will work, now that I think about it. I remember jarv trying this in the deployer, but he had the problem that, at the deployer level, the deployment is spread across many jobs.
B
But
if
we
move
this
to
a
triggering
stage,
which
is
here
so
at
release
tools,
we
can
say
this
trigger
triggers
production
deployment,
and
so
it
triggers
in
weight.
And
so
while.
C
Yeah,
this
is
what
I
was
thinking
like
these
trigger
jobs
here
would
have
resource
groups,
so
that,
and
this
would
trigger
in
weight
for
g-stage,
canary
and
g-prod,
so
yeah
those
would
have
resource
groups,
so
you
couldn't
have
more
than
one
round
at
a
time,
and
I
think
that's
one
of
the
advantages
of
having
this
coordinator.
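A rough sketch of what such coordinating trigger jobs could look like in release-tools, assuming resource_group can be combined with trigger jobs in the GitLab version in use; project paths and job names are illustrative:

```yaml
# Sketch: release-tools trigger jobs that each hold an environment's
# resource group and wait for the downstream deployment to finish.
deploy-gstg:
  resource_group: g-stage
  trigger:
    project: example/deployer   # illustrative downstream project path
    strategy: depend            # "trigger and wait": mirrors downstream status

deploy-gprd:
  resource_group: g-prod
  needs: [deploy-gstg]
  trigger:
    project: example/deployer
    strategy: depend
```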
B
Over the whole pipeline, you mean? No, I mean: this is not persistent. Let's say you have some kind of problem, a 503 from checking the pipelines or whatever, and your job is killed, but the pipeline is still running. Whereas if you have some sort of distributed lock system, you can say: yeah, I started this, so it can't start elsewhere; and when you run the thing again, it says: oh, wait.
A
Let's assume we can get that. I think we should define what behaviors we want, and then we can work out the implementation details; that sounds sensible.
A
I
what
about
so
alessia
you
mentioned
that
this
looks
very
large.
Was
there
other
stuff
that
you
feel
like?
We
should?
Oh,
I
guess
for
me,
I'm
kind
of
interested.
What's
the
kind
of
order
like,
I
think
this
looks
like
absolutely
the
approach
like
the
direction
we
want
to
go
in.
So
what's
the
order
that
we
tackle
these
things,
because
that
might
help
us
scope
it
for
a
quarter.
B
I
think
that
moving
triggers
outside
of
omnibus
and
and
cng
should
be
just
the
the
first
thing.
So
just
really
following
what
was
scarborough
in
in
that
plan,
which
I
think
you
linked
or
someone
else
linked
in
the
in
the
agenda,
was
that.
B
B
B
D
C
A
...further down the list of things we tackle, just because I think there are some unknowns on those. I think this stuff unlocks it, and maybe it's trivial later, but I would rather not mention it in this quarter's plan, just in case it ends up that we're not there yet.
C
I
think
I
think
the
challenge
there
is
we
need
to
persist
the
version
we
came
from
here
and
then
re-trigger
with
that
version.
Maybe
that's
not
too
difficult
like
we
could
just
use
like
an
artifact
or
pass
like
ci
variable
for
that,
but.
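As a sketch of the CI-variable option mentioned here: variables defined on a trigger job are passed to the downstream pipeline, so the version could be pinned when re-triggering. The variable name and project path are illustrative:

```yaml
# Sketch: re-triggering a pipeline with a persisted version.
retrigger-deploy:
  variables:
    GITLAB_VERSION: "$CI_COMMIT_SHA"  # illustrative; could also come from an artifact
  trigger:
    project: example/release-tools    # illustrative project path
    strategy: depend
```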
C
Okay,
so
is
there
anything
else
here
that
we
don't
like
the
stuff
on
the
left,
I
think
is
probably
not
something
we
need
to
prioritize,
but
all
of
these
are
going
to
be
done
manually
like
right
now,
registry,
mail,
room,
kaz
and
soon
shell
we're
these
are
all
deployed
manually
like
we're.
At
least
you
know,
cas
mail,
room
and
registry.
C
It's the same thing, basically: we have the deployer pipeline, which then triggers the k8s-workloads pipeline, just like we have now, except that it's single environments, not a long chain. And that allows us to move the deploy checks and the QA into the release-tools pipeline, instead of having them done from the deployer, which I think will be nice.
C
Yeah, it happens after migrations, which are done by the deployer, and we wait, just like now: the deployer pipeline triggers k8s-workloads and waits until it's completed, and then it moves on to post-deploy migrations, and then that's done.
B
I want to check the intermediate steps, because what is written here is that intermediate checks will still be triggered from the deployer back to release-tools, and instead we may want to have the checks directly in release-tools, so that they...
C
Yeah, I think that's where we will eventually end up. It's tricky with migrations: migrations would then get triggered by release-tools as a separate job, I guess, and then after that we would trigger VMs and Kubernetes. So I'm not sure exactly how that will work, but yeah, I think so.
C
Yeah, I mean, eventually the deployer won't have anything to do except migrations and post-deploy migrations, and you could argue that we don't even need Ansible for that. We could have it done directly from release-tools, in which case we don't have any Ansible at all; we just get rid of it completely, and everything is done from release-tools. I think we'll just gradually shift in that direction.
F
I do have a question. It is not clear to me whether we are going to get rid of the auto deploy branch and deploy from master for this coordinator, or what the step for that is.
C
That was my thought; I was kind of baking that in as an assumption. But maybe we can't just bake it in; maybe we should do that first, with what we have, and then go here.
A
The only concern I have about it is: what impact will that have on releases, on monthly releases, like the way we tag things? Will we have to make changes to the tagging stuff?
D
No,
so
the
the
way
it
works,
the
stable
branches
are
created
based
on
whatever
we
have
deployed,
the
exact
source
differs
so
for
forget,
lab.
We
use
the
deployments
feature
and
the
the
tracking
of
that
we
basically
get
the
latest
shot,
that
we
have
in
production
and
create
the
stable
branch.
Based
on
that,
I
believe
for
gita.
D
But
that
all
happens
either
based
on
the
shell
or
some
version
file
in
the
the
master
branch,
and
so
it
is
completely
detached
from
the
older
deployed
branches
has
no
knowledge
of
it
in
any
way.
A
So
at
the
moment,
when
we,
when
we're
preparing
a
release,
we
capture
quite
a
lot
of
like
auto
deploy
branch
names
and
things
like
that
like.
How
would
that
stuff
change.
A
Cool, okay. We have the issue, or the proposal, to switch over to master in the planning board, so maybe it's an action for everybody to take some time to read through that and check that it looks sensible. Would this approach work without doing that first? Like, would the move to release-tools work if we were still doing branches?
C
I think it would, yeah. I don't think there's any problem, because the finding-the-green-commit logic would be the same as it is now, where it just looks at whatever auto deploy branch is the current one. So I would like us to move to master sooner, or get rid of the branches, but I don't think we need to.
A
Cool, yeah. I was just checking in case an unexpected blocker comes up, but I agree; let's see if that makes sense to do as a first step.
G
I guess I have kind of a broad question: how do we plan on visualizing these changes? Are we just going to add pipelines to release-tools, or are we going to create some sort of app that acts as a wrapper and shows us all the fancy things?
D
I would imagine it's basically going to be part of the release-tools pipeline config, but with the jobs all being optional, based on whether we are actually running a deployment and so on. In other words, I think the best way to picture it is that you take the existing deployer pipeline as it is and basically copy-paste it in. Well, not quite, because we wouldn't have all the individual...
D
I
don't
know
if
this
works
cross
instance,
if
we
could
get
the
downstream
pipeline
to
show
up
in
release
tools.
So
it's
not
cross
instance
because.
D
For the other place, ideally, if you have a single pipeline, you would also see: oh, here's the job on dev that's actually building the whatever. But I think, at least for a while, that's not going to be possible, because we need some sort of cross-instance way of linking pipelines together.
B
This, Skarbek, is exactly one of the reasons why we are doing this, because what we will have at the end of this is one pipeline in release-tools that shows you everything. Okay, you cannot get the downstream pipeline attached to the tagging job, but you see something like: here I tagged, and I was waiting. So you may have a link, but everything else should be visualized in the same pipeline. And now I remember a conversation that jarv and I had in the past, where we were actually thinking of...
B
We
don't
tag
so
there's
another
point
here,
which
is:
how
can
I
find
the
right
deployment
pipeline?
So
the
idea
was
that,
instead
of
when
we
tag
today,
we
tag
omnibus
and
the
idea
was
why
don't
we
tag
release
tools,
so
the
deployment
pipeline
runs
on
a
target
pipeline
in
release
tools,
so
that
when
you
want
to
see
what
deployed
xyz
you're
looking
for
dead
talk
on
release
tools-
and
you
have
a
pipeline
which
will
attack
omnibus
and
and
do
everything.
A
So I think we have some follow-up things to do, but as a rough scoping we're thinking: we'll somehow move the trigger to release-tools, and we'll build out the release-tools pipeline. How much of this stuff we get to is, I think, to be determined.
A
Is
that
it
or
are
we
also
fancy
trying
some
of
the
independent
deployments
as
well.
C
I
actually
think
we
can
do
that
in
parallel
or
not
at
all.
Initially,
like
I
don't
know,
I
think
starbuck
you
were
working
on
the
registry
auto
deploys
I'm
not
sure
how
that
ties
into
this.
Maybe
we
can
start
by
thinking
about
how
we
would
do
this
for
registry,
since
that's
probably
the
highest
priority
for
all
these
projects.
A
They
are
a
little
bit
further
out.
I
we're
working
with
them
to
get
qa
tests,
so
they
probably
I
mean
it
might
be
ready
sort
of
later
in
the
month,
possibly
maybe
it's
a
bit
further
out
than
that
so
yeah,
it
might
not
be
super
soon.
I.
C
Yeah, so for now, I guess, those projects stay manual like they are now. We have a templated issue that kind of tells them what to do, and I think people are complaining a little bit, but it's probably fine.
A
Okay, yeah. I think that's probably one where we can just note that we can make it easier in the future; let's not tie it into this OKR unnecessarily, and if we get to it, great, and if not, that's also fine.
A
Cool. Was there anything else anyone wanted to bring up on the coordinated releases stuff, or should we move on to changelogs?
A
Whatever makes the most sense; everything can be edited in there, so whichever one would be the best approach.
A
So that OKR in particular we can use as a kind of starting point; it's completely fine to update it all. In fact, I certainly should update it once we've worked out a little bit more how we want to frame this for the quarter, so that it's a little bit more accurate.
A
Cool
okay,
so
we
haven't
got
all
that
much
time
left.
So,
let's
move
along
to
changelog,
so
we
have
a
similar
kind
of
a
similar
question.
I
suppose
around
changelogs
euric
has
already
got
the
epic
up,
but
I
think
the
question
from
my
side
really
is
like
what
what
makes
sense
for
us
to
tackle
this
quarter.
D
Yeah, sure. It started out basically as a proposal like: hey, let's use GitLab, we're building GitLab, to generate changelogs. Then I essentially had to start thinking about it more, and it became: oh, wait a bit, there are a couple of things that we need to take into account.
D
I
think
one
of
the
big
challenges
is
that
the
way
we
do
releases
and
change
logs
is
very
different
from
many
other
projects,
in
the
sense
that
we
have
these
yamaha
files
right
now,
do
we
concatenate
them
into
a
change
block
using
some
api
code
etc,
and
then
we
want
to
move
to
this
system
where,
ideally
much
of
that
or
all
of
that
is
done
for
us.
D
But then you get into these problems where we have a particular format that we want, for example in the markdown, which may not be the format that other people want, and so on, through my sort of thought process.
D
I
kind
of
realized
that
if
we
want
to
keep
these
markdown
files,
we
can
never
fully
automate
this
right,
but
so
we
can
never
fully
build
it
into
gitlab,
because
we
have
this
issue
that
these
markdown
files
are
updated
quite
a
bit
before
we
tag
or
you
know,
release
which
means
we
need
the
data
ahead
of
time.
And
if
you
have
the
system
in
gitlab,
where
you
create
an
api
call,
and
it
will
generate
your
change
log
for
you.
D
We
wouldn't
have
that
data
ahead
of
time,
because
we
would
have
to
create
a
release
first.
Basically,
so
that
kind
of
led
me
to
the
first
question
at
the
bottom
of
the
epic,
which
is
basically
do
we
want
to
keep
these
markdown
files
or
do
we
or
are
we
willing
to
get
rid
of
them?
D
Because if we get rid of them, things get a lot easier: then we can tag our release and so on, and maybe say, as part of that: hey, also generate a changelog and attach it to the tag, or to the release, whatever.
D
Although
then,
you
still
get
issues
like
how
are
we
going
to
present
the
data,
because,
inevitably
we
want
to
group
things
by
feature
block
whatever.
So,
how
are
we
going
to
do
that
and
there's
different
approaches
there
they're
all
quite
difficult
in
the
sense
that
I
think
most
of
them
are
right,
are
gonna
require
a
a
change
on
how
we
work
in
in
one
way
or
another.
D
So
the
kind
of
the
issue
now
is
I'm
kind
of
trying
to
figure
out
okay.
What
are
the
options
that
we
can
take?
Would
they
work
at
gitlab,
because
I
think
that's
also
quite
important
and
would
whatever
that
solution
is
makes
sense
to
be
built
in
gitlab?
Would
other
people
use
it?
D
I think most enterprise users probably wouldn't, because they probably don't produce changelogs. So yeah, that's sort of the overview of where we're at now. The thing is, this is just my thought process, so I don't know if anybody else has a great idea, like: no, we need to do it this way.
D
So that's what I would like to discuss a bit: figure out what we can do here, and so on.
A
I'm wondering if there's a... so, our goal as it stands is to create a changelog feature in GitLab.
D
Right. So, somewhere halfway through my blurb in the epic, I thought about what we could do. Building everything in GitLab based on sort of the current requirements will be difficult, but what we could do is perhaps build some basic plumbing in GitLab that, for example, you could use to get the commits that might be included in a release, and then maybe their associated merge requests, and that sort of thing. What I found there, though, is that we have most of the tools for that today.
D
Like
we,
you
know
we
have
tags,
we
can
get
a
list
of
commits
between
two
tags.
We
we
have
the
code
in
place
to
get
merge,
requests
from
a
commit.
I
don't
know
if
we
expose
that
through
an
api,
so
we
could
use
that
to
enrich
the
data
with
you
know,
labels
whatever,
and
so
much
of
our
work
would
be
sort
of
taping
that
together,
but
then
it
wouldn't
be
really
something
we'd
be
building
in
gitlab,
certainly
not
something
that
somebody
can
use
by.
D
You
know
click
button
done
so
kind
of
it's,
not
quite
dog
fooding.
At
that
point,
although
I
think
there's
sort
of
a
trap
where,
if
we
frame
this
purely
from
dog
food
like
we
have
to
build
something
in
git
like
that,
we
use,
I
think
we
sort
of
immediately
restrict
ourselves
in
terms
of
what
we
can
do.
I
think
it's
more
useful
to
say:
okay,
what
is
actually
the
problem?
What
are
we
going
to
solve
you
know,
and
can
we
solve
that
by
building
something
in
get
lab
or
is
the
answer?
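For reference, the plumbing mentioned above appears to exist in the REST API today: a compare endpoint for commits between two refs, and an endpoint listing the merge requests associated with a commit. A sketch of taping them together from a CI job, where the project path, tag names, and token variable are illustrative:

```yaml
# Sketch: gathering raw changelog data from existing APIs.
changelog-data:
  image: alpine:latest
  script:
    - apk add --no-cache curl jq
    # Commits between two tags, via the repository compare endpoint.
    - curl -s --header PRIVATE-TOKEN:$API_TOKEN "https://gitlab.com/api/v4/projects/example%2Fproject/repository/compare?from=v13.5.0&to=v13.5.1" | jq -r '.commits[].id' > commits.txt
    # Merge requests associated with each commit (labels, milestone, author).
    - while read sha; do curl -s --header PRIVATE-TOKEN:$API_TOKEN "https://gitlab.com/api/v4/projects/example%2Fproject/repository/commits/$sha/merge_requests"; done < commits.txt
```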
D
No,
but
so
yeah,
a
very
simple
approach
would
be
you
create
your
release
and
when
you
view
a
release,
it
will
show
a
list
of
commits
since
between
that
tag
and
the
previous
stack,
that
is
a
simple
thing,
as
in
you
have
a
list
of
commits.
You
know,
that's
it
I
do
air
quotes
because
it
requires
that,
given
a
release,
we
know
what
the
previous
release
is
according
to
semantic
versioning,
for
example,
if
your
latest
release
is
let's
say,
14.0
and
then
you
release
13.5.6.
D
The
previous
release
is
not
14.0,
it's
13.5
whatever
came
before,
so
that
part
is
going
to
be
a
little
tricky.
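One illustrative way to find "the release before this one" in version order rather than commit order, assuming tag names sort cleanly with sort -V:

```yaml
# Sketch: previous release by semantic-version order, not date.
previous-tag:
  script:
    - git fetch --tags
    # Prints the tag immediately before $CI_COMMIT_TAG in version order,
    # so tagging v13.5.6 after v14.0.0 still yields the previous 13.5 patch.
    - git tag --list | sort -V | grep -x -B1 "$CI_COMMIT_TAG" | head -n1
```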
D
Then
you
have
you
know
your
list
of
commits
for
release,
but
I
I
think
that
in
itself
is
not
particularly
useful,
that
is,
you
can
already
get
it.
You
can
get
the
text
figure
it
out
yourself
and
see
the
data,
and
I
think
the
bigger
problem
is
we
kind
of
know
what
we're
gonna
need,
which
is
grouping
things
by
featured
team
whatever,
because
we
already
do
that
people
want
it.
D
The
using
merch
requests
will
be
easier
for
developers
because
you
already
use
those
they
already
have.
Labels
milestones
everything
the
challenge.
There
is
mostly
on
us
because,
given
a
commit,
it's
sometimes
very
difficult,
if
not
impossible,
to
figure
out
what
merge
request
introduced
that
and
thus
what
labels
to
use
et
cetera
and
the
alternative
is
that
we
add
that
information
in
commits
so
there's
a
quarter.
Court
standard
called
conventional
commits
where
you
like,
prefix
your
commit
title
with
stuff,
and
then
you
can
use
that.
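For context, Conventional Commits titles look like this; the examples are made up:

```
feat(registry): add tag garbage collection
fix(pages): serve custom error pages over TLS
perf(gitaly): cache repository existence checks
```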
D
So
it's
going
to
require
a
certain
amount
of.
How
would
you
say
effort?
I
guess
that
I
have
doubts
if
people
are
willing
to
do
that.
D
So
it's
it's
basically,
the
more
we
want
to
build
this
into
gitlab.
I
think
the
more
we're
going
to
have
to
change
how
we
produce
the
data
necessary
for
this
and
thus
the
more
challenging
it's
going
to
be
and
the
more
reluctant
I
think
people
will
be
to
it.
B
Yeah, I was looking at our CI reference recently, because I was searching for something, and I found a lot of new features that I was not aware of. One of these is release generation. The documentation is a bit unclear, because at certain points it seems to say that you can generate a text file in the CI job and this will be the description of the release, or something like that. But regardless of how it works today, that seems to be the direction.
B
So
what
I'm
thinking
here
is
that,
if
we
are,
I
think,
if
I
understood
correctly,
we
are
providing
a
release,
clean,
docker
image,
which
I
think
is
written
in
go.
That
is
basically
our
reference
implementation
for
this.
So
what
about?
If
we
do
something
like
that,
and
we
extend
the
thing
to
generic
collect
the
the
commit
whatever
type
of
changelog
generation,
but
then
we
have
an
extended
version
based
on
a
custom
parameter
or
a
different
image
that
will
run
our
currently
changed
generation
process.
Do
we
still
do
the
things
with
files?
Basically
right.
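For context, the flow being described is the release keyword backed by the release-cli image; a minimal sketch, where the job layout and changelog path are illustrative, and whether the description can be read from a file may depend on the GitLab version:

```yaml
# Sketch: creating a release from CI with the release-cli image.
create-release:
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    - if: $CI_COMMIT_TAG
  script:
    - echo "Creating release for $CI_COMMIT_TAG"
  release:
    tag_name: $CI_COMMIT_TAG
    description: CHANGELOG.md   # illustrative: generated by an earlier job
```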
D
Yeah
so
like
we
have
this
go
to
layer
for
generating
releases.
We
could
do
something
similar
whenever
it
spits
out
a
change.
Look
in
a
certain
file
format
that
way
it's
not
really
built
in
gitlab.
It's
an
extra
project,
but
you
know
easy
enough.
B
No, no! That's the thing I was pointing at. The documentation says that you can set the tag, and if the tag does not exist, it will be created at the end. So that's the thing, right: you can say something like, you put a rule based on variables, say, if this release version is available as a variable, then run this job, and this job will tag. But the job would run before you tag, so you can do something there.
D
So
that
is
an
option,
but
that
basically
requires
that.
So
we
have,
if
you
have
those
marked
on
files,
and
we
want
to
attack,
to
include
the
necessary
changelog
entries
here-
comes
this
process
where
we
essentially
have
to
say
we
want
to
create
a
release.
Give
me
all
the
data
in
it,
save
it
then
create
the
release
and
somehow
commit
those
file
changes
back
into
the
repository.
D
But then you still have the problem: okay, what do you use as the data source? Is it commits? Is it merge requests? And if it's merge requests, is it merge requests that are merged into master, or into some other branch? Because master alone wouldn't work for us: we want merge requests that are deployed. In other words, a merge request being in master doesn't necessarily mean it should be in the changelog.
D
And
then
you
get
into
problems
like
okay?
How
are
we
going
to
deal
with
back
ports?
So
if
we,
the
latest
is
14.0,
but
we
tag
13,
5,
whatever
you'd
have
to
know
what?
Basically,
the
start
of
that
range
is
so
13.5,
whatever
patch
version
came
before
it,
so
it's.
B
So even if you think of something simpler, which is: stable is tagged on master and then you branch off if you want to do backports, it still works, because you are going to run the changelog generation on the branch, and on that branch, if you do a git describe, it will tell you the previous tag, say the .0. And if you are on master, even if you are doing a major bump or whatever, it will still find you the previous one.
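A sketch of the git describe approach being described, which behaves the same on a stable branch and on master; the job name is illustrative:

```yaml
# Sketch: let git report the previous tag reachable from this branch.
previous-tag:
  script:
    - git fetch --tags
    # Nearest tag before the current commit; on a 13-5-stable branch this
    # yields a v13.5.x tag, on master it yields whatever was tagged last.
    - git describe --tags --abbrev=0 HEAD^
```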
B
But
I
would
like
to
insist
a
bit
on
the
generation
of
the
changelog
in
ci,
because
I
think
that
we
so
we
have
the
release
page,
which
is
a
feature
in
gitlab.
So
my
this
isn't
just
my
take
on
the
problem
right
if
we
say
that
the
very
simple
implementation
is
that
we
provide
through
the
api,
the
changelog
within
that
page,
so
without
committing
it,
I'm
absolutely
fine
with
it,
and
this
can
be
kind
of
that's
the
feature
shipped
within
the
product
and
then
because
we
know
how
this
works.
B
...we could do something like: we want to attempt a release, so we create the changelog ahead of time with the current code, and that commit, with CI rules, can be the one that triggers the tagging.
D
Yeah. Since we're running out of time, and I also have a meeting after this: I would love to have whatever feedback and thoughts in the epic, because then I can take a look at them over more time.
D
I
think
it
all
basically
begins
and
comes
down
to
do.
We
want
to
keep
these
markdown
files,
because
if
we
do
then
the
releases
page
we're
not
really
dog
fooding
it,
as
in
essentially
mirroring
the
data
there,
but
we
aren't
really
using
it.
Our
developers
aren't
really
using
it
because
they're
all
still
working
in
that
markdown
workflow.
D
Because
if
we
can
get
rid
of
that,
I
think
then
we
can
basically
do
whatever
we
want.
Then
it
gets
a
lot
easier,
yeah.
So
thoughts
on
everything
welcome
in
the
epic
I
have
to
drop
off
because
I
have
another
meeting:
are
there
any
questions
or
anything
before
I
drop
off.
A
Well,
thanks
everyone,
we,
let's
so
dear
eyes,
let's
try
and
get
this
stuff
into
epics
and
everyone
else
can
kind
of
comment
on
that
main
action.
I've
written
down
is
if
everyone
could
just
also
take
a
look
at
the
proposal
to
move
to
master
instead
of
using
branches
for
auto
deploys.
We
can
see
if
we
can
progress
that
one
as
well,
and
I
will
put
another
of
these
in
for
next
week.
A
It
might
not
be
exactly
the
same
time
because
of
performance
reviews,
but
I'll,
try
and
find
a
time
again
so
fully
optional
for
everyone,
but
to
continue
discussing
how
we
move
forward
with
these
okrs
and
as
we
go
into
issues,
I'm
sure
we
will
still
need
brainstorming
time
to
tackle
specific
complexities
as
we
hit
them
awesome
good
to
see
you
all
enjoy
the
rest
of
your
tuesday.