Description
Discussing https://gitlab.com/gitlab-org/release-tools/-/merge_requests/853
A
So there is this... okay, so one of the discussions is about what we are deploying here, because we had some problems during the 12.5 release where we ended up deploying commits that never reached production. As one of the corrective actions, we changed the ChatOps command for creating the stable branch to pick exactly the latest commit in production as the source for the stable branch creation, and up to this moment in time we are still following this path.
A
What happens is that we tag RC42 right after creating the stable branch, so we create our RC42 and basically we always release this as the final monthly version, because so far it has never happened that we had to change something. It happens very close to the date of the release, so we just test it, and then we tag and publish. So in this merge request we are creating RCs automatically earlier in the process, right?
A
So there is a class here that is taking care of backporting: basically the stable branch on GitLab itself gets created earlier in the process, when we run the first RC, and then there is a process here that keeps merging the current HEAD of the latest auto-deploy branch into that stable branch so that they converge to the same point. My concern here is that this is exactly what caused the problem with 12.5, because we created the stable branch out of the HEAD of auto-deploy.
A
But that was not the thing that we deployed to production. So, long story short, I think we can keep this exactly how it is, but when we select the source branch for the update process, instead of checking the current auto-deploy branch we should consider looking at the latest commit deployed to production, so that we keep creating RCs based on what we deployed on GitLab.com.
A
So it's more in line with what we are doing and what we may eventually release. And if we have this, we know for sure that when we keep updating the stable branch, it's updated from a point in time where we knew that this is something that runs on .com, so we are more confident in the end result.
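As a concrete illustration of "looking at the latest commit deployed to production": a minimal sketch against the GitLab Deployments API. The instance URL, project ID, `GITLAB_TOKEN` environment variable, and the `gprd` environment name are all placeholders, and the exact filter parameters should be double-checked against the API documentation:

```python
import os
import requests

GITLAB_API = "https://gitlab.example.com/api/v4"  # assumption: instance URL
PROJECT_ID = 278964                               # assumption: project ID
TOKEN = os.environ["GITLAB_TOKEN"]                # assumption: API token in env

def latest_production_sha(environment="gprd"):
    """Return the SHA of the most recent successful deployment to production."""
    resp = requests.get(
        f"{GITLAB_API}/projects/{PROJECT_ID}/deployments",
        headers={"PRIVATE-TOKEN": TOKEN},
        params={
            "environment": environment,  # filter to the production environment
            "status": "success",         # only deployments that finished cleanly
            "order_by": "updated_at",
            "sort": "desc",
            "per_page": 1,
        },
    )
    resp.raise_for_status()
    deployments = resp.json()
    if not deployments:
        raise RuntimeError(f"no successful deployments found for {environment}")
    return deployments[0]["sha"]
```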
B
There were some technical issues there: when you have that SHA, there's the issue of not being able to merge a source branch up to a particular point. So if we want to merge those changes from the auto-deploy branch, you have to create a new branch based on the commit that was deployed, then merge that with a merge request, then remove it. A fairly minor issue, but it's a bit of an annoying step. But I think in that discussion, I don't entirely remember how, we sort of concluded that that wasn't necessary, that we could just merge auto-deploy.
B
But then, when I later thought about it, it was like: okay, wait, you can still have this issue. Maybe it's green or maybe it's red, but there might be commits we just haven't deployed yet, for whatever reason. With the whole discussion I kind of got a little confused, so maybe Marin has some more insight there.
C
The point here, and why we decided to go forward with this type of situation, where we can even get a situation where the branch is red, is that we still control when we tag the final self-managed release. We still have the final check in the process that says: well, check if the branches are green and then release, right, take the final one. So this is why we decided to kind of waive this complexity, due to the process that we have.
C
In the run-up to the release, that will then be completely undeployed, or sorry, not deployed, right? That is the problem that we want to consider. But for that one we can also work with the process a bit and expect human interaction, because I'm starting to come around a tiny bit to Yorick's, not side, but point of view, where we can only automate so much before
C
we say, you know, "the release managers have to do something". And this is, I think, the risk that we should reduce to zero by creating a couple of checks, safety checks, in the final days before the release, in the run-up to the tagging of the release. All right, so what we are right now saying is that we kind of do cut that stable branch
C
two days, or two working days, prior to the actual release, which means that we have two days to stabilize things, or rather control things manually if necessary. In those two days we actually don't expect release managers to do any work, so this can be one of those things that we should be checking. That's the general concept that we are currently thinking about, right?
B
I think one of the main issues there, that we couldn't really solve in the last discussion, is that, theoretically but unlikely, it's possible that we deploy a particular commit, we, you know, add new commits, and then for whatever reason that deployed commit has its pipeline retried and it fails. We come in to create a stable branch, try to find the last green commit, and we actually revert back to the commit before. And as far as I know, in Git there's not really an easy way of saying: hey...
B
So I think the only way you could do that is you basically start at the tip of the HEAD and walk back, and the moment you find that SHA that's already deployed, you basically break out of it, which is doable. And I think if we sort of ignore that, finding the last green commit and merging it isn't that big of a deal. But let me just quickly check if our API actually supports creating a branch from a commit, because if it doesn't, then we're basically screwed anyway. Let's see... create repository branch... yeah, it supports creating from a ref which, according to the documentation, is also a SHA. So theoretically we could do that.
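A rough sketch of what that could look like: walk back from the tip of the branch until we hit a green commit (or give up at the already-deployed SHA), then create the branch from that commit via the create-repository-branch endpoint, which, as checked above, accepts a SHA as `ref`. Instance URL, project ID, and branch names are placeholders:

```python
import os
import requests

GITLAB_API = "https://gitlab.example.com/api/v4"  # assumption: instance URL
PROJECT_ID = 278964                               # assumption: project ID
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

def create_branch(branch, ref):
    """POST /projects/:id/repository/branches; `ref` may be a branch name
    or, per the documentation, a commit SHA."""
    resp = requests.post(
        f"{GITLAB_API}/projects/{PROJECT_ID}/repository/branches",
        headers=HEADERS,
        params={"branch": branch, "ref": ref},
    )
    resp.raise_for_status()
    return resp.json()

def last_green_sha(branch, deployed_sha):
    """Walk back from the tip of `branch`; return the newest commit whose
    last pipeline succeeded, breaking out at the already-deployed SHA."""
    page = 1
    while True:
        commits = requests.get(
            f"{GITLAB_API}/projects/{PROJECT_ID}/repository/commits",
            headers=HEADERS,
            params={"ref_name": branch, "per_page": 50, "page": page},
        ).json()
        if not commits:
            return None  # walked the whole branch without finding anything
        for commit in commits:
            detail = requests.get(
                f"{GITLAB_API}/projects/{PROJECT_ID}/repository/commits/{commit['id']}",
                headers=HEADERS,
            ).json()
            if detail.get("status") == "success":
                return commit["id"]
            if commit["id"] == deployed_sha:
                # Reached the commit running in production: stop walking.
                return deployed_sha
        page += 1

# Usage sketch: branch the stable release off the last green commit.
# create_branch("12-6-stable-ee", last_green_sha("master", deployed_sha))
```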
A
And I want to stress this, because this happens very often: every time we run the cherry-pick job, it will find the latest green master and the Gitaly version, and we don't always promote this. Especially in the last two days, when we are close to the final release, this happens, and it happens a lot, so this may affect the final release.
A
Another thing, regarding the merging stuff: I'm quite sure that we have some ref, outside of the default refspec, that points to the environment. So instead of refs/heads/whatever we have something like refs/environments/gprd, and if you change the refspec you can fetch it. So maybe, but I'm not sure, we have to check this. We could use this.
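If such per-environment refs exist on the remote (e.g. `refs/environments/gprd`, as suggested above; this still needs to be verified), they sit outside the default fetch refspec and would have to be fetched explicitly. A minimal sketch:

```python
import subprocess

# Assumption: the remote keeps per-environment refs such as
# refs/environments/gprd pointing at the last deployed commit. These refs
# are outside the default fetch refspec, so request them explicitly.
def fetch_environment_sha(remote="origin", environment="gprd"):
    refspec = f"+refs/environments/{environment}:refs/environments/{environment}"
    subprocess.run(["git", "fetch", remote, refspec], check=True)
    # Resolve the freshly fetched ref to the deployed commit SHA.
    result = subprocess.run(
        ["git", "rev-parse", f"refs/environments/{environment}"],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()
```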
B
Let me see... that's a little more difficult. Yeah, I think if we use these environment refs it may be less of an issue, because then we never have to actually use the auto-deploy branch as the source. If you don't have those refs, then you basically need another way of dealing with it. I'd have to check how ChatOps handles this.
B
If we have those environment refs, I think this will be a lot easier, because then basically it's not an issue. Although, actually, hold on, I think it might still be an issue: when we run a deployment, you know, that ref will point to whatever was deployed, but merge requests can actually be picked into that branch after that, and if we then come in and try to merge those commits, you know, the tests might still be running, or they may have failed.
B
Okay, so let's say that on Monday we do a deploy, you know, a new auto-deploy branch, etc., everything's fine, it's green. On Wednesday this auto-tagging stuff starts running, and on Tuesday some new MRs have been picked into the auto-deploy branch, but there are some test failures, you know, Karma fails or E2E or whatever.
B
At that point, if we don't find the green commits, we would merge those red commits into stable. We wouldn't actually run a release, because the tests would fail, and this kind of comes down to a point... unless, you know, we can then deal with that by putting the fixes into auto-deploy, rerunning the auto-tagging, and then they end up in stable.
B
But you are in a bit of a situation. Well, let's say on that Wednesday we say: no, we now have to create a patch release or whatever. Right now we'd have to say: okay, we have this bunch of commits, we now have to revert those manually, I guess, that's probably the best way, wait for the tests to be green again and then release. So there are solutions, and I think that's originally why I was kind of like: yeah, you know, maybe we can just skip the green-commit thing. I'm...
A
But if for some reason we cannot create a merge request, we can either fix it, or for the first couple of releases create a branch out of that ref with the POST endpoint, which is the create-branch API: you can take a SHA and create a branch, which may be named production plus the date, and then we set this as the source. So we create the merge request, assign it, merge everything, and the branch disappears, right? Yeah.
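A hedged sketch of that workaround via the REST API: create a throwaway production-plus-date branch at the deployed SHA, open a merge request into the stable branch, and merge it with source-branch removal so the branch disappears afterwards. All names and the project ID are placeholders:

```python
import os
from datetime import date
import requests

GITLAB_API = "https://gitlab.example.com/api/v4"  # assumption: instance URL
PROJECT_ID = 278964                               # assumption: project ID
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

def merge_deployed_sha_into_stable(deployed_sha, stable_branch="12-6-stable-ee"):
    """Create a throwaway branch at the deployed SHA, merge it into the
    stable branch via a merge request, and let GitLab delete it afterwards."""
    base = f"{GITLAB_API}/projects/{PROJECT_ID}"
    temp_branch = f"production-{date.today().isoformat()}"  # e.g. production-2019-12-02

    # 1. Branch off the exact commit that is running in production.
    requests.post(f"{base}/repository/branches", headers=HEADERS,
                  params={"branch": temp_branch,
                          "ref": deployed_sha}).raise_for_status()

    # 2. Open a merge request from the temporary branch into stable.
    mr = requests.post(f"{base}/merge_requests", headers=HEADERS, json={
        "source_branch": temp_branch,
        "target_branch": stable_branch,
        "title": f"Sync {stable_branch} with production ({deployed_sha[:11]})",
        "remove_source_branch": True,  # branch disappears once merged
    })
    mr.raise_for_status()

    # 3. Accept the merge request; the temporary branch is then deleted.
    requests.put(f"{base}/merge_requests/{mr.json()['iid']}/merge",
                 headers=HEADERS,
                 json={"should_remove_source_branch": True}).raise_for_status()
```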
B
So yeah, that was my concern: I thought it was pointing to the HEAD of the auto-deploy branch, and that's why I was saying: hey, new commits can be introduced after that. But if it's the commit itself, it should be fine, assuming those refs are there. Let's say for a moment those refs don't exist, or whatever; then that workaround is basically an option.
B
Basically, if stable's red, it means auto-deploy's red, which means there are people working on it, and at the point when there's a fix, we need to just rerun this, or let it rerun automatically, and the fix will make its way into stable. I'm just curious if we'd basically be willing to say: okay, let's say we have a, not a critical situation, but a situation where we have to create a release from stable.
A
So it's guaranteed to be green, with what we are doing now, because we keep merging stuff. But consider that all the cherry-picks will, yeah, appear twice, basically: if we start creating this in week one, let's say, something that we cherry-pick will also be deployed in week two. I'm talking about weeks, even if we do it twice a week, just to give an example.
B
And I don't find those merge commits to be really the biggest issue, and I mean, if they really are, we can always put some code in place so that we commit straight to the target branch instead of using merge commits, but I don't think that's an issue. But okay, then I think what I'll do is I'll look at those environment refs and write this down in the issue first. Sorry, too many questions.
C
You can always compare, though, right? You can always compare and say, like: I'll try to see the SHA deployed in production, and which branch is it from? Let me query the environment to understand what is currently set there. If they're matching, do something; if they're not matching, yell, right? Or don't yell, like, I don't know. Yeah.
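That comparison could look something like this: query the environment for its last deployment, compare that SHA with the head of the candidate branch, and yell on mismatch. Again a sketch with placeholder names:

```python
import os
import requests

GITLAB_API = "https://gitlab.example.com/api/v4"  # assumption: instance URL
PROJECT_ID = 278964                               # assumption: project ID
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

def check_production_matches(branch, environment="gprd"):
    """Compare the SHA last deployed to production with the head of `branch`
    and complain loudly when they differ."""
    base = f"{GITLAB_API}/projects/{PROJECT_ID}"

    # Look the environment up by name, then read its last deployment.
    envs = requests.get(f"{base}/environments", headers=HEADERS,
                        params={"name": environment}).json()
    env = requests.get(f"{base}/environments/{envs[0]['id']}",
                       headers=HEADERS).json()
    deployed = env["last_deployment"]
    deployed_sha, deployed_ref = deployed["sha"], deployed["ref"]

    # Head of the branch we are about to use as a source.
    head = requests.get(f"{base}/repository/branches/{branch}",
                        headers=HEADERS).json()["commit"]["id"]

    if head == deployed_sha:
        print(f"{branch} matches production ({deployed_sha[:11]})")
    else:
        # "Yell": a release manager has to decide what to do next.
        print(f"WARNING: production runs {deployed_sha[:11]} from "
              f"{deployed_ref}, but {branch} is at {head[:11]}")
```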
B
This is all very hypothetical, but it kind of gets to a point where you can error out, you can say: hey, we found, you know, three auto-deploy branches. But then what should a release manager do, right? They might say: oh, you know, find the right one and delete the other ones, and the answer is pretty much: which ones? Yeah.
A
It can't happen. I mean, the only reason why that commit would be in more than one auto-deploy branch is because it's the source of one of those, and right after creating that branch we commit the Gitaly version, which is a new commit created on that branch. So in theory, mm-hmm, the tip is a Gitaly upgrade, so it's a commit which is specific to one branch, right?
B
Yeah, so it is a very rare case, and I was at that point where you can use an error and say: hey, we found these branches, you know, figure it out yourself. But it was a bit of a case where I felt like: okay, if this is sort of the way of handling it, is there a better way, so that we don't have to do this in the first place? Yeah, we can do that, definitely. I will see if those environment refs exist.
B
It's basically... I'm just trying to squeeze that into a format that works, but I'll figure that out.