From YouTube: Run through GitLab.com auto-deploy releases
Description
Run through the GitLab.com releases and auto-deploy pipelines
Release page: https://about.gitlab.com/handbook/engineering/releases/
Auto-deploy design document: https://about.gitlab.com/handbook/engineering/infrastructure/library/scheduled-daily-deployments/
Periscope dashboard (Private): https://app.periscopedata.com/app/gitlab/573702/WIP:-Delivery-team-PIs
Release pipelines epic: https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/113
Previously I created a similar video on how we create patch releases for the self-managed release, and I want to contrast that a bit with how this looks for GitLab.com, and how the two are connected, meaning how the GitLab.com releases are connected to the self-managed release that we ship for our on-premise customers. First, I want to run you through some of the projects that are used in this process: GitLab.com releases, and GitLab releases in general, are orchestrated through the release-tools project.

The branches we create on GitLab.com are defined based on the proposal made in early 2019, where we create auto-deploy branches from master at a certain cadence, and those branches are what runs on GitLab.com. Things get cherry-picked into those branches, fixes that are more important than others, and that automatically gets rolled out to our environments.

Then, on the next schedule, we get a new branch freshly created from master. We also have a number of environments that we deploy to: we deploy to staging and the canary environment automatically, and we also roll things out to production. We have a pre environment that is used to quickly verify fixes for self-managed releases and similar. All the details of the proposal as it was originally made can be seen on this page in the GitLab handbook.

So, first off, let's go to our release-tools orchestration project. In the scheduled pipelines we have three separate jobs that run on a certain schedule. I'll start with the first one, called auto-deploy prepare. The prepare job is nothing more than a job that runs on a schedule and creates auto-deploy branches. You can see that we post a message in Slack whenever we create a new auto-deploy branch, and we create them in these three projects. Currently, if I go to the pipeline...

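As a rough illustration of what such a prepare job does, here is a minimal Python sketch. It assumes the python-gitlab library, a GITLAB_TOKEN and SLACK_WEBHOOK_URL in the environment, and placeholder project paths and branch naming; the real release-tools implementation is written in Ruby and differs in detail.

```python
# Hypothetical sketch of an "auto_deploy:prepare" style job.
# Assumptions: python-gitlab is installed, GITLAB_TOKEN and SLACK_WEBHOOK_URL are
# set, and the project list and branch naming below are placeholders.
import os
from datetime import datetime, timezone

import gitlab
import requests

PROJECTS = [
    "gitlab-org/gitlab",          # GitLab Rails application
    "gitlab-org/omnibus-gitlab",  # Omnibus packaging
    "gitlab-org/gitaly",          # illustrative third project
]

def auto_deploy_branch_name(major: int, minor: int) -> str:
    # e.g. "12-9-auto-deploy-2020031912"; the exact format is illustrative.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H")
    return f"{major}-{minor}-auto-deploy-{stamp}"

def prepare(major: int, minor: int) -> None:
    gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])
    branch = auto_deploy_branch_name(major, minor)
    for path in PROJECTS:
        project = gl.projects.get(path)
        # Cut the auto-deploy branch from the current tip of master.
        project.branches.create({"branch": branch, "ref": "master"})
    # Announce the new branch so the team sees it in Slack.
    requests.post(os.environ["SLACK_WEBHOOK_URL"],
                  json={"text": f"Created auto-deploy branch {branch}"})

if __name__ == "__main__":
    prepare(12, 9)
```

The point is only the shape of the job: cut a branch from master in each project on a schedule, then announce it.
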
We also ensure that Gitaly, which is a sub-component of GitLab, gets the latest version into this branch. Gitaly is deployed from master in this setup, and apart from that we also update other version files in our branches, so other components also get the latest possible versions that we have. You might notice that there is a difference between them: why is GITALY_SERVER_VERSION a commit SHA, while the GitLab Elasticsearch indexer version is a tagged version? So one is pinned to a SHA and the other to a tag.

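To make the version-file update concrete, here is a minimal sketch, assuming python-gitlab and a GITLAB_TOKEN in the environment, and showing only GITALY_SERVER_VERSION being pinned to the latest master commit; the real job touches more files and handles errors.

```python
# Hypothetical sketch of updating a component version file on the auto-deploy branch.
# Assumptions: python-gitlab, GITLAB_TOKEN set; only GITALY_SERVER_VERSION is shown,
# pinned to the latest Gitaly master commit because Gitaly is deployed from master.
import os

import gitlab

def update_gitaly_version(branch: str) -> None:
    gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])
    rails = gl.projects.get("gitlab-org/gitlab")
    gitaly = gl.projects.get("gitlab-org/gitaly")

    # Latest commit on Gitaly master becomes the pinned version (a SHA, not a tag).
    gitaly_sha = gitaly.commits.list(ref_name="master", per_page=1)[0].id

    version_file = rails.files.get(file_path="GITALY_SERVER_VERSION", ref=branch)
    version_file.content = gitaly_sha + "\n"
    version_file.save(branch=branch,
                      commit_message=f"Update GITALY_SERVER_VERSION to {gitaly_sha}")

update_gitaly_version("12-9-auto-deploy-2020031912")
```
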
We are slowly rolling out changes to other components, where we are going to ensure that every component can be deployed from the latest version that has a passing build, thus making all of the changes on GitLab.com run from the latest commits. Once everything is created and updated, we push all the branches to the various remotes and, obviously, post a reminder in the channel that we have created the new branches. I'm going to run you through what happens when the branches get created and the lifecycle of the branch.

Every time we create a new auto-deploy branch, it gets created from a SHA on master. This means that as soon as the auto-deploy branch is created, it is going to be a slower-moving branch than master: master is going to continue getting changes, but auto-deploy is going to start lagging behind. To ensure that we still get any critical fixes in as we deploy to the environments, we also have an auto-deploy pick job that takes specific merge requests and tries applying them to the auto-deploy branch.

So if you take a look at this job, it does a couple of things. It selects the auto-deploy branch that we currently have active; this is set as a CI variable and it changes every time we create a new branch. We can see here that we have cherry-picked something in the GitLab project and that we are targeting this branch; this is the specific merge request. So let's take a look.

The way a change gets rushed ahead for deployment is by a label called "Pick into auto deploy". In cases where the "Pick into auto deploy" label is not accompanied by a P1 or P2 label, the bot is going to leave a comment saying that it is unable to pick the merge request. This way we ensure that only the highest-priority items get picked into this slower-moving auto-deploy branch.

In this specific case we have a situation where the P1 label is applied, so the bot leaves a specific comment and removes the "Pick into auto deploy" label, thus preventing us from trying to cherry-pick the change one more time. You can also see here that a system note was left; this shows that the cherry-pick through the API has succeeded, and now this commit is in our auto-deploy branch.

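A minimal sketch of that picking logic, assuming python-gitlab, a GITLAB_TOKEN and an AUTO_DEPLOY_BRANCH variable in the environment, and the label names used above; the real bot also handles cherry-pick conflicts, ordering, and notifications that are left out here.

```python
# Hypothetical sketch of an "auto_deploy:pick" style job.
# Assumptions: python-gitlab, GITLAB_TOKEN and AUTO_DEPLOY_BRANCH set, and the
# label names below; conflict handling and notifications are omitted.
import os

import gitlab

PICK_LABEL = "Pick into auto deploy"
PRIORITY_LABELS = {"P1", "P2"}

def pick_into_auto_deploy(project_path: str, branch: str) -> None:
    gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])
    project = gl.projects.get(project_path)

    # Consider merged MRs that carry the pick label.
    for mr in project.mergerequests.list(state="merged", labels=[PICK_LABEL], iterator=True):
        if not PRIORITY_LABELS.intersection(mr.labels):
            # Not urgent enough: explain why the bot refuses to pick it.
            mr.notes.create({"body": "Unable to pick: a P1 or P2 label is required."})
            continue

        # Cherry-pick the merge commit onto the active auto-deploy branch.
        commit = project.commits.get(mr.merge_commit_sha)
        commit.cherry_pick(branch=branch)

        # Drop the pick label so the same MR is not picked a second time.
        mr.labels = [label for label in mr.labels if label != PICK_LABEL]
        mr.save()
        mr.notes.create({"body": f"Picked into {branch}."})

pick_into_auto_deploy("gitlab-org/gitlab", os.environ["AUTO_DEPLOY_BRANCH"])
```
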
This task will always select the running auto-deploy branch, as we have it here, but it will always look for a passing commit. This means that even if we pick something into the auto-deploy branch but it has a failed spec, we won't be selecting that one for deployment. You can see that there are other new things that came in after this one, and all of them have passing builds, which means that either this was a flaky spec or something got fixed in the meantime by one of these updates. We will always be selecting just the green build, just a successful pipeline. This is really important, because if this one were red we would not be deploying anything, there would be no new commits found, and this way we ensure that all our specs count and can be depended on for this automated system.

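Here is a minimal sketch of that selection, assuming python-gitlab with GITLAB_TOKEN and AUTO_DEPLOY_BRANCH in the environment: walk the branch newest-first and take the first commit whose pipeline succeeded.

```python
# Hypothetical sketch of selecting the newest commit on the auto-deploy branch whose
# pipeline passed. Assumptions: python-gitlab, GITLAB_TOKEN and AUTO_DEPLOY_BRANCH set.
import os

import gitlab

def latest_passing_commit(project_path: str, branch: str):
    gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])
    project = gl.projects.get(project_path)

    # Walk the branch newest-first; the first green pipeline wins.
    for commit in project.commits.list(ref_name=branch, iterator=True):
        pipelines = project.pipelines.list(sha=commit.id, per_page=1)
        if pipelines and pipelines[0].status == "success":
            return commit
    return None  # Nothing deployable yet; the tag job would simply take no action.

commit = latest_passing_commit("gitlab-org/gitlab", os.environ["AUTO_DEPLOY_BRANCH"])
print(commit.id if commit else "no passing commit found")
```
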
When we find this passing build, in this case for the commit starting with c79, this job will do a couple of things. It will update the VERSION file in the Omnibus GitLab project, it will then ensure that other versions are also updated to the versions that are currently set in this auto-deploy branch, and it will ensure that we either tag the changes or don't tag the changes. So in this case we have not found any changes; this means that we already created a specific tag, so we will not be taking another action.

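A rough sketch of that tag-or-skip decision, assuming python-gitlab, environment variables GITLAB_TOKEN, AUTO_DEPLOY_BRANCH and GITLAB_PASSING_SHA, and an illustrative tag name; the real job updates several version files and computes the tag differently.

```python
# Hypothetical sketch of the tag-or-skip decision in an "auto_deploy:tag" style job.
# Assumptions: python-gitlab; GITLAB_TOKEN, AUTO_DEPLOY_BRANCH and GITLAB_PASSING_SHA
# set; the tag name is illustrative and the real job updates more version files.
import os
from datetime import datetime, timezone

import gitlab

def tag_if_changed(branch: str, gitlab_sha: str) -> None:
    gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])
    omnibus = gl.projects.get("gitlab-org/omnibus-gitlab")

    version_file = omnibus.files.get(file_path="VERSION", ref=branch)
    current = version_file.decode().decode("utf-8").strip()  # decode() returns bytes
    if current == gitlab_sha:
        print("No new changes since the last tag; taking no action.")
        return

    # Point VERSION at the passing GitLab commit, then tag so a package gets built.
    version_file.content = gitlab_sha + "\n"
    version_file.save(branch=branch, commit_message=f"Update VERSION to {gitlab_sha}")
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M")
    omnibus.tags.create({"tag_name": f"12.9.{stamp}+{gitlab_sha[:11]}", "ref": branch})

tag_if_changed(os.environ["AUTO_DEPLOY_BRANCH"], os.environ["GITLAB_PASSING_SHA"])
```
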
If we had found builds that still needed to be tagged, we would see output here where a couple of additional things happen: on top of the commit that updates the version files, we would see a new tag created in the Omnibus project, which would then trigger a couple of other things, such as deployments and similar.

This brings me to what I mentioned about the Omnibus project, where the packages are built. This is the tag that got created specifically out of the SHA that we saw earlier. The tag version that we create here also has a special syntax: the first three items you can already recognize from the auto-deploy branch naming, and the items after the plus show that this part, this short SHA, is from the GitLab Rails project, and this part is from the Omnibus project.

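As a small illustration of that naming, here is a sketch that composes such a tag from the branch name and the two short SHAs; the exact format release-tools uses may differ.

```python
# Illustrative sketch of composing an auto-deploy tag from the branch name and the
# two short SHAs; the exact format release-tools uses may differ.
def auto_deploy_tag(branch: str, gitlab_sha: str, omnibus_sha: str) -> str:
    # Branch names look like "12-9-auto-deploy-2020031912"; reuse its version parts.
    major, minor, _, _, stamp = branch.split("-")
    return f"{major}.{minor}.{stamp}+{gitlab_sha[:11]}.{omnibus_sha[:11]}"

print(auto_deploy_tag("12-9-auto-deploy-2020031912",
                      "c79abcdef0123456789a",
                      "9f8e7d6c5b4a3210fedc"))
# -> 12.9.2020031912+c79abcdef01.9f8e7d6c5b4
```
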
What else is important to note here is that this same tag is also created in our deployer project, and if any of these SHAs changes, the tag job that you saw earlier will tag a new version on top. This means that we will have as many new packages as necessary to ensure that the changes are propagated through the system. If you compare the two videos, this one and the one I recorded on creating self-managed patch releases, you will see that this pipeline in Omnibus is very short.

This is because we only use Ubuntu 16.04 on our infrastructure, and that allows us to build only this one specific package, because we are only deploying it to GitLab.com environments. Without going into the details that I already showed elsewhere: this job creates a QA package, or QA Docker image, that we'll use later on; this one creates the Omnibus package; and finally, this job uploads it.

The gstg one is related to the staging deployment, so from this job all the way up to this stage we have staging-related tasks. From this stage to this stage we have canary-related tasks, the second environment, and then finally we have production-related tasks. So let's go through them.

Prepare and warm-up are a set of tasks that are used to prepare the environments and download the packages on all nodes on which we run GitLab. The second part ensures that the deployment can run online and that the assets are fetched and uploaded to our object storage, from which we serve assets. Migrations fire off prior to that: these are the database migrations, and they fire off prior to any other action. If everything passes, we move on to rolling the change out to the full set of nodes.

We roll these changes out in a specific order. You can see that there is a lot of output here: we take the nodes out of HAProxy rotation, upgrade the packages there, and then put them back in rotation, with a certain concurrency that does not jeopardize the stability of the platform. The rest happens in parallel. We also use concurrency for any of these jobs, which means that at any time we'll take, say, two nodes out of rotation for API, Sidekiq, web, Pages and so on, bring them back, and roll the changes out that way. If all of these complete, we go to one of the final steps here in the deployment process, and that is running the post-deployment migrations. These are migrations that can be longer-running, but they can also happen while the application is running and fully online.

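A minimal sketch of that rolling pattern, with a hypothetical deploy-tool command and made-up node lists, just to show the drain, upgrade, re-enable cycle with at most two nodes of a tier out of rotation at once.

```python
# Hypothetical sketch of a rolling upgrade with bounded concurrency: drain a node
# from the load balancer, upgrade it, re-enable it, never touching more than two
# nodes of a tier at once. The deploy-tool command and node lists are made up.
import subprocess
from concurrent.futures import ThreadPoolExecutor

NODES_PER_TIER = {
    "api":     ["api-01", "api-02", "api-03", "api-04"],
    "web":     ["web-01", "web-02", "web-03"],
    "sidekiq": ["sidekiq-01", "sidekiq-02"],
}

def upgrade_node(node: str, package: str) -> None:
    # Take the node out of HAProxy rotation, upgrade it, then put it back.
    subprocess.run(["deploy-tool", "drain", node], check=True)
    try:
        subprocess.run(["deploy-tool", "upgrade", node, "--package", package], check=True)
    finally:
        subprocess.run(["deploy-tool", "enable", node], check=True)

def rolling_upgrade(package: str, concurrency: int = 2) -> None:
    for tier, nodes in NODES_PER_TIER.items():
        print(f"Upgrading {tier}: {len(nodes)} nodes, {concurrency} at a time")
        # At most `concurrency` nodes of this tier are out of rotation at any moment.
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            list(pool.map(lambda node: upgrade_node(node, package), nodes))

rolling_upgrade("gitlab-ee_12.9.2020031912.deb")
```
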
These usually do some cleanup, data migration tasks between different tables and similar. Then we also have a part that is related to cleanup. We currently have this track-deployment task; the track-deployment task is going to commit the changes into these JSON files. These JSON files show who created which deployment. For production, and specifically for staging, it should all be automated, unless there was a test like the one we executed here.

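A small sketch of what such a tracking step could look like, with an illustrative file layout; as noted below, this mechanism is being replaced by deployment tracking inside GitLab itself.

```python
# Hypothetical sketch of a "track deployment" step: append a record of who deployed
# which version to a per-environment JSON file. The file layout is illustrative;
# a real job would also commit and push the file.
import json
from datetime import datetime, timezone
from pathlib import Path

def track_deployment(environment: str, version: str, deployer: str = "deployer") -> None:
    path = Path(f"deployments/{environment}.json")
    records = json.loads(path.read_text()) if path.exists() else []
    records.append({
        "version": version,
        "created_by": deployer,  # normally the automated bot, rarely a human
        "created_at": datetime.now(timezone.utc).isoformat(),
    })
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(records, indent=2) + "\n")

track_deployment("gstg", "12.9.2020031912+c79abcdef01.9f8e7d6c5b4")
```
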
So if we go all the way down to the end of the file, you will see that only "deployer", which is the name of the automated bot used to deploy, is creating these versions. We are slowly moving away from this with the introduction of deployment tracking inside GitLab itself, so this file is going to go away, together with this track-deployments task, probably in the next couple of days. All right. The other important thing to note here is that we run the GitLab QA integration tests.

With this pipeline we trigger a job in another location, and this specific job is going to run a specific set of integration tests, ensuring that the basic functionality of GitLab is always working. This specific job is a blocking job, which means that in case we have a failure in this job, the deployment will halt.

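A minimal sketch of such a blocking gate, assuming python-gitlab, a GITLAB_TOKEN and QA_TRIGGER_TOKEN in the environment, and an illustrative downstream project path and variable name: trigger the QA pipeline and refuse to continue unless it succeeds.

```python
# Hypothetical sketch of a blocking QA gate: trigger a downstream QA pipeline and
# poll until it finishes, halting the deployment if it does not succeed.
# Assumptions: python-gitlab; GITLAB_TOKEN and QA_TRIGGER_TOKEN set; the downstream
# project path and variable name are illustrative.
import os
import sys
import time

import gitlab

def run_blocking_qa(qa_project_path: str, ref: str, package_version: str) -> None:
    gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])
    qa_project = gl.projects.get(qa_project_path)

    # Kick off the QA pipeline against the package we just deployed.
    pipeline = qa_project.trigger_pipeline(
        ref, os.environ["QA_TRIGGER_TOKEN"],
        variables={"PACKAGE_VERSION": package_version},
    )

    # Wait for a terminal state.
    while pipeline.status in ("created", "pending", "running"):
        time.sleep(60)
        pipeline.refresh()

    if pipeline.status != "success":
        # Blocking behaviour: a QA failure stops the deployment pipeline here.
        sys.exit(f"QA pipeline {pipeline.web_url} finished with status {pipeline.status}")

run_blocking_qa("gitlab-org/gitlab-qa", "master", "12.9.2020031912+c79abcdef01.9f8e7d6c5b4")
```
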
This is one of the last gates we have prior to going to one of the production environments, and if the job fails, we need to work with the Quality team to ensure that it passes before continuing automatically to the rest of the environments. We have other jobs as well; however, this one specifically, which runs the full QA suite, is non-blocking, meaning it runs, but if there is a failure we are only going to alert in one of the Slack channels we have so the Quality team can look into it. The Quality team is also working on ensuring that this job is rock-solid, so that we can make this specific task a blocking task as well. This would allow us to rely fully on our full suite of integration tests. And then, finally, we have some cleanup that we are doing, related to tracking, cleaning caches, and so on.

What's important to note here is that if this set of jobs passes, we automatically propagate things to canary. Canary has the same set of jobs as staging, and it rolls out in a similar fashion to staging as well. The only difference here is that the pipeline automatically stops at the production promotion; we don't automatically promote to production yet. This is because we want to give some time for developers, and everyone using canary, to observe issues. If they do, then we…

So far, we get notified when the deployment job starts, the duration of the job, and when it finishes as well. Then, after a while, you can see that there is a difference in times between the two jobs: this job starts after the QA completes, and then it takes another hour or so to roll this change out on canary. During this time the test results are reported in one of the QA channels we have in Slack.

All right, let me move my Zoom controls a bit. Why is this important? Well, this is important because the only gates we currently have before running in production are our set of tests, our set of QA tasks, and then a manual action here. We want to remove the need for any manual action, even for the production promotion. Currently, the only task that we take before clicking this button, which will roll out the changes to production, is…

We currently have a proposal to increase the number of times the auto-deploy branch gets created. Currently the cadence is once a week, but with the number of changes that we have going on, we usually find changes or bugs quickly, in the first couple of hours, maybe up to a day, and those get fixed within a day, and then we kind of have a slower tail of changes going into production.

We want to ensure that we have smaller batches of changes, so we have a proposal to start creating the auto-deploy branch twice a week. This is an orchestrated effort, because Quality is also working on stabilizing the GitLab QA tests, and that will allow us to move from twice a week to once a day, hopefully very soon.

There are a couple of other things that you might be wondering about, for example why we are jumping between different instances. I don't know if you observed it, but we have three different instances of GitLab on which we run different tasks, and we have multiple projects. We have this connection between release-tools, which orchestrates the pipeline, and then it sequentially goes through the rest of the projects.

This is the state as it was found when we started working on this project, so in order to iterate quicker we just built on top of things, but we are slowly moving towards having release-tools be the main project that orchestrates the whole thing. There are proposals for how we can ensure this happens, and it is all part of a bigger change that will happen over time, which is shown in this specific epic.

Finally, we are also doing some tracking of the time it takes for any change to go to production, and to any of the other environments, as delivery team performance indicators. We have a very ambitious goal of having production changes move from the moment they were merged to production within eight hours. As you can see, we are not there yet: the mean is around a hundred hours. It fluctuates depending on the number of changes that go in, obviously. However, the reason this is currently around a hundred hours comes down to how we cut the branches from master.

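For reference, the indicator itself is just a mean over merge-to-deploy durations; here is a tiny sketch with made-up timestamps (the real numbers live in the Periscope dashboard linked above).

```python
# Illustrative sketch of the lead-time indicator: mean hours from merge to deploy.
# The timestamps here are made up; the real values come from the Periscope dashboard.
from datetime import datetime
from statistics import mean

def lead_time_hours(merged_at: str, deployed_at: str) -> float:
    fmt = "%Y-%m-%dT%H:%M:%S"
    delta = datetime.strptime(deployed_at, fmt) - datetime.strptime(merged_at, fmt)
    return delta.total_seconds() / 3600

samples = [
    ("2020-03-16T10:00:00", "2020-03-20T14:00:00"),
    ("2020-03-17T09:00:00", "2020-03-21T16:00:00"),
    ("2020-03-18T08:00:00", "2020-03-22T10:00:00"),
]
print(f"mean lead time: {mean(lead_time_hours(m, d) for m, d in samples):.1f} hours")
```
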
We create an auto-deploy branch once a week, and that evens out any urgent fixes, which can get deployed in something like three to four hours. By creating branches more often we'll be able to drop this number. Here you can see the numbers for any of the environments; for example, the mean for staging is some 63 hours, because changes get automatically deployed there.

Obviously the target is still the same eight hours, but, more importantly, I think where you can really see the progress is the Gitaly project, which got included into this auto-deploy. You can see that, on average, any merge request for the Gitaly project that gets merged into master gets deployed within 28 hours. Staging is even better: staging gets the deployments…

I can barely read it, even if I zoom in here: the mean is some nearly six hours in total, which is below our target, and this is exactly where we want to be with the rest of the deployments. Another interesting dashboard to show here is the batch size. You can see, for any of the deployments we had, the size, the number of merge requests it contained. This will allow us to correlate any deployment with its effects: for example, if we have 26 changes, how does that affect our environment? The smaller the batch size, the easier that correlation becomes.

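The batch-size metric is just a count of merge requests per deployment; here is a tiny sketch with made-up data, only to show the shape of the calculation.

```python
# Illustrative sketch of the batch-size metric: how many merge requests shipped in
# each deployment. The data is made up.
from collections import Counter

# (merge request id, deployment that first contained it)
mr_to_deployment = [
    (101, "12.9.2020031608"), (102, "12.9.2020031608"), (103, "12.9.2020031608"),
    (104, "12.9.2020031912"), (105, "12.9.2020031912"),
]

batch_sizes = Counter(deployment for _, deployment in mr_to_deployment)
for deployment, size in sorted(batch_sizes.items()):
    print(f"{deployment}: {size} merge requests")
```
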
I think that is all the interesting information that I have to share here. I'll share these resources in the recording comments, and I'm also going to use this video as an opportunity to ask others for any questions they have. Thank you so much for following, and I hope this video was useful.
