From YouTube: Webinar - CI/CD, How GitLab Does CI/CD
Description
During this webinar, we show the possibilities for pipeline architectures, the impact of merge trains and compliance frameworks, and how this comes together at GitLab. We are proud users of GitLab CI and our goal is to show you how we use our own product to accelerate innovation.
A: Environments — the CI system can take care of all that and leverage its ability to integrate with tools like Docker and Kubernetes to take full advantage of the power of the cloud, without ordering a laptop. Best of all, all of this work is automated, which frees up the developer to move on to other tasks while the build and test complete.
A: Continuous delivery, or CD, is the next step. It is an evolution of continuous integration and focuses on addressing the next two phases of our software development lifecycle pipeline: review and staging. We have our automated build and it passes all our tests — that's great, but why stop there? With continuous delivery, we have the capability to take that build and deploy it out to an environment like staging.
A: This ensures that the staging environment always reflects the latest changes to the branch, and that it is available as soon as possible for the broader team of stakeholders to review and participate in. Integration tests, performance tests, UAT testing — anything else that needs to occur can be done here. Once that is completed, we can then click a single button and that same build goes on to be automatically deployed to production.
A: This results in a few key benefits. First, we have a build that is always ready to be deployed: it has passed all our tests and has been running in staging. A simple button click is all that is needed to elevate it to production. And best of all, the process for doing this is reliable — we have been using it for all the deploys so far, and it's even checked into the repository.
A: The key difference here is that continuous deployment removes the manual checkpoint before pushing builds to production. This model is most relevant when you have been operating with continuous delivery and have built the confidence and maturity in your operations to allow changes to flow directly to customers.
A: That is a beautiful and powerful thing, and it also reduces the amount of time the product improvements your developers have built sit on a shelf waiting to be deployed. The cycle of development to testing to deploy is highly efficient, freeing up your team to focus on creating value and not on repetitive tasks.
A: We have a couple of tests, but even though they are running in parallel on separate runners, they are still blocking the deploy job. A common refrain about this type of pipeline is that it is slow, and this is a good opportunity to investigate the pipeline architecture and dive into why it is slow.
A: Those pipelines can be made more flexible via the allow_failure and manual options, and this is the most common and easiest type of pipeline to write. In this example, even if test B fails the pipeline proceeds, but we need someone to click the play button to deploy the application. Then we have DAGs, or directed acyclic graphs, and those can help solve several different kinds of relationships between jobs within a CI/CD pipeline.
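As a rough illustration of the allow_failure and manual options just described (the job names and scripts below are placeholders, not GitLab's actual configuration), such a pipeline might look like this:

```yaml
stages: [test, deploy]

test_a:
  stage: test
  script: ./run_test_a.sh        # placeholder test command

test_b:
  stage: test
  script: ./run_test_b.sh        # placeholder test command
  allow_failure: true            # pipeline keeps going even if this job fails

deploy:
  stage: deploy
  script: ./deploy.sh            # placeholder deploy command
  when: manual                   # someone has to click "play" to deploy
```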
A: Additionally, DAGs can help with the general speed of pipelines and with delivering fast feedback, by creating dependency relationships that don't unnecessarily block each other. Your pipelines run as quickly as possible, regardless of pipeline stages, so output — including errors — surfaces sooner. Because these pipelines might get a bit complex, the needs visualization makes it easier to see the relationships between interdependent jobs in a DAG. This graph displays all the jobs in a pipeline that need, or are needed by, other jobs; jobs with no relationships are not displayed in this view.
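For a concrete sense of how the needs keyword builds such a DAG, here is a minimal sketch (job names and commands are hypothetical): each job starts as soon as the jobs it needs have finished, regardless of stage ordering.

```yaml
stages: [build, test, deploy]

build-frontend:
  stage: build
  script: yarn build               # hypothetical build command

build-backend:
  stage: build
  script: make backend             # hypothetical build command

test-frontend:
  stage: test
  needs: [build-frontend]          # starts as soon as build-frontend is done
  script: yarn test

deploy-frontend:
  stage: deploy
  needs: [test-frontend]           # does not wait for any backend job
  script: ./deploy_frontend.sh
```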
A: Then we also have parent-child pipelines. They are very similar to multi-project pipelines, where a pipeline can trigger a set of concurrently running child pipelines, but within the same project. Those are useful for running non-dependent, long-running jobs like code scans, or for building and deploying front-end and back-end services separately.
A: Here we see the same pipeline that had the scan and SAST test stages, moved into child pipelines, and with this we don't have to wait for those jobs to complete before we get our review app. In this case, if you have a DAST job which requires the review app, that will still be a separate stage, and we will still need the review app to complete before we can perform the following steps.
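A hedged sketch of the parent-child setup being described, assuming the long-running scan jobs live in a separate file in the same repository (file names and scripts are illustrative):

```yaml
# Parent .gitlab-ci.yml
scans:
  trigger:
    include: .gitlab/ci/scans.gitlab-ci.yml   # child pipeline in the same project
  # Without strategy: depend, this job succeeds as soon as the child pipeline
  # is created, so later stages don't wait for the scans to finish.

review-app:
  stage: deploy
  script: ./deploy_review_app.sh              # hypothetical review-app deploy
```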
B: So now that we have covered the basics of GitLab CI/CD — the concepts and the tooling — let's talk to our engineers and see how we at GitLab use those features. We have with us today, for the CI part, Remy from the Engineering Productivity team, and he will try to shed some light on our processes around CI.
B: So I know that when creating CI config files, the struggle is usually to find a good balance between being secure and being flexible and efficient. Can you tell us how we do this in GitLab?
C: Good question. So, first of all, we try to dogfood as much as possible what GitLab is offering: every time there's a new feature on the CI side, we try to dogfood it as much as possible, and that way we can also, you know, improve the feature over time.
C: I would say, for the efficiency part, the things that are the most powerful — in my opinion, at least for us — are the rules keyword and the needs keyword.
C: If you want, I will go through the first one, the rules keyword — I'll share my screen now. The rules keyword allows you to define specific conditions under which a job runs or not, and for us it allows us to define four types of pipelines; before we had this keyword, that basically wasn't possible. These four types for us are: the documentation one, which basically runs documentation-related jobs.
C: Then — and it takes around six minutes to finish — so it's a faster pipeline type that we have for the project.
C: This one takes 67 minutes on average, and then the longest pipeline type that we have is the QA, or end-to-end test, pipeline type. It does roughly the same as the previous one but, in addition, creates a whole GitLab package and runs end-to-end tests against it. So these are examples of what you can do with the rules keyword: you can really define specific types of pipelines, with a set of jobs that you need only for specific cases.
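To make the "pipeline types" idea concrete, here is a hedged sketch of rules along those lines — the paths, label name, and scripts are invented for illustration and are not GitLab's actual rules file:

```yaml
docs-lint:
  script: ./scripts/lint-docs.sh                 # hypothetical script
  rules:
    # only run in merge request pipelines that touch documentation
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      changes:
        - "doc/**/*"

qa:package-and-e2e:
  script: ./scripts/package-and-test.sh          # hypothetical script
  rules:
    # run automatically when a (hypothetical) label is set on the MR...
    - if: '$CI_MERGE_REQUEST_LABELS =~ /run-e2e/'
    # ...otherwise keep it available as a manual, non-blocking job
    - when: manual
      allow_failure: true
```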
C: So for us that works really well, and it really got us to the next level of pipeline atomicity, I would say, compared to before, where we would have only one pipeline, basically, for everything.
C: You can ignore the stage in which your job is and only rely on the needs requirements, so that allows you to start a chain of jobs independently from the stages that you have defined in your pipelines. That's pretty useful and pretty powerful as well. One thing to note is that you can also depend on jobs without downloading the artifacts of those jobs; in some cases that's useful — maybe you want to wait for a job to finish, but you don't want to download its artifacts. That's a further optimization that you can do.
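A minimal sketch of depending on a job without downloading its artifacts (job names and commands are placeholders):

```yaml
compile:
  stage: build
  script: make                      # placeholder build
  artifacts:
    paths: [dist/]

notify:
  stage: test
  needs:
    - job: compile
      artifacts: false              # wait for compile, but skip downloading dist/
  script: ./notify.sh               # placeholder command that only needs compile to have finished
```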
C: One challenge that we face with regard to the security and stability of our pipelines is the testing of the CI configuration itself.
C: At the moment we don't have a good way to test our CI config, so we mostly rely on our own knowledge of the config and on creating pipelines for the cases that we want to test. But ideally — and I think that's on the roadmap for the Pipeline Authoring group — we would have a way to simulate pipelines in different contexts.
C: So, for instance, you could say: let's simulate the pipeline for a merge request with this set of changes, and you could see what jobs would be created and what the resulting pipeline would be, basically. That's something we are kind of waiting for, and I think we will be the first internal users of that feature.
C: Then we have other rules that are based on variables — any variables that we could set on the project, or even in a specific merge request, or whatever. So that's the variables rule, and then we also have the project and server rules: in some cases we don't want to run a job on a specific project, or we only want to run a job on a specific server or GitLab instance, because we mirror our projects on several instances.
C: So, for instance, we have the DB patterns, which are used for running specific DB jobs, or we also have CI patterns, for which we will actually run specific CI-related jobs. And yes, these are kind of the four types of rules that we use, and they are all defined — if you want to take a look, if you are curious — in this file in the main project.
B: Right, thanks for that. Now, we know that, as a general rule, stages run one after another and jobs run in parallel, but we also have the concept of parallel jobs — and by parallel jobs we mean duplicated jobs, where we can set a certain job to be duplicated and those duplicates also run in parallel. So how do we use that feature, and why does it make things more efficient for us?
C: So what it does, basically, is duplicate the job — if I can say that — onto different nodes, and it sets two variables: CI_NODE_INDEX and CI_NODE_TOTAL. CI_NODE_INDEX is the index of the node that is running.
C: So if you say "let's run on five parallel nodes", the indexes will be one, two, three, four, five, and CI_NODE_TOTAL is the same for all of the jobs — it will be five in that case. That's mostly it for the parallel feature, in the sense that, for the rest, you have to handle it yourself: it's up to you to define how the parallelization works given these two variables. One tool that we use for that with our RSpec tests is Knapsack.
C: And yeah, actually, we have a small piece of documentation on how we use it internally, and I won't go into details, but basically what Knapsack does is take the list of all your test files, divide it by the parallelization that you want, and then spread the test files across each job that is parallelized. Without this feature our pipelines would be a few hours long.
C: With it, it's less than an hour. So yeah, we use this parallelization feature for our tests, as well as to build front-end fixtures: we have front-end tests that need HTML fixtures, so we use the parallel keyword for that as well, because we identified that the job was too long, and using the parallelization basically makes it about twice as fast in this case.
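As a hedged sketch of what this parallelization can look like in CI config (the Knapsack invocation and report path are illustrative, not GitLab's exact job definition):

```yaml
rspec:
  parallel: 5                       # five copies of this job run concurrently;
                                    # GitLab sets CI_NODE_INDEX (1..5) and CI_NODE_TOTAL (5) in each copy
  variables:
    KNAPSACK_REPORT_PATH: knapsack/rspec_report.json   # illustrative report location
  script:
    - bundle exec rake knapsack:rspec   # Knapsack reads the node index/total to split the spec files
```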
B: We're talking about efficiency — what about our cache? How do we build our cache, and what is the caching strategy we use in our CI pipelines?
C: So again, we have documented internally how we use the cache for our main project. Basically, we ended up with a strategy where jobs, by default, should only pull the cache; they should never update it, because uploading the cache could add a few minutes in some cases.
C: The second rule is that a job should be able to run without any cache, so the cache shouldn't be something that's critical for a job — critical in terms of the job finishing, not the job being fast, I must say. Then we have a set of caches that you can see here, which are actually defined in this file, and the caches are specific to different jobs. Obviously, we have different caches — here we have a cache for Go packages, and so on — and what's interesting is that we use the multiple-caches feature that was introduced in 13.12. What this feature allows you to do is to really define each atomic cache and then combine them — this one is one cache, and this one is another cache — so you can define multiple caches per job, and that allows us to be very atomic about which jobs need which cache. And just to finish on the cache: the last part of our caching strategy is to update the caches every two hours, because in most cases that is totally fine for a merge request.
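A hedged sketch combining those ideas — pull-only caches on normal jobs, multiple atomic caches per job, and a scheduled job that is the only one allowed to push updated caches (keys, paths and the schedule are illustrative):

```yaml
rspec:
  cache:
    - key: gems
      paths: [vendor/ruby/]
      policy: pull                 # jobs only download the cache, never upload it
    - key: node-modules
      paths: [node_modules/]
      policy: pull                 # a second, independent cache for the same job
  script: bundle exec rspec

update-caches:
  cache:
    - key: gems
      paths: [vendor/ruby/]
      policy: push                 # only this job refreshes the cache
  script: bundle install --path vendor/ruby
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'   # e.g. a pipeline scheduled every two hours
```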
B: How does it all work together? We also have CI config files that take care of the overall configuration — for example, the rules. How does that work with having different code owners on different files, permissions, etc.?
C: Yeah, basically, everyone that has access to the project can edit the rules, but then, as you said, we have specific code owners for specific CI config files.
C: So if I look at the CODEOWNERS file, these are the rules, basically — these ones you can see. For example, the docs-related CI config is owned by my team as well as the Technical Writing team, and likewise for other files here or there. So basically, we divide our CI config into logical files.
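As an illustration of the kind of CODEOWNERS entries being described (the paths and group handles are invented, not GitLab's actual file):

```
# .gitlab/CODEOWNERS — hypothetical entries
/.gitlab/ci/docs.gitlab-ci.yml    @engineering-productivity @technical-writing
/.gitlab/ci/rails.gitlab-ci.yml   @engineering-productivity
```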
C: Yeah — currently, in the main project, we don't really use parent-child pipelines, but we do use downstream pipelines: we trigger pipelines in downstream projects. For instance, that allows us to trigger a pipeline in the omnibus-gitlab project, which takes care of building a package for GitLab based on the changes from the merge request, and then that pipeline itself will trigger another pipeline to run end-to-end tests against the package.
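A hedged sketch of such a multi-project (downstream) trigger — the downstream project path, branch, and variable are illustrative, not the actual configuration:

```yaml
build-omnibus-package:
  variables:
    GITLAB_VERSION: $CI_COMMIT_SHA        # illustrative variable forwarded to the downstream pipeline
  trigger:
    project: my-group/omnibus-packager    # hypothetical downstream project
    branch: master
```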
C: So in those cases where you really need another project to do specific actions, I think it makes sense to trigger downstream pipelines. For parent-child pipelines, I don't think we have a good use case right now in our main project, but I would say you would use them to separate your CI config logically into very specific chunks, and to run specific jobs in a, you know, spread-out pipeline that is still part of the main pipeline.
A: Sounds good. We talked a lot today about making things more efficient; thinking about making them more secure, how do we run end-to-end tests, and when do we do it?
C: Yeah, that's exactly what I was describing, so thanks for the question — I can go into more detail.
C: Basically, this page describes exactly the downstream pipelines that we trigger. It's kind of complex, because we trigger a build in the omnibus-gitlab-mirror project, which builds a GitLab Docker image for the change in the merge request, and then, once done, it itself triggers a pipeline in the gitlab-qa-mirror project, which actually spins up the image from the previous pipeline and runs end-to-end tests against the container. These end-to-end tests are written with an internal framework that we developed, which is called gitlab-qa — very simple, but yeah.
C: We don't do that for every merge request, because it's an involved process — that's actually the pipeline type I was speaking about earlier. That pipeline type takes something like 100-plus minutes to finish, so it's kind of long and involved. So we only run it either manually or on specific changes that warrant running it.
A: Cool, thanks for that — and a last question for me. I know that many customers are interested in how we use the Danger bot. Can you tell us more about it: what it is and why we are using it at GitLab?
C: Sure. So, the Danger bot — we call it the Danger bot because we are actually using an external tool called Danger — and what Danger allows you to do is run checks; you can actually do a lot of things with it, it's very open, and it's up to you to write your own rules. I need to say it's an external tool, so it's not shipped as part of GitLab.
C: We would love that, but currently it's not the case. One of the main rules that we use with Danger is the reviewer roulette, which is described here, and basically the reviewer roulette suggests reviewers and maintainers.
C: In our case, for the GitLab project, reviewers are people that review merge requests, and maintainers also review merge requests but have the right to actually merge — so that's the difference. Basically, the reviewer roulette is something that we developed ourselves; ideally that would be a GitLab feature, and I'm sure it's in the pipe for the Code Review group, but yeah.
C: You'd need to check with them. But yeah, if you want to use Danger, feel free to check out our own documentation, and then it's up to you to develop your own rules. For instance, we have rules to ask for specific labels to be set in some cases, or rules that actually add labels based on the changes. You can do a lot of creative stuff with Danger, so feel free to use it, and hopefully we'll have something similar implemented in GitLab soon.
A: Thanks, I hope this was helpful — thanks for your time.

A: We'll be discussing the CD part next, as well, at GitLab.
B: So, to continue to the CD part of our conversation, we have here with us today, from our Delivery team, Amy, the engineering manager, and Alessio, the staff backend engineer, and they will help us understand how GitLab does continuous delivery and continuous deployment. Now, I know we have two distinct processes.
D: Sure, Hosna, this is a great question. If you don't mind, I will start with GitLab.com, because it's a basic part of how we make the package for the customers on the 22nd of every month. We refer to this as auto-deploy: basically, every day we have a schedule, and roughly every five hours — so five times a day — we create a package.
D: This means that we start by finding the latest green commit on master in the relevant projects, and then we create packages out of that. When the packages are ready, we move on to deployment: we do staging first, then we do canary, then we have a baking time of about one hour, when we monitor the status of the fleet and of the deployment. And then here comes into play the release manager role, which is a rotating role in our team.
D: This roughly gives us four or five deployments each day — it really depends on time zones, incidents and things like that — and it means that an engineer working on the codebase can expect to have their code deployed to GitLab.com within 12 hours to a day, roughly. This is basically the mission of our team. Then, with this process in place, we can talk about how we release the packages every month. The auto-deploy schedule basically happens every working day, and then there is the milestone.
D: When we release the package for the month, the milestone basically starts on the 23rd — the first day after the release — and it ends on the 22nd. In this period of time, every day we deploy packages, up until more or less the 20th of the month, so roughly two days before the release date, when we do — let's call it — a code freeze. Basically, on that date we pick the last deployment that was on GitLab.com, and this is where we bridge the two processes.
D: So from around the 20th of the month we branch off, and from that point on we only fix high-priority and high-severity issues. This gives us a couple of days for last-minute fixes and for preparing the packages, so that we can release to our customers on the 22nd — and in the meantime the 21st, the 22nd, and so on.
D: The most important one is release-tools, which is a custom tooling project that our team owns. Basically, we make use of CI for handling our tagging lifecycle. This means we tend to have as few manual actions as possible, and if there are manual actions, they are based on checklists that are generated through code.
D: Everything I'm talking about is publicly available on GitLab.com, so maybe we can share the link to this project. The part that is not publicly available is the pipelines themselves, because we run them on another instance, just to have isolation between GitLab.com and the infrastructure that we're using for running and deploying GitLab.com — but the code is there.
D: The project itself has several automations: scripts for generating tracking issues for things that have to be done for the monthly release, or for keeping the release manager rotation in sync, so that the team members in Delivery who are supposed to be release managers for a given milestone are properly assigned to issues and are able to receive Slack notifications and things like that. And because this is written in code, we have tests.
D: As I was mentioning before, we have five times during the day when we run the new deploy creation, and this just happens automatically. We only get a notification if things go wrong; if something goes wrong, then we are notified.
D: As I was mentioning before, the only manual action in this step right now is the promotion of a package — the deployment to production requires a manual action — as well as creating the stable branch, which is the thing that happens around the 20th. That is a manual action; it is at the discretion of the release manager to choose when to create the stable branch, because it's also based on the stability of the package.
A: Cool, thanks for that. I also have a question on my side: with us having the typical releases, the security fixes and all that, are there any actions we are taking to manage the complexity of this process?
D: Sure, sure. I was briefly mentioning before that we have several projects, and we also have several GitLab instances that are involved in this. So let's start with the entry points to this process. As a release manager, if we want to start something related to deployment or release, we usually interact with our tooling using ChatOps commands — so we have a set of chat commands.
D: Now, because we have one ChatOps entry point but several projects, we heavily rely on bridge jobs, which is another feature of our CI that basically allows us to trigger a pipeline in another project from a parent pipeline. This is extremely helpful. Just to give you a brief understanding of what we have here: we have ChatOps that interacts with release-tools, for instance, but release-tools is only an orchestrator here.
D: The main goal of release-tools is having the checklists and knowing the procedure for producing a package, but it doesn't deploy it, and it doesn't create the packages and things like that. So in this case, from release-tools we create packages — omnibus packages that our customers install on virtual machines or on bare metal, as well as Kubernetes images, so Helm charts and things like that. Those things happen in other projects that we refer to as packagers, so release-tools triggers pipelines on the packagers.
D: This can go down, boxes and boxes, because basically, when we go to deployment, we have a mixed environment with both virtual machines and Kubernetes, so the Kubernetes deployment as well is in another project. With bridge jobs we are able to basically drill down in the pipeline, so depending on what your task or your day-to-day activity is, you can watch the process at a specific level.

D: Or, as a release manager, you can just watch from the release-tools point of view, where everything is triggered, and drill down into the specific details. Again, the same thing happens for QA: we have another QA project, so we have other pipelines that get triggered that run QA on the environment after deployments. That's basically the idea — bridge jobs give you the ability to trigger something and also wait for the result of the pipeline that you triggered.
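A minimal sketch of a bridge job that also waits for the downstream result, as described here (the project name is hypothetical):

```yaml
tag-package:
  trigger:
    project: my-group/packager     # hypothetical downstream packager project
    strategy: depend               # the bridge job mirrors the downstream pipeline's result
```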
A: Another thing is that we always talk about working in iterations and improving our processes. What metrics do we use when working to improve the release process — such as the DORA four metrics, mean time to merge? Is there anything we can suggest to get those relevant insights about our processes?
E: Yes, absolutely. So, as Alessio mentioned, a lot of our work is around packaging — things being packaged and changes going out to users — so the main metric we track against with all of our work is mean time to production, and we have this in the handbook. Let me just share it with you — it's publicly available — okay, so hopefully you can see this. It is one of our infrastructure department metrics.
E: We've been tracking it for a couple of years now, and what it shows is the mean time that it takes for an MR, once it has been merged, to reach production. There are a couple of interesting things we can see along the timeline here: back in early last year the numbers were pretty high — these are in hours — and then we can see, in May 2020...
E: This is where we made a big switch and moved to daily deployments — GitLab.com was receiving daily deployments — so you can see a huge jump there. Not a trivial change: lots of tooling change, lots of process change, to allow us to coordinate this and make sure these changes are as safe whether we're doing them once a week or once a day. And then, following this, we use this metric within the team on a weekly basis.
E: We do a review of how the current week has been — it's close enough that we can remember "oh, on Tuesday we had this thing that delayed us", or "on Wednesday we didn't get a package until around lunchtime", or whatever; the things are still present enough for us to be able to identify fluctuations in our metric. What became clear between May and August was that one of the biggest delays we were having was around security releases: at the time, when we prepared the self-managed security releases that go out each month, they were blocking our deployments to GitLab.com.
E: So then we started a project and we managed to remove that blocking nature — you can see the drop here from 39.1 down to 24.4. So we use this and iterate: each time we come along, we look back over the last few weeks, the last month or so, and ask what we can do next. Is there something in our pipeline we can make faster?
E: Is there a particular pattern we're seeing? Are there certain types of things that are slowing this down? This is actually excellent timing to record this, because you can see we've just reached a new milestone this month: we've expanded the release manager time zones. Alessio mentioned that we have this manual promotion of the actual deployment that goes to production.
E: A release manager must press that button to say, you know, they're online and they're ready and everything looks good, and just this month we have expanded so that within the APAC region we also have a release manager. So now we have 24-hour coverage from release managers during working days. That has been our big step, but we'll continue to iterate on this.
E: There are lots of things that go into MTTP, so it is a bit of a bundled-up number which tracks what's happening but doesn't necessarily show what's going into that number. MTTP is affected by pipeline duration; it's affected by incidents or other changes taking place on the infrastructure.
E: It's also affected by release manager availability, and there are lots of other things. What we're working on at the moment is distilling this down further, so we have really good visibility of whether on certain days we see things, or whether certain parts of our deployment pipeline perhaps take a lot longer than we'd expect, and we can then plan out the next steps we want to take to actually start breaking this down.
A: Thanks, Amy, this was super helpful — it's incredible how we can correlate our initiatives with the metrics here, and congratulations on these huge improvements. Thank you.
A: Cool. So thanks, everyone, for attending, and apologies for some of the connectivity issues which prevented us from streaming some of the video parts at better quality — it was indeed hard to read, but we'll send the original video in HD after the webinar. We'll now start the live Q&A panel, with Hosna reading some of the questions that are still open and inviting someone on the panel to answer them.
A: So you may want to keep asking questions in the Q&A box, or you can also raise your hand to be invited on stage and verbalize your questions.
A: We still have one open question there, from Adrian, regarding copy-on-write cache. I was trying to find any feature request, issue or roadmap item about copy-on-write cache, but I couldn't find one yet — so, Adrian, please go ahead and open a feature request on GitLab.com.
B: We have a few questions coming in. I will just verbalize it: can a job run tests in parallel?
F: Yeah — then you can unmute yourself and explain what the question is about.
G: Hi guys. So we use the parallel matrix a lot, because we have multiple environments, and a lot of times we come into a situation where we go down different routes based on the different parallel matrix entries.
F: Yeah, I think this is exactly the question that was answered just now — we do have a concept of parallel matrix jobs; this was also presented today in the webinar. I'm just trying to... yes, if you look at the documentation, we do have a parallel keyword in a job that allows you to run multiple instances of the job and pass a set of variables into this matrix, so that the same job runs in parallel with multiple configurations.
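A hedged sketch of the parallel matrix being discussed — one job definition expanded into several parallel copies, each with its own variable combination (the variable names and script are illustrative):

```yaml
deploy:
  script: ./deploy.sh "$PROVIDER" "$STACK"   # hypothetical script reading the matrix variables
  parallel:
    matrix:
      - PROVIDER: aws
        STACK: [app, monitoring]             # expands into aws/app and aws/monitoring copies
      - PROVIDER: gcp
        STACK: [app]
```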
G: Yes, sure. So think about this: sometimes you want to deploy to multiple production environments, but based on a certain scenario you want to deploy to one environment and not to the other — for example, the user is triggering from outside with a different variable that tells it to deploy to one environment and not to another.
D: I can chime in on this if you want. Okay, so we had a similar problem in Delivery: as you were mentioning, we were deploying to different environments, and at a certain point we were exposing this feature from ChatOps, so from an outside job.
D: We ended up with a solution which is basically that we have a parametrised job in our CI, where you can provide, with a variable, the name of the environment you want to deploy to. We have an external variable, which is provided by ChatOps, which tells which environments you want to deploy to, and it's comma-separated.
D: So you can run a regular expression on this and decide if you want to run it in a specific environment or not, and then we have the deployment target, which is another variable that can't be passed from the outside — it is defined in the generated job. Just to try to make it easier to understand:
D: Let's say you want to deploy to staging and canary, but not production. You trigger something with a variable that is something like "desired rollout: staging,canary". Then you have a job that defines how to deploy to any environment and expects an environment variable.
D: Basically, in your .gitlab-ci.yml you have several jobs, one per environment, and they trigger if what you want to deploy contains "staging"; then you override the variable and say "this is a deployment for staging", so that in the end you can just trigger a pipeline and it will run one, two, three, or as many deployments to different environments as you asked for, with a single variable.
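A hedged sketch of the parametrised deployment jobs just described — the variable name, regexes and script are illustrative, not the Delivery team's actual config:

```yaml
# DESIRED_ROLLOUT is supplied from outside (e.g. by ChatOps), comma-separated: "staging,canary"
.deploy:
  script: ./bin/deploy "$DEPLOY_ENVIRONMENT"   # hypothetical deploy script

deploy:staging:
  extends: .deploy
  variables:
    DEPLOY_ENVIRONMENT: staging                # the target is fixed per job, not passed from outside
  rules:
    - if: '$DESIRED_ROLLOUT =~ /staging/'

deploy:canary:
  extends: .deploy
  variables:
    DEPLOY_ENVIRONMENT: canary
  rules:
    - if: '$DESIRED_ROLLOUT =~ /canary/'
```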
G: Okay, so we achieved a similar behavior using !reference, but it would be useful if !reference could also take environment variables inside it, so you can dynamically choose which reference key you are trying to inject. Is that something you guys have also thought about supporting — environment variables in the reference function?
B: Good, thank you. We have a couple more questions coming in.
B: I will just verbalize it: the pipeline analytics are nice, but if you have child pipelines, can you distinguish between the different pipelines in the charts — parent versus child? Are there different visualizations, or a way to access the raw data?
F: Yeah — and I think there's a follow-up message from Devin about whether there are any plans to expand analytics, such as tracking time for specific jobs. So, yes, we understand that sometimes it's not possible to get the full view of the analytics about your pipelines' and jobs' performance. Something that I would recommend: there's a great open-source project called gitlab-ci-pipelines-exporter — I'll type it in the answer and share the link over there.
F: It still potentially won't solve your current problem with the child pipelines, because it doesn't differentiate the pipelines, but it will let you zoom in on specific jobs. It's an exporter that gets all these metrics and presents them in a nice Grafana dashboard. I've seen some customers use it. Of course, it's something we're considering implementing further in GitLab, but already now, before we do that, you can achieve this using that third-party solution.
B: We'll go to the next question: when building containers, we have separate projects to build the base container used by several other projects. Is there an existing mechanism to allow those sub-projects to run a build when the base container project is changed? I know you can use triggers to start the other pipelines, but that means continually having to add new triggers to the base container pipeline. Is there a way to tell the sub-projects to look for changes in the parent project?
F: I'll take this: we do have a concept of pipeline subscriptions, so you can actually do it the other way around, where your child project can subscribe to the parent one. It acts in the same way, but the difference is that instead of configuring the base/parent project to trigger and check each child, you tell each child project to subscribe to successful pipelines in the parent. I'll see if I can find a link on this and share it in the chat as well.
B: Moving on, we have one last question in the chat — a use case: assume I have a build job, for instance "mvn clean install", and I have two tests. At the moment the tests run in sequence, test one and then test two. Can I configure the job, without touching my project at all, so that test one and test two will run in parallel in the job?
B: Yeah — let me know if this doesn't cover it. If you want to elaborate a little on the use case, or if this is not the duplicated-jobs case, just raise your hand and we can just speak here.
B: All right, moving on to the next question: what type of pipelines would you recommend for a project where we would like to trigger different tests and build tasks depending on which files were changed — for example, distinguished by file type or path? Parallel jobs would also be nice to have. Yeah, I think this is what the changes keyword is for: you can check where the changes were made and, based on that, run certain jobs or not run them.
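A minimal sketch of the changes keyword for this use case (the paths and commands are placeholders):

```yaml
backend-tests:
  script: ./run_backend_tests.sh       # placeholder
  rules:
    - changes:
        - "src/**/*.java"              # run only when backend sources change

docs-build:
  script: ./build_docs.sh              # placeholder
  rules:
    - changes:
        - "docs/**/*"
```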
F: If I can chime in — there's something already shared, I think, in the Q&A session earlier today in chat, but I think this is one of the very well-requested features that we are hopefully delivering in 14.3; I think it's now scheduled. It is what we call conditional includes. What this means: right now, of course, we all know that you can use rules to, for example, customize a job to only run when a specific file is changed.

F: What we are adding in the coming releases is the ability to use conditional includes. This means that when you include another external GitLab CI definition, you would soon be able to provide rules upon which that file is included. That would allow you, in your main .gitlab-ci.yml file, to create a number of these conditional includes and then keep separate files where separate logic is defined.
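A hedged sketch of what such conditional includes could look like — at the time of the webinar this was still being delivered, so the exact syntax may differ, and the file names and paths are illustrative:

```yaml
include:
  - local: .gitlab/ci/frontend.gitlab-ci.yml
    rules:
      - changes:
          - "app/assets/**/*"          # only pull in the frontend jobs when frontend files change
  - local: .gitlab/ci/docs.gitlab-ci.yml
    rules:
      - changes:
          - "doc/**/*"
```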
B: Do we have more questions? I see a couple of questions have been answered in the window.
B
We
don't
have
any
more
questions,
we
can
include
your
and
we
will
include
all
the
answers
to
the
questions,
as
well
as
a
better
quality
video
and
in
the
follow
up
to
this
webinar.