Description
Alessio and Marin discussing the status of CD on GitLab.com
A
Okay Marin, so this is our little game: trying to challenge ourselves by thinking through why we cannot deploy GitLab.com directly from master. So what are the blockers, and I will try to figure out whether we are not doing this simply because it is how we are used to working. So, let's try. Okay, good.
B
A couple of things are worth mentioning there. The first thing that comes to mind is that the amount of commits we have in master is overwhelming in general; even within a single hour you can just go and see how many commits there are. We do not have any sort of confidence that any of the commits that land in master is, on its own, a stable candidate for deployment to any environment, let alone production.
B
You also see that we have very frequent broken-master situations, which means that any commit that lands after the one causing a problem is itself a potential problem, because it might be hiding bigger issues behind the previous failure. So the volume itself already creates a situation where you take a snapshot in time, and by the time you want to roll it out you already have a very big diff queued up behind it. That's one thing that comes to mind.
B
The testing cycle is extremely long, even if you just consider GitLab Rails. To paint a picture here: for a merge request itself, all of the tests in the branch that need to happen run for, I don't know, around 80 minutes, and that is on average. So on average we are around the 80-minute mark; maybe 70, that doesn't really change anything here. That is for a branch. Now consider that every merge request that goes into master is not going to be immediately green, because you will need probably around an hour for each of them to actually pass.
B
When that happens, we also need to create all of the artifacts required for a deployment. So it's not only building the GitLab Rails container image; it is also all of the other components that are directly tied to the commit SHA here. So we will have to check what the Gitaly version is, what the registry version is, and so on, and build all of those from that specific SHA.
B
That is supported at the moment, and it takes anything between half an hour in very good weather and two hours in bad weather. So you already have an addition of time there that is very, very long. Now, once all of that is done, you need to have a system in place that is able to roll those changes out to a non-production cluster and then execute QA tasks.
B
On top of that, let's say we are doing smoke tests and reliability tests; on average they take around half an hour to an hour to complete before you can even progress to the next environment. If we are running the full test suite, it is going to take an hour and a half to two hours to complete. So consider the timelines we are talking about here.
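To make the cumulative effect of these stages concrete, here is a minimal back-of-the-envelope sketch in Python. The stage durations are only the rough figures mentioned in this conversation (roughly 70-80 minutes for the master pipeline, 30-120 minutes for building the artifacts, 30-120 minutes for QA), not measured values.

```python
# Rough end-to-end latency from "commit lands in master" to "candidate is QA-verified",
# using the approximate stage durations mentioned above (in minutes).
stages = {
    "master pipeline (tests green)": (70, 80),
    "build omnibus package and images": (30, 120),
    "roll out to non-production and run QA": (30, 120),
}

best = sum(low for low, _ in stages.values())
worst = sum(high for _, high in stages.values())

for name, (low, high) in stages.items():
    print(f"{name}: {low}-{high} min")
print(f"total: {best}-{worst} min (~{best / 60:.1f}-{worst / 60:.1f} hours), before any human factor")
```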
B
If
you
have,
the
fix
immediately
wait
for
all
of
the
tests
and
all
of
the
pipelines
to
pass.
So
we
are
talking
here
about
just
the
timelines
that
is
fixed
for
our
systems,
to
turnaround
things
and
then
add
a
human
factor
to
it
right
if
the
human
that
needs
to
be
resolving
this
issue
is
not
there
they'll
not.
They
will
take
more
time
right.
If
you
need
to
find
someone
else,
take
over
more
time
to
get
familiar
with
it.
B
So that is number two. Number three is the severity of breakage that can happen in master. Say we have S4s and S3s that we find during all of this process: can we live with those passing further? Yes, in the majority of cases we can live with that. An S3 or S4 is a minor inconvenience, there are workarounds, so in those cases we will be fine. But in cases where we have S2s and S1s, think of pipelines not working, think of project creation not working, the question is the delay between
B
the commit being here, right, then 10, 15, or 20 other commits being added, then a revert here, plus all the other time. Is it an acceptable risk for us to accept that we will roll out 10-15 other commits together with a revert? I don't think so, because in those 10, 15, 20, or a hundred other commits you might have more S1s and S2s that you don't know about, because of the broken thing that happened with the commit here.
A
It's good now, yeah, the door was closing. So there's something about the volume of changes that is interesting to me, and it's the comparison between auto-deploy and deploying from master. My point here is that we have no special guarantee about the point in time we pick for auto-deploy; it is just a point in time on master. So there's no...
B
As a response to what you just said: I fully agree with you, absolutely agree with you there, and this is exactly why, in the document I mentioned to you, I wrote that what we are doing right now, the path we are on right now, is an intermediary step towards getting more discipline, or getting quicker and more frequent snapshots, so to speak. Like I said, instead of creating a branch once a week...
B
...while you recover the situation as quickly as possible, obviously, because you don't want this diff to grow, and then you continue the process as you go further. You fix the underlying problem, you isolate the problem, you don't add any new things that are coming in, and then as soon as you're ready, you pick the next snapshot and run that further. So that's the ultimate goal.
B
From this step, the thing that prevents us from moving on is that we still have manual parts in the process. We cannot have any manual part in the process if we are to do this. If we want to get into this situation, we need to move away from continuous delivery to continuous deployment. Right now we are doing continuous delivery: we are delivering software, because someone ultimately needs to be the one clicking a button, saying:
B
"Can I deploy this to production? Yes, you can. All right, I'm rolling this out." This cannot happen. We need to have a system, and this is the task that your team is working on next, where the system itself checks exceptions, checks the status of all of our monitoring systems, asks whether we are green to go, and if we are, rolls things out automatically and informs people after the fact. Once you have that system in place, how frequently you do this doesn't really matter.
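As a rough illustration of the kind of automated gate being described, a promotion step could look something like the sketch below. The check names, data structures, and functions are hypothetical, chosen only to show the shape of a system that verifies monitoring signals itself and informs after the fact; this is not GitLab's actual tooling.

```python
# Hypothetical sketch: the deployment system checks health signals itself and only
# then promotes, instead of a human clicking a "deploy to production" button.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class HealthCheck:
    name: str
    passed: bool


def collect_checks() -> List[HealthCheck]:
    # In a real system these would query the error tracker, alerting, and dashboards.
    return [
        HealthCheck("no new exceptions since last deploy", True),
        HealthCheck("apdex and error rates within SLO", True),
        HealthCheck("no active S1/S2 incidents", True),
    ]


def promote_if_green(deploy: Callable[[], None]) -> str:
    failed = [c.name for c in collect_checks() if not c.passed]
    if failed:
        # Stop and surface the reasons loudly; a human gets involved only on failure.
        return "blocked: " + ", ".join(failed)
    deploy()  # roll the change out automatically...
    return "promoted; notifying after the fact"  # ...and inform people afterwards


if __name__ == "__main__":
    print(promote_if_green(lambda: None))
```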
A
And this moves us to the second point that we need to challenge, which is pipeline time and build time. I think that just saying it takes a long time... we can get working on this, quality can help, we can reduce it, but that's not the point. The point I'm interested in is the hour spent building a package. Now, the question here is, and I don't know the details of each image...
A
...what is a bit different, yeah, yeah? My question is, I'm not extremely familiar with the details of how we build this, but thinking about it from a higher-level perspective: most of the time a commit is just several Rails files, several Ruby files, and those are the only thing that changed within the package. All the dependencies are built from the same source code, everything else is just the same, just a delta out of, I don't know, 10,000 Ruby files. So are we using a caching build server, or are we just...?
B
It is the unoptimized system that we have in place that is taking so long, and the fact that, even though the Distribution team is doing a great job keeping this under control, no significant effort went into optimizing this further, because when all of these pipelines were built we were aiming for "good enough" for the time. At that time, this turnaround time was outstanding.
B
It took us less time to build a package than to run the unit tests. Now we are changing the topic. The topic now is: how quickly can we deploy? And that is directly influenced by build times. So, to give you a bit of perspective, the omnibus package itself takes around 25 to 30 minutes to build when the cache is warm. What does that mean? It means when only GitLab Rails has changed; every other component that goes into the GitLab omnibus package is not being rebuilt.
B
At that time we, and when I say we I mean Distribution, optimized that process as much as possible. The additional 15 minutes on top of that package build time are spent fetching the cache, expanding the cache, building a new cache bundle, uploading the cache, doing some license checking, and a bunch of other accounting tasks.
B
Is all of that needed for an auto-deploy, or could we avoid it? Well, the answer is somewhere in between. For example, we are doing license checking at the moment the package is built because of the way the systems are built there. It is important for us to abort immediately and very loudly and not deploy any library that has an offending license; we cannot ship things whose licenses we don't agree with, and so we are doing it at the end of the pipeline instead of earlier.
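To make the "abort loudly on an offending license" idea concrete, here is a minimal sketch. The allowlist, dependency list, and function are hypothetical and only illustrate the gate's behaviour; this is not the actual omnibus license-checking tooling.

```python
# Hypothetical sketch of a license gate: fail the build immediately and loudly
# if any bundled dependency carries a license that is not on the allowlist.
ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}


def check_licenses(dependencies: dict) -> None:
    offending = {name: lic for name, lic in dependencies.items() if lic not in ALLOWED_LICENSES}
    if offending:
        # Abort the pipeline; nothing with an offending license gets deployed.
        raise SystemExit(f"offending licenses found, aborting build: {offending}")


# Illustrative input only: the library names and licenses are made up for the example.
check_licenses({"rails": "MIT", "grpc": "Apache-2.0"})
```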
B
No, that's not really true, because you are saying "a lot" and I'm saying "not everything"; that is the big, big difference. You have a couple of projects running this now, and there is an ongoing effort to make it true everywhere. But do you know how many libraries we pull from the internet that we build? It is an unbelievable amount of gems, of various libraries, of dependencies of those libraries, and so on, and those are not centralized in the individual projects; they are centralized in omnibus, where the build is happening.
B
So again, not strictly true, because when a merge request in Rails updates the Gemfile, for example, it might be pulling in a new gem with a new license, and that gem maybe has a native extension, and the native extension needs to be added in omnibus, and that might actually change things. You know, I understand what you're saying, and we should absolutely look into finding a way to optimize that. I'm just also saying that it's not as simple as "all right, we're just going to shift this left into a project and offload it there".
B
That is the package; now let's go to the image. If you go to the image pipeline, we build everything in one pipeline and a lot of things are cached, but the problem is that the pipeline is not really optimized. The team hasn't focused on that; we kept the pipeline under a certain duration that we thought was ok. Now, again, the discussion is changing. So we build, I think we still cache the Ruby image, but it's a sequential chain: you build the base image, then you build Ruby on top of base.
B
Then you build a couple of others on top of that, and then you build the workers on top of those, but there is a lot of room for improvement there. And, first of all, why we are building all of that in one pipeline is a great question. I know the answer to that, but I'm telling you: why are we doing that? Well, the reason we are doing it is that we never went back to optimize it, because it was never important. It was good enough given the turnaround time we had.
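To show why a strictly sequential image chain matters for turnaround, here is a small sketch with made-up durations and dependencies (the image names and minutes are illustrative, not the real pipeline). It compares building everything one after another with the best case where each image starts as soon as its parents are done:

```python
# Illustrative only: hypothetical image build durations (minutes) and parent images.
from functools import lru_cache

durations = {"base": 10, "ruby": 12, "webservice": 15, "workers": 15, "registry": 8}
parents = {
    "base": [],
    "ruby": ["base"],
    "webservice": ["ruby"],
    "workers": ["ruby"],
    "registry": ["base"],
}


@lru_cache(maxsize=None)
def finish(image: str) -> int:
    # Earliest finish time if an image starts as soon as all of its parents are built.
    return durations[image] + max((finish(p) for p in parents[image]), default=0)


sequential = sum(durations.values())               # build images one after another
critical_path = max(finish(i) for i in durations)  # build independent images in parallel
print(f"sequential: {sequential} min, parallelized: {critical_path} min")
```

With these made-up numbers the wall-clock time drops from 60 minutes to 37, which illustrates the kind of headroom being described.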
A
Okay, so it sounds like we could offload some of the checks to earlier; there is room for optimization, but it is not a trivial task. Something that came to my mind is that if we know that building the image takes a long time and that all the license checking also takes time, we could build the package, trigger the next stage, and at the end of the next stage, which is pretty well defined, check back on the status of the original omnibus build. I know...
A
...we would have a delayed signal here, and that's not great, but we can buy some time because we start earlier. But yeah, these are just optimizations, rather a lot of them; we can talk about them, yeah. So, understood the point here. And I think that the severity of breakage is the same problem that we discussed at one point.
A
That one goes away as soon as we can switch to a branch and a version and fix it there, I think. So, in my opinion, that one is not a real problem, because it's the same problem we have to resolve in the first place to be able to deploy quicker, which is reducing the volume of change. If we cannot change that, it will increase instead of decrease, and that way there would be more chance of finding P1s and S1s the further we go down that path, yeah.
B
It's true they are very much related. I consider them separate because they are actually separate problems to resolve. So, for example, if the smoke tests fail, like the GitLab QA tests, something we depend on very heavily in our deployment pipeline, it is a stop-the-world situation and it blocks everything else. So when you have a train, and one train stops, and another train is coming up behind it, you know what kind of mess is going to be created: you're going to have a huge, huge backlog and a lot of cleaning up.
B
I don't know how to put this nicely, but when a window is broken on a train you're not going to stop it; you're just going to keep moving, maybe a bit slower. So this is why I treat severity as a separate thing. If it's an S3 or S4, okay: a broken window, dirty curtains, whatever, move on, you'll live. But if you see fire in one of the cars, you're absolutely going to stop, and that is going to have a huge effect on everything else.
B
It's different if you have a system where you can move the train off the track quickly; that is where we get back to step number one, and then they become very much related. But for me, knowing when the train is on fire is a very important thing, and how quickly we can move that train is a problem on its own, because you're not going to spend time putting out the fire while the train is burning; your time is going to be spent evacuating people and moving the train off the track.
A
The last one, or the third one, let's go through this. Okay, so I think the pipeline one is a big, big, big one. The others are more of a process problem, so there are consequences, but to me they are not really related to the tooling. We have to update our tooling, but it's more about making sure that every bit of documentation is updated and everyone knows the process, so the effort goes into different areas, and it involves people more than what we have at the moment. Yeah.
B
To be very effective, it's going to take a lot more focus. Iteration is going to take us a long way, so we will be able to do a lot of these things, but with focus it's definitely going to take less time than what we are doing right now, which is, well, unfocused, so to speak. We are focusing on other things rather than this; a focused effort might be better, yeah.
A
For this we have to build a lot of context in our minds, because you have to think about the process from the beginning to the end. And every time you go off to work on some fire on the train, probably, you kind of destroy all of this context, and then you have to rebuild it again, and then, yeah.
B
So I'm going to use an example here. When we set out to automate the deployment process on GitLab.com, how auto-deploy deploys things, it did not even cross my mind that we would have to go so deep into how deployment has been socialized with everyone else. Remember the type of problems this exposed to the rest of engineering, and how it affected productivity? No one knew where things were or when they were deployed, because it was, you know...