From YouTube: Getting Git right - Git Merge 2018
Description
Presented by Andrey Devyatkin, Software Engineer, Self-Employed
About GitMerge
Git Merge is the pre-eminent Git-focused conference: a full-day offering technical content and user case studies, plus a day of workshops for Git users of all levels. Git Merge is dedicated to amplifying new voices in the Git community and to showcasing the most thought-provoking projects from contributors, maintainers and community managers around the world. Find out more at git-merge.com
All right, hello! So it's a little bit of a [unclear], but I think we're going to manage. First, I would like to say a big thank-you to the organizers of the conference. Every year Git Merge is a highlight among conferences, I'm always looking forward to coming here, and this year I was delighted to get the opportunity to speak at Git Merge. So let's give a round of applause to the organizers for making this possible.
My area of knowledge is mostly continuous delivery and automation tooling, working as a systems engineer, or SRE, or continuous delivery or DevOps engineer; people call it differently. I'm also the father of a one-year-old, and I'm a runner and a traveler: I run to travel and travel to run in different countries. We have a plan for today's presentation, and I'm going to talk about how Git changed what we mean by continuous integration.
So, as you can see, this one is a post from Facebook where we actually finished the migration. It took a year and a half to finalize, but we managed to do it without any downtime whatsoever. Some of the project managers didn't even notice. They came over to my desk: "Andrey, we heard there is going to be a migration." Well, it had actually happened two weeks earlier, and they didn't even notice. So we consider that a success. It's not my personal achievement; there was a great team doing that.
But during that migration we observed multiple issues that I then saw appear again and again, first when I worked as a kind of internal consultant within Ericsson, moving from department to department and helping with migrations, and then also outside of Ericsson, where I worked with various companies in Scandinavia, to name a few: Volvo, Atlas Copco, Rune Forge. I was either directly involved in carrying out migrations or advising the teams. With some companies we worked together with GitHub engineers, helping them move not only to Git but also to GitHub Enterprise. I have also trained three hundred plus people live.
Most of that was done at Ericsson, but then we realized that having me for two days, training about 10 to 20 people at a time, is not efficient if you can eventually replace that with a video recording, which we did; otherwise that number could be bigger. So, going back to continuous integration and how Git affected it: this is my summary of the Martin Fowler article; I left a few of its headings out.
I have grouped them into two groups: one is build automation and the other is processes. This article was first written, I believe, in 2000 and then rewritten in 2006. What it says is that you should automate your build for continuous integration, which is kind of obvious: you want a machine that runs your build on a regular, scheduled basis. Then you want to keep it fast.
A
You
want
to
have
it
self
testing,
meaning
that
if
something
fails,
it
fails,
and
you
noticed
you're,
not
just
it
straight
away.
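To make the self-testing idea concrete, here is a minimal sketch of a build script, assuming a make-based project (the targets are illustrative):

    #!/bin/sh
    set -e       # abort on the first failing step
    make build   # compile the project
    make test    # run the test suite on every build; a red test
                 # fails the whole run visibly instead of being ignored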
What's most important, and what I would like to highlight here, is that you want to fix broken builds immediately, and I will explain why I want to highlight that. The rest is self-explanatory: you make it easy for everyone to get the latest executable, so that in case something fails they are able to troubleshoot it, or just reuse something built from the mainline. Plus the process part.
He actually says: maintain a single source repository. That might be a little bit misleading; it's not exactly what Google and other companies do with monorepos. It's about versioning your dependencies: making sure that you don't have sliding dependencies and that you have all your dependencies under version control in one way or another, right? So, going back to "fix broken builds immediately": the statement actually assumes that you can have a broken build, which means that back in the day continuous integration was different from what we see today. Back in the day, it was about running nightly builds.
Basically, the automation at the time didn't really allow you to run more often, or there might have been other constraints. You check the code once it hits the mainline, post factum, and you just discover that there is a problem, but the problem is already there. And we had exactly this issue back at Ericsson. The project was distributed across India, China, Russia, [unclear] and Sweden, basically all over the globe, so the work never stops. And ClearCase had a limitation on the capacity of the view servers; I will not go into the details of what that is.
But basically we could only build in a limited number of slots during the day or during the night. During the day we couldn't run builds ourselves, since that would slow down the work of the developers, who relied on the performance of their view servers. So back in the day we did a daily build, and then I had a colleague called Christian whose duty was to come to work at about 5:00 a.m. so he could check the daily build result and get back to the Chinese and Indian colleagues, since
they were about to log off their workspaces quite soon, so they could have a chance to fix it if something was broken. That way we could have an actual green build, and we would have a test package to test. It could happen that we would be without a green build for weeks, having no possibility to test.
So a green build was an achievement; just getting everything built at the same time was something. Git actually changed the way we integrate. I worked for some time for a company called Praqma, as CEO of Praqma Sweden, which is a branch office, and also as a senior consultant, and I use this picture a lot to explain continuous integration and continuous delivery. I know that the Praqma people don't mind me using the graphic, I even asked, so it's fine, right?
So, going back: what happens when you use Git, or any other distributed version control system, is that you have a developer who works on his computer, and then, in order to share his code, he doesn't just check his change in, as he would in a centralized version control system where the change immediately becomes available on the mainline to other developers. He actually needs to perform the action of pushing it to the golden repository.
Basically, you would push a branch and then open a pull request from that branch, which means that you can have automated systems that pick up your branch and build it before you actually merge it, exactly as was shown in the previous presentation with the style bot. So you give feedback to the developer: you might have unit tests, you might have a build, you might have static analysis.
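A hedged sketch of that flow in plain Git commands (branch and remote names are illustrative):

    git checkout -b fix-login-timeout      # work on an isolated branch
    git commit -am "Fix login timeout"     # commit locally
    git push -u origin fix-login-timeout   # publish the branch
    # then open a pull request from the branch; automation picks it up
    # and runs the build, unit tests and static analysis before the merge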
Whatever your organization's gating activity is, running it pre-merge was a huge improvement back in the day: when we moved from daily builds to pre-merge verification, our green-build rate went from one a week, maybe one in two weeks, to more or less a couple every day. It was huge. I think that was 2011; we used Gerrit for that, and Gerrit has a similar model. We didn't know there was a GitHub Enterprise.
In most cases this works just fine, because the probability of an issue slipping in here is super low. You might get something happening, but most probably, if you have a continuous delivery pipeline, it will catch it downstream anyhow. Still, it is something to keep in mind, especially when you have a conversation about feature branches, because people have a feature branch and say "but I tested my feature on my feature branch", while master is like one month ahead already. So all the testing they did is somewhat irrelevant, because they tested the feature in isolation.
But you cannot deliver the feature by itself; you deliver the product from the mainline, right? So it's up to you to decide, in your organization, whether you will tackle this problem or not. Is it a problem for you or not? You can just say: all right, it's not that big of a risk, we'll just live with it, and this is what we did back at Ericsson. But I actually saw multiple departments who went on solving it in various ways, and one of the ways is a merge queue.
Another way is to basically force developers to always rebase the change before merging. If you have high traffic intensity in your repository, developers will hate you, since there will be someone ahead of you all the time and you have to rebase over and over. If the gating activity takes longer than five minutes, they will hate you even more, but it could be that your quality criteria are super high or the builds take very long.
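A sketch of that policy from the developer's side; the gate script is hypothetical and the names are illustrative:

    git fetch origin
    git rebase origin/master    # replay the change on the latest mainline
    ./run-gate.sh               # hypothetical gate: build, unit tests, lint
    git push origin my-change   # if mainline moved again in the meantime,
                                # you rebase and re-verify once more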
So basically, slipping an issue into the mainline is costly. For instance, say your build, for whatever reason, takes an hour and a half. If something slips into the mainline, you will impact your whole development team with this issue for at least an hour and a half before it is discovered, and then it will take some time to fix. So it might be half of a workday.
In those cases you actually might consider a merge queue, though they are complex. I think the best-known example is Zuul, which people use in the OpenStack project. To give you an idea how it might work: imagine you have GitHub and your pull request, and then you have a bot that listens for a certain comment, say "merge". As soon as you post that comment on your pull request,
the bot will pick up your change and put it into the merge queue, and then an automated system will take the commits one after another, apply each on top of the mainline branch, and run your gating criteria. If it passes, it pushes the result to the mainline, takes the next commit, applies it, takes the next commit, applies it. That ensures a linear history and also fixes the issue of problems slipping into the mainline.
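A minimal sketch of what the queue runner does for each queued change, one at a time; Zuul itself is more sophisticated, and the names and gate script here are illustrative:

    git fetch origin
    git checkout -B queue-check origin/master   # start from current mainline
    git cherry-pick QUEUED_COMMIT               # apply the queued change on top
    ./run-gate.sh                               # hypothetical gating criteria
    git push origin queue-check:master          # advance mainline only on success
    # then repeat with the next commit in the queue; applying one commit
    # at a time is what keeps the history linear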
But again, in most cases it's not worth implementing a merge queue; if you have a lot of bored engineers who love to build something, it might be a task for them. Moving on: that was the continuous integration part, but we also needed a branching strategy. Back in 2011, this was the branching strategy, like THE branching strategy. Today I guess most of you agree that it was overcomplicated: you had long-lived branches, master and develop, you do a lot of merges back and forth, and you have hotfix branches, which are probably the only ones you want to have.
We decided: all right, that's all fine, but we want to start with something simpler. So we will go with a master branch and we will branch as necessary, and then we asked ourselves: why? Why do we want to branch? What is the problem we are resolving by branching? In our case the answer was: a lack of automation. We branch because we cannot verify fast enough that the legacy still works.
That's why, when we wanted to do a release, we would create a release branch, just to make sure that no new changes break anything while we try to verify it. But if we had automated tests giving us confidence that the legacy still works, and they could run fast enough on every commit, then we wouldn't need to stay on the release branches that long, or we might skip some of the feature branches. Plus, if you employ feature toggles for code branching,
then we don't need to do the branching in version control at all. That's another choice you make: where do you branch your code? Is it a branch in a version control system, or do you use some other tool or technique, such as feature toggles, that lets you branch how your code executes? That way you can develop the feature on the mainline and then turn it on when you're ready with your testing.
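A minimal sketch of a feature toggle, assuming the flag is read from the environment (all names are hypothetical):

    # The new code path ships on the mainline but stays dark until the
    # toggle is flipped; no version-control branch is involved.
    if [ "${FEATURE_NEW_CHECKOUT:-off}" = "on" ]; then
        ./new-checkout-flow.sh   # new behaviour, off by default
    else
        ./legacy-checkout.sh     # legacy behaviour keeps running
    fi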
So we came up with the name "cactus branching model", which means that we have a master branch as the primary development branch, and then we branch off to do releases, since in that project we actually had a requirement to be able to maintain multiple releases. There is even a person who came up with the same name independently from us; I don't know, he might have heard about our work.
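In Git terms the model needs very little; a sketch with illustrative version numbers:

    git checkout master           # the single long-lived development branch
    git checkout -b release-1.4   # branch off only at the release point
    git tag v1.4.0                # mark the release candidate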
Anyhow, there is a name for that: the cactus branching model. I think the trickiest part of this branching strategy is that you have to cherry-pick fixes between the release branch and your mainline, and you need to keep track of where the fixes are, making sure that you didn't only fix something on the release branch but actually picked it over to the mainline as well. Plus, you need to analyze whether the fix is still applicable, because development on the mainline may have moved on and that function may no longer exist there.
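A sketch of porting a fix back to the mainline, using cherry-pick's built-in bookkeeping (the hash is illustrative):

    git checkout master
    git cherry-pick -x abc1234   # -x appends "(cherry picked from commit ...)"
                                 # to the message, which helps you track which
                                 # fixes have already reached the mainline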
So that brings a little bit of complexity. A common issue that I saw is when you have a project-oriented organization, where you have one project starting after another. You know what projects do, right? They run late. So you have one project which is late and another development project that is supposed to start, but they share the same branch, so they need to hand the branch over between themselves.
One project has to stop developing on the mainline somehow, another one needs to take over, and then the mess starts. So that's another common issue to keep in mind: if you are moving towards that strategy, you need to know how you're going to handle it. If you are not using projects, if it's like one continuous project, there is no such issue.
The funny thing is that the cactus model is basically exactly the same as what trunk-based development advertises: you have a trunk, you branch off for a release to do fixes, you only do a couple of commits on the release branch, and the release branch has to be at release-candidate quality. And what we noticed, I can add, is that we actually did the same thing: we worked on a trunk, and we worked really hard on optimizing the continuous integration suite, making sure that the trunk is in good shape.
Then we switched to automating things on the release branches, and we noticed that with every release we had fewer and fewer commits on the release branches, because our automation improved: we slipped fewer and fewer issues onto the release branches, meaning we needed to fix fewer things there, and the branches became shorter and shorter. Eventually they were just like tags on the trunk saying that we released here but haven't fixed anything on top of it.
Another thing that we tackled during that migration, and that I have discussed many times during migrations, is the very common question: shall we use submodules? There is no right answer to this that would work in every case. I think you should ask what the problem is that you're trying to solve with submodules. In my personal opinion, the problem people are usually trying to address is a build system that sucks. Basically, a build system should be able to resolve your dependencies. If your build system cannot resolve your dependencies, you start trying to patch it
in some other way, and submodules are one such patch. So imagine that you are using, I don't know, Hugo for generating static websites, and you want to have a certain theme. You can bring it in as a submodule, because Hugo actually has no means of resolving a dependency from a remote repository and cloning it when it generates the site.
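For example, the common way to vendor a Hugo theme is a submodule (URL and path are illustrative):

    git submodule add https://github.com/example/some-theme themes/some-theme
    git commit -m "Add the theme as a submodule"
    # after cloning the site repository somewhere else:
    git submodule update --init --recursive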
By using modern build systems like Gradle or Bazel you avoid that, since they resolve dependencies themselves. Or maybe you have a build system that requires you to store big binary blobs in your repository, or it just might be inefficient because your blobs are too big for some reason. For instance, I worked with multiple projects where they would use code generators, and the source for the code generator is a binary, like XML files that you cannot merge; it's basically a binary that is your source, and you have to work with that.
Then just be clever next time when you're picking a vendor for your tools: you want something with a source format you can actually work with. Your organization, as well as your deliverables, quite often dictates the structure of your repositories. And yeah, there is an ongoing debate about having multiple repositories versus a monorepo. I don't want to be dogmatic, but I think we should look at how this used to be done.
This is how people used to scale continuous delivery, and let me explain what I mean. When the gray arrow finishes, when you have a commit that passes the gating criteria, you merge it to the mainline, and then you would usually run more stuff: deploy to a production-like environment, run functional tests, do manual validation, maybe exploratory testing, and in the end you are supposed to end up with a release candidate. Seen this way, the continuous delivery pipeline is an automated definition of done, or you can also call it
a way of keeping the software entropy at bay: you have software entropy in your project, and a continuous delivery pipeline allows you to continuously produce release candidates. Then, if you like, you can do continuous deployment, which means you deploy your release candidates continuously, but that's a completely different story which is irrelevant right now. So that's all fine, and it's really nice if you can run your continuous delivery pipeline on every single commit, because then the Git metadata contains the email address of the person who committed it.
So, for instance, a functional test failed, and you run the functional tests on every commit. You don't need a trouble report going to your trouble-report-handling team; you can automatically send an email to that developer saying: you know, something broke in a functional test and your commit is involved; it might well be an environment issue, but we would still like you to take a look. And since you run the continuous delivery pipeline on every commit, it might be only one or two hours since the change was made.
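A sketch of such a notification, reading the address straight from the commit metadata; the mail delivery and the FAILED_COMMIT variable are illustrative:

    # %ae prints the author email recorded in the commit
    author=$(git log -1 --format='%ae' "$FAILED_COMMIT")
    echo "A functional test failed and your commit is involved; please take a look." \
        | mail -s "Pipeline failure on $FAILED_COMMIT" "$author"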
This works, but it becomes increasingly complicated when you have a high commit flow. Imagine your car breaks down on the side of an empty road: the drivers who pass by will probably stop and help you. It's the same when you do continuous delivery in a team of five: if you break something in the pipeline, it's fine, everyone goes and fixes it together.
But if you are a team of 200 working on the same repository and you block the continuous delivery pipeline, there's a high chance of an angry person arriving to scream at you; that depends on the culture of your company, but it happens in some places at least. So how do you tackle that? There is actually some science that applies here: queueing theory, applied to queues.
If you apply that to the continuous delivery pipeline, and imagine the commits as customers arriving at the queue and the time in the queue as the time in the delivery pipeline, you can easily figure out how much time you have: what is the time constraint for your continuous delivery pipeline? For instance, if you take an eight-hour workday and you have one commit, it means that you have eight hours to verify it, as simple as that. You have a fixed time budget, and the more commits arrive, the faster your continuous delivery pipeline has to move.
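As a hedged sketch of the arithmetic, treating commits as arrivals to a single queue and the pipeline run as the service time:

    % lambda = commit arrival rate, S = pipeline duration.
    % The pipeline keeps up only while utilization stays below one:
    \rho = \lambda \cdot S < 1
    % One commit per 8-hour day gives lambda = 1/8 per hour, so S may be
    % up to 8 hours; 8 commits a day would force S below 1 hour.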
Also, people do not commit evenly, you know: most probably they will do a push before they go to lunch and a push before they go home, so you might have a couple of spikes during the day. That is something to keep in mind, but this is what will guide you. So far, the common principle of scaling a continuous delivery pipeline was either improving throughput or reducing work in progress, which means that you can slice your repository into components. Back in the 90s this was called component-based software engineering; now hipsters call it microservices, but it is essentially the same thing.
You slice your software into pieces, and then you can have reusable pieces of software that go into different product lines. So it looks like this: you have components, every component has its own repository, and at the end you might produce a binary output; it might be a Docker image, it might be a library, it might be something else that you can assemble into a system. Now, if you have a distributed monolith, where you think you are doing microservices but actually have a distributed monolith, you can only deploy all your containers together as one baseline.
That is what you do in that case. If you deploy independently, those pipelines just keep running all the way to production. So far that was the common way of scaling continuous delivery: slicing things into pieces and making sure that fewer commits come into each repository, so you can do more thorough testing. You can even have multiple product-line pipelines looking like this. That is what came before. Now, I don't have a dogmatic opinion, and I am trying not to have dogmatic opinions about many repositories versus monorepositories,
but I see many organizations trying to embark on this journey, feeling like Google when they are not. A few things that you might consider: there is no off-the-shelf tooling available to implement a monorepo, so you will have some pain. And something to consider if you are a tools vendor: that might be a golden opportunity, because there are many companies willing to pay for it.
There will also be cultural issues, and the discussion about dependency management is quite big in this area, because with tools like Gradle and Bazel you explicitly define your dependencies: you say that my project depends on this library and on that library. But when you have one huge repository, it becomes very easy for a person to say, well, okay, I'll just go and link a header file from over there, and you end up with implicit dependencies that might be hard to spot. That might degrade your architecture going forward. I mean, those are different...