From YouTube: GitLab 11.0 Release Retrospective
Description
Follow along in our doc: https://docs.google.com/document/d/1nEkM_7Dj4bT21GJy0Ut3By76FZqCfLBmFQNVThmW2TY/edit#
A:
Okay, hello everyone — welcome to the GitLab 11.0 release retrospective. The first item, from the previous retrospective's improvement tasks, is mine. On 10.8 we noticed that a really large number of merge requests were being merged very close to the feature-freeze day, causing our RCs to have a huge number of commits. These RCs were normally buggy, and it was really hard for release managers to find the original problem because each RC contained so many commits. To deal with that, Filipa and I decided to track all merge requests for 11.0 that were merged between the fifth and the seventh, to understand what kind of merge requests are being merged at the last moment, and the data is really quite interesting. We noticed that 30% of the merge requests were merged on those days for CE, and 26% on those dates for EE, and surprisingly they were all types of merge requests, not just regressions and bug deliverables. Also, for 11.0 we had a lot of frontend and Bootstrap merge requests as well. We have a lot more interesting data on the issue — please jump in, check it out, and leave comments. With that, I'm going to pass this to Robert.
B:
Thank you, Mayra. So, in fact, a lot of the stuff we're doing is aimed at easing our entire release process through automation. The first thing we shipped this month is that we can now tag a release via ChatOps — Mayra actually had the honor of doing the first one. It had a bug that we discovered, but we fixed that, and all the subsequent taggings went pretty well, so I'm really happy with that. This is a minor thing — it wasn't a major bottleneck for our release process — but it did actually shave a few minutes off each tagging, so we'll take whatever we can get. We also automated the release managers' permissions, so it's easier for onboarding and offboarding release managers, and there's definitely a rough roadmap of where we're headed: the next thing is going to be releasing packages through the standard GitLab pipeline, and we're heading towards automated deploys to staging to speed that whole process up.
C:
Yeah, the ChatOps stuff is useful, especially when you're in a different country where the connection is unreliable — I think it'll pay off in places like when we were in Crete. So, last month we had a feature that we ported from EE to CE, and I caught a lot of different weird edge cases, because the migrations have to be just right. So we've added a new test that basically runs the previous EE version and then upgrades to the next CE version, so it should potentially catch some of these problems in the future.
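The kind of test described here — run the previous version's migrations, then the new version's, and compare against a fresh install — can be sketched in miniature as follows. This is a toy model: the version names, migrations, and schema-as-dict representation are all invented for illustration, not GitLab's actual test code.

```python
# Toy model of an upgrade-path test: a "schema" is a dict of tables,
# and a migration is a function that mutates it. We check that
# upgrading (old migrations, then new ones) produces the same schema
# as a fresh install that runs everything in order.

def apply(schema, migrations):
    for migration in migrations:
        migration(schema)
    return schema

# Hypothetical migrations for two consecutive versions.
PREVIOUS_VERSION = [lambda s: s.update({"epics": ["id", "title"]})]
NEXT_VERSION = [lambda s: s.update({"terms": ["id", "body"]})]

def fresh_install():
    return apply({}, PREVIOUS_VERSION + NEXT_VERSION)

def upgraded_install():
    schema = apply({}, PREVIOUS_VERSION)   # existing old instance
    return apply(schema, NEXT_VERSION)     # upgrade it in place

assert fresh_install() == upgraded_install(), "upgrade path diverged"
```

The interesting failures in practice are exactly the ones this shape catches: a migration that only works when run against a fresh database, or one that assumes a column the previous version never created.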
D:
So there were a few good points this release. The first RC to be deployed to production was RC5, and it went out on the seventh. We made eight deploys to production with the 11.0 version, and there's a link to a comment in the agenda where you can see literally everything that happened, from tagging to staging, canary, and production deploys, and why some of the RCs weren't ready for production. We deployed 11.0 to production on the 21st, so one day before the release, and we tagged 11.0.1 as a critical security release on the 22nd — that was a Friday — and released it on the 25th, the next Monday. I'm covering for James because he couldn't attend; in total we tagged 14 RCs for this release.
D:
We also made an addition to the documentation on exception requests, because there were a lot of questions about what to expect in the exception-request process, and we hope it's now clearer to everyone. And, well, Robert already commented on this, but a great improvement was the ChatOps command. Basically, for you to understand what this means: before we had this command to tag the release in Slack, we would have to run the command locally on our own machines, wait a few minutes, and pray that we wouldn't lose the connection or anything else, and then we had to manually announce it on a couple of Slack channels. With this change, everything is automatic — we just run the command and don't need to do anything else, which is great. Mayra, over to you. Thanks.
A:
Also, the communication with the GCP team was really good — we managed to work the release around that, so thank you for being really open with us. The trainees also did a really good job; they were really enthusiastic and eager to help. And I am passing this to Victor now.
E:
So yeah, I just wanted to mention, on the positive side, that we shipped three really big features this release [feature names inaudible]. These are all pretty big features in terms of customer impact and user impact — so even though some of them might have been easier or smaller in scope from a development and testing perspective, they were nonetheless pretty big from a customer-facing perspective.
D:
There were also a few things that were less good in this release. The first one was the Bootstrap 4 update, which had a big impact on the release — it made for a very stressful first week. This update, for the ones that don't know, was merged to master with some known bugs. There was some miscommunication: the release managers were not informed that the UX team was OK with some of the regressions. The regressions were very hard for the release managers to track — there wasn't a prioritization label, the way labels were being used to reflect priority was upside down, and it was very hard for us to understand whether we had to wait longer or whether we really had to move faster. This delayed the first five RCs, and in total there were more than two hundred regressions. We had to cherry-pick a lot of merge requests, which meant a lot of time.
D:
And I can read it — so, basically, Tim asks: what is the suggestion in such framework updates? Should we couple all of the bugs in one merge request and merge them again? My answer to that is that the best thing would have been not to merge to master with known bugs. We could have worked in a branch off master, kept it up to date with master daily, deployed that branch to an environment where it would be possible to test, and only merged that branch back to master again when it was ready. Maybe it would have taken a bit more development time, like a week or two, but it would have been a lot less stressful on the release side, and it would probably also have avoided the situation where the fixes we're shipping are causing other bugs — the rush into fixing two hundred regressions is breaking a lot of stuff that wasn't broken before. I forgot to write that down, but I will. Tim, there's a new point that says "comments — people not sure when to add to it". Yeah.
G:
Thanks, Mayra. So, something that went wrong this release is that the amount of merge requests in the release overwhelmed the QA task. Just for reference, it's about 1.6 times the amount we saw in 10.8: in 11.0 we had 650-plus, where before it was 410-ish. Initially we used the git-log-based command, which has proven that it will no longer scale with us — we actually got 2,300 commits for one changelog alone. So we then fell back to the work-in-progress automated QA task generation that we were working on, and that toned it down to 393 items, but still, this will not scale. So we are moving back to one QA RC task per release going forward, and we're also adding a feature-assurance task.
G:
That task is going to be overwatching all the RC QA tasks going forward, and the product managers will be working inside these feature-assurance tasks and putting the release-document updates there. The task template is also there. As for the improvements we did: we have now fully automated the creation of the RC QA task, with better readability — it's based on the merge-request information, with tags and labels, and not just the commits. This is fully done; the sample content is there and the documentation is there, and we're also looking to link this to the ChatOps commands in the release process, the same way tagging is done. In addition, we're also tracking requirements on what's needed — the missing setup on staging that people need in order to perform validation tasks — so we're actively gathering information there. Over to you for the next item — thanks.
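The improvement described above — generating the QA task from merge-request titles and labels instead of raw commits — might look something like this sketch. The labels, titles, and Markdown layout here are invented for illustration; this is not the actual release tooling.

```python
from collections import defaultdict

def build_qa_task(merge_requests):
    """Group merge requests by label into a Markdown checklist,
    one section per team label instead of one line per commit."""
    sections = defaultdict(list)
    for mr in merge_requests:
        for label in mr["labels"] or ["unlabeled"]:
            sections[label].append(mr["title"])
    lines = []
    for label in sorted(sections):
        lines.append(f"## {label}")
        lines += [f"- [ ] {title}" for title in sections[label]]
    return "\n".join(lines)

# Hypothetical merge-request data.
mrs = [
    {"title": "Fix dropdown alignment", "labels": ["frontend"]},
    {"title": "Speed up pipeline query", "labels": ["backend"]},
    {"title": "Update button styles", "labels": ["frontend"]},
]
print(build_qa_task(mrs))
```

Grouping by label rather than listing commits is what gets the count down from thousands of commit lines to a few hundred reviewable items, which matches the 2,300-commits-versus-393-items numbers mentioned above.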
H:
I'm standing in for Sean here for this item. He wanted to remind everybody to please make sure to keep our tooling and documentation about migrations up to date — this is one good way that we can improve. So far we've been leaning a lot on the database team, who are awesome but not very large compared to the rest of the engineering organization. In the agenda here we have a link to a merge request as an example case.
I:
Before we move on — we're looking for action items and champions coming out of this release. If we could make this one very actionable and put it on a specific person or group of people, is there a way to do that? Could this be the release managers' responsibility, to create these issues, or is it truly something that needs to be spread across the whole organization, and therefore harder to track and effect change on, I think?
I:
Well, maybe — what I'm suggesting is maybe there's something where we put it on the release managers to, you know, go around to the various engineering managers and pull the information — "do you need to update your documentation?" — and make sure that it actually happens, as opposed to just a general call to action.
F:
I hope you can hear me despite the rain outside. So, what have we learned from the Bootstrap upgrade? First of all, what we needed to upgrade for: we want to bring in the next version of the component library that we are using, and for that library we need Bootstrap 4 — and we also wanted to bring our CSS up to par and basically clean it out. What have we learned overall? We needed to touch over 700 different views; all the CSS classes changed, and a lot of elements changed.
F:
Even the HTML setup changed, so it was simply a lot of changes. We should definitely get to having an easier review-app setup and make it clear to everyone. As we had a joint testing team with UX, we should have had a full instance running that branch for quite some time, and really made sure that each different part of the application was tested by someone — especially as we had blocking issues, for example on the terms-and-conditions page. That was something that was not tested because it was not enabled in the admin area; or, at the end of the GitHub import, we had white font on a white background. So this is also something I think we should get to at some point, if we need such big overall changes to all the HAML pages: we should really have documentation like checklists — okay, these are the major use cases, at least the admin ones, that have to be tested, test by test.
F:
The thing that we are actually pushing for will also help us in this regard: the UI component library will have visual regression testing for each component and its use cases. So in the future we should be able to see such changes directly in the visual regression tests and really have tests — because currently all those changes were untested, as we simply don't have visual regression tests over the whole application. And the other big thing is really that, at the moment, we have a lot of very specific legacy CSS in the stylesheets that targets specific pages. So there is some CSS that pushes an element five pixels in one direction, and then another CSS rule comes in and positions it five pixels in the other direction, and so on — and as soon as you update those pages, those tweaks either get lost or end up, for example, moving the button somewhere different. So we definitely need to reduce that, and that is really the target of our UI component library.
I:
Thanks. So yeah, this is an interesting retrospective — there's a lot that went well, and there's a lot of good improvements here. I'm not actually seeing the very specific things that we normally choose for our champions and action items. I mean, Mek made some suggestions about the QA tasks, but it looks like those have already been implemented. You know, Bootstrap 4 is something where we want to make sure we do better the next time it comes around, but obviously that doesn't happen very often. We had Sean's suggestion about, you know, a reminder, but there isn't one clear thing, or a set of clear things, which is sort of good news, I guess. I would just ask a catch-all question: is there anyone who would want to nominate something — particularly the release managers — is there anything that we've missed, where we would say, "we need this person to do this specific thing before the next release to tangibly improve things"? Or are we satisfied saying this was overall, you know, pretty good? It obviously required a lot of effort, particularly from the release managers, but the results were definitely good. So, Filipa, Mayra — anything you would really want to see happen before you, you know, rotate off? Yes?
D:
I have two suggestions. The first one is to use the data we tracked for the merge requests that were merged between the 5th and the 7th. We didn't really work out how we could turn that into action items, and it's not worth knowing that the biggest percentage of our merge requests is sandwiched between the fifth and the seventh if we don't do anything with that later.
I:
Robert and Marin are investigating switching to weekly releases — it's more of a Q3 initiative, and it wasn't a specific action item, but I think that would help with this: if we can decompose the feature freeze, we would have less of this big moment. Unless there's something else you're talking about — like something specific we can put on managers or specific people to avoid this, you know, spread out the work, smooth out the curve — I'm not sure what we can do.
A:
I think another thing that we can improve is the use of the "Pick into 11.0" label, because for the different releases — and for the patch release — Filipa and I had to review every merge request to see whether it actually was a regression or actually a bug, so we could pick it or not, and that was extremely time-consuming for us.
I:
You know, at any time in the release — it obviously doesn't have to be the fifth and the seventh; there are understandable reasons why that happens, because we are on a cadence — but try to make the work flow smoothly over the month, and then developers, you know, try to use that picking tool, or that sort of label, to make it easier on the release managers and avoid manual work. But again, I'm not quite seeing the one big deliverable that we would put on a champion.
C:
Just one thing — because if we look at the two big things that happened this release: there's the Bootstrap 4 update, that's the framework, and we can talk about that; but if you ignore that one and talk about the RC5 deploy that Filipa pointed out, the reason that happened is that we were running GitLab.com without all the post-deploy migrations finished. I'm wondering if we need to champion: hey, let's run GitLab without all these migrations finished and make sure it functions right.
C:
So I can imagine a QA scenario where we run everything except for the post-deploy migrations, run all the QA, make sure it works, and then run the post-deploy migrations and make sure things continue to work. I could see that as one action item that came out of this, because I think that was a big one — we forget, because it was so long ago, but I think that was the first RC we deployed that really kind of blew up on us.
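The scenario Stan describes could be outlined as a harness along these lines — a toy model with made-up migration and check functions, just to show the shape of the test, not GitLab's actual QA suite:

```python
# Shape of the proposed QA scenario: verify the app both in the
# intermediate state (regular migrations done, post-deploy pending)
# and in the final state (post-deploy migrations finished).

def run(migrations, state):
    for migration in migrations:
        migration(state)
    return state

# Hypothetical stand-ins for real migrations.
def regular_add_column(state):
    state["column_added"] = True

def post_deploy_backfill(state):
    state["backfilled"] = True

def core_functions_work(state):
    # Stand-in for "can users still push and pull?" style checks.
    return state.get("column_added", False)

state = {}
run([regular_add_column], state)
# The key addition: QA the intermediate state, not just the end state.
assert core_functions_work(state), "broken before post-deploy migrations"
run([post_deploy_backfill], state)
assert core_functions_work(state), "broken after post-deploy migrations"
```

The point of the middle assertion is exactly the "black hole" discussed here: production spends real time in that intermediate state, so the checks have to run there too, not only after everything has finished.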
C:
We test staging when things are done — when things are in the happy place — but I'm talking about the fact that, in a lot of situations, we've run GitLab.com without having finished the post-deploy migrations, and we're running these batched migrations, so it's in a kind of different state that we haven't necessarily tested against.
C:
We've gotten into a state where we ran the first set of migrations but hadn't finished all the secondary migrations, so we lost Git push and pull during that time, until we finished the post-deploy migrations — and then everything started working again. So it's like this black hole that we never usually test against, because dev never runs in this state, and staging is usually done by then. It's just a big blind spot right now. Yes.
I:
All right, thanks — so I'll give that to you, and then, if you want to delegate it, just let us know and update the doc. Thanks, Stan — that's exactly the sort of thing we were looking for. Apologies it took a little while to get there, but it looks like we do have a highly leveraged test that we can put on one person to deliver for us.