From YouTube: GitLab 10.8 Release Retrospective
Description
Follow along in our doc: https://docs.google.com/document/d/1nEkM_7Dj4bT21GJy0Ut3By76FZqCfLBmFQNVThmW2TY/edit
A
Hello everyone, welcome to today's GitLab 10.8 release retrospective. The first item, carried over from the previous retrospective's improvement tasks, is mine. Last time we had trouble figuring out who the active maintainer of GitLab Pages was, so we fixed the situation by confirming who was already maintaining it and by assigning Alicia as a maintainer of GitLab Pages as well. So, Tommy, over to you.
B
I think Tommy's out, so I'll take Tommy's item here. We have added the exception request template that requires approval from multiple engineering leads, and that's done. You can see the improvement in this release, which we'll touch on later: we accepted two out of a total of seven exception requests. That's the improvement we made in this release. Over to you, Robert, thanks.
C
I took the production deploy down from 150 minutes to 45, which is a huge quality-of-life improvement for release managers when you're the one that has to sit and babysit it; that's active thinking, not just watching it go. There are more improvements coming this cycle, mostly around lowering the barrier to performing the release and driving a lot of it from Slack. There's more of that in the works, and that's what we've got going on this month.
D
A few monitoring improvements. The big ones that happened this past month were around delayed email notifications and delayed CI jobs, and Thomas added alerts for those. The other blind spot we've had for a while is PgBouncer: everything goes through PgBouncer, and we had no monitoring around it. We've shipped the PgBouncer Prometheus exporter in 11.0. We haven't deployed it to GitLab.com yet, but we did for the GCP production environment, so you can see the dashboard there.
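For context on what that monitoring looks like: a Prometheus exporter exposes plain-text metrics over HTTP, and the dashboards and alerts are built on top of those samples. Below is a minimal sketch in Python of reading one such metric by hand; the exporter address and the metric name pgbouncer_up are illustrative assumptions, not the exact configuration discussed here.

    # Minimal sketch: read one gauge from a Prometheus exporter's /metrics
    # endpoint. The address and the metric name `pgbouncer_up` are assumed
    # placeholders, not the exact exporter setup described above.
    from urllib.request import urlopen

    EXPORTER_URL = "http://localhost:9127/metrics"  # assumed exporter address

    def read_metric(text: str, name: str):
        """Return the first sample value for `name` in Prometheus text format."""
        for line in text.splitlines():
            if line.startswith(name + " ") or line.startswith(name + "{"):
                return float(line.rsplit(" ", 1)[-1])
        return None

    with urlopen(EXPORTER_URL, timeout=5) as resp:
        body = resp.read().decode()

    # 1.0 would mean the exporter can reach PgBouncer; None means no such metric.
    print("pgbouncer_up =", read_metric(body, "pgbouncer_up"))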
B
Thank you, Stan. Another follow-up item from the last improvement task: we've now made sure that all the QA items for a release are tracked in a single place, so we have one bird's-eye view of everything. This is now in effect going forward, for the regular monthly release and also for the patch releases. The issue is there, and you can click on the 10.8 QA task, which has everything contained in one issue.
E
The timing for the deployments improved drastically. Deploying to production now only takes 50 minutes, and to staging 20 minutes, more or less, and that is great. I cannot emphasize that enough, because prior to that, deploying to production was taking two and a half hours, sometimes six hours, sometimes eight hours, and staging was taking one hour. Having shorter deployments really helped us do our job faster, and thank you, James Lopez, because you made that improvement.
F
Sorry, I had noted it down here. RC1 was the first release candidate cut after the feature freeze on the 7th, and it added more than 1,000 commits in four working days, which caused a few problems and caused the deploy to be a little bit longer than it should be. As for what went well this month: we didn't need any extra hours to deploy anything.
F
I think, for me and Mayra, we share the same feeling: working with James, Robert, and Marin, and knowing they had our backs all the time, was very helpful. Also, Victor and Bob did a very good job making clear exactly what needed to be done regarding GDPR, and they were very aware of everything in the release process. The same goes for Nick: the communication between the GCP migration and the release managers was very good.
G
Thank you. 10.8 was a great milestone for us as the Security Products team: we are now able to ship some back-end code. This is pretty new for us; since we joined GitLab three months ago we have been doing mostly Docker-embedded images, so that's a great step for us. And speaking of the Docker images, we have refactored the SAST engine, which is now a lot faster and a lot more efficient, because we're using Docker images and everything is now completely isolated.
H
Consequently, we had to involve a lot of folks operationally: the production team, release managers, the marketing team to do the marketing, and of course legal, design, product, and so forth. Not an excuse, but we have very little experience doing these types of things. For example, as a product manager myself, when I'm involved in a feature, I don't usually have to consider what happens after the feature is shipped and people start using it.
H
Do we need to change anything in real time? We have very little experience in this area, and that really showed. In particular, the obvious case that a lot of people saw: on the 22nd we put a banner up, and that was a terrible decision. We had very few eyes on it beforehand, it was an unforced error, and had UX looked at that banner message, they would probably have said it was a bad idea. So that brings me to my final comment here.
H
It's hard to improve this type of thing going forward, since this is a one-time thing; GDPR is, hopefully, a once-in-a-lifetime event. Also, having to do operations-type work in real time is not something we typically do at GitLab. Maybe we will in the future, but I don't see it on the horizon. So unfortunately, for me, I don't see a lot of opportunities for improvement here. I think we had some points noted here; probably Nick added them.
H
Nick, I'm just reading a point here. It says: can we set up better monitoring of compliance changes? Yeah, that's a great question. I gave this feedback to Job as well. GDPR was put into law, I believe, two years ago; I don't know what the exact legal term is, but it was passed two years ago, and companies had two years to get ready.
J
The point is just that legal changes happen from time to time. GDPR is a once-in-a-lifetime thing, but next year another law affecting a different area of GitLab could come into effect, and it would be good if we could pick up on that sooner. Presumably there are feeds we can consume for these kinds of changes? Right, right.

Am I on? Can you hear me? Yes? Okay. Well, just to reply to Nick, who gave me this feedback as well: I agree, we should have known about this earlier. I wasn't aware of it much earlier, so my bad as well. But other than that, I thought it went really well. The banner that was up, I really don't think it was a big issue; it was confusing, so we turned it off, which is very much the takeaway. And Stan linked an issue that we still haven't fixed, so we should get to that. Other than that, I would say that, yes, we should have more time next time, but overall, given the time that we had this time, I think we did pretty well.
B
Thank you, Victor. I want to touch on the instability that we had from the RC8 timeframe onwards. The words "onwards" and "timeframe" scare me a lot, because the team doesn't really know what went wrong, and I think we still have an open issue that we are still trying to resolve. We made multiple attempts as a team to fix it; thank you, everybody involved. And Philippe, you had a point there.
D
I don't have anything else to add, other than, you know, we looked really hard at the changes that landed during RC7 and RC8, and they're really, really small changes. It's not clear to me; it may just have been coincidental that we had some events, or something hitting our system in a different way. But I agree with the need to do incremental deployments. It's beneficial in general, and I think it's worth moving in that direction, so I'm happy to see that the RCs are at least smaller in this cycle.
B
That's why I think the team should strive for this and keep the RCs minimal for 11.0, because we also have the GCP migration happening this release, and it's one of the most important initiatives for us right now. Thank you. So if nobody has anything else, moving on to Philippe. Yes.
G
Thank you. We have learned the hard way to plan and to schedule our issues and features, especially when we have to synchronize with another team like the UX and QA teams. Meaning: if we're waiting for the UX to be ready, that could take one or two weeks, and if we plan one week of QA, that means we literally have one to two weeks to develop new features. That's something pretty new for us, because we were used to working as an isolated team, so now we're planning accordingly.
G
Another issue we had, and it's still the case: the RCs are deployed to production before the code freeze with some code that is not from the code-freeze day. Meaning, on the 7th of the month it could include some code from the 2nd. And to answer Marin, who is not here: the issue is that we are not only shipping code directly inside GitLab; we're shipping code as well as Docker images.
G
So we have to synchronize these new Docker images that we're pushing with the code that is pushed with the merge request, or anything related to GitLab directly, and it's really hard for us to synchronize that. Sometimes it can lead to having SAST or container scanning completely broken for some days, because we didn't realize that an RC was deployed, and it's really hard to anticipate. So we need to figure out a way to fix this.
G
We also lost a lot of time waiting for CI this month, and I just wanted to raise that with you. Also, regarding the planning: we have underestimated our deliverables, partly because we need to synchronize with other teams like the UX and QA teams, and now we're planning way ahead of that. During the kickoff we are planning the whole iteration at once; that means no one is able to take holidays or free days during the iteration if we don't schedule them the time before. So we are planning a lot more in advance, and it goes further than that, because we are also planning weeks one iteration ahead. And Nick, you asked: can we factor in better planning and account for more time in the schedule? What we're actually doing is reducing the number of features that can be delivered in a milestone, so that we have time to handle bugs and everything, and also to make sure that we can synchronize with the other teams and with the UX team, for example.
K
Right, thank you, Philippe. So on May 8th we had a particularly painful CE-to-EE merge, and the RC4 merge for 10.8 was affected by that issue as well. On the 7th or the 6th, or whenever it was, we merged two features into CE that were actually features backported from EE, and on CE everything was green.
K
To fix that issue, the CE-to-EE merge request that was open stayed open for, I think, a day and a half, almost two days, while usually these get merged within six hours. So that didn't go great. To fix this we've done one thing already: we've added a new CI build to both CE and EE that will automatically verify that a CE-to-EE merge is as conflict-free as possible. Stan suggests doing the opposite, EE to CE, as well, which is a great idea, and I'm guessing we're going to discuss that separately.
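As a rough illustration of what a job like that can do, here is a sketch in Python that attempts a trial merge between the current checkout and the other edition's default branch and fails the build if Git reports conflicts. The remote URL and branch name are assumptions, and the actual CI job differed in its details.

    # Sketch of a merge-cleanliness CI check of the kind described above.
    # The EE remote URL and the branch name are illustrative assumptions.
    import subprocess
    import sys

    EE_REMOTE = "https://gitlab.com/gitlab-org/gitlab-ee.git"  # assumed

    def git(*args: str) -> subprocess.CompletedProcess:
        return subprocess.run(["git", *args], capture_output=True, text=True)

    git("fetch", EE_REMOTE, "master")
    # Attempt the merge without committing; conflicts make git exit non-zero.
    merge = git("merge", "--no-commit", "--no-ff", "FETCH_HEAD")
    git("merge", "--abort")  # leave the work tree clean either way

    if merge.returncode != 0:
        print("Current branch does not merge cleanly with EE master:")
        print(merge.stdout or merge.stderr)
        sys.exit(1)
    print("Merge is clean.")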
K
There are, what's the word I'm looking for, non-intuitive aspects to it, where you really have to think about people going from CE to EE and back to CE. It's a huge headache, and we're going to do better in the future. Mac asked whether GitLab QA should cover testing the downgrade path; I think that would be useful. We can test it on the CE side, but of course…
G
Yes, going back to the planning and scheduling issues that we had: we decided to schedule UX work as early as possible, meaning we are doing it one release earlier. For example, for the security dashboard that we're going to ship, we are doing the UX right now, so that the back-end and front-end teams will be able to start at the beginning of the iteration. It's the only way for us to be able to plan and schedule everything for the iteration.
G
Otherwise we would need to wait for the UX deliverable to be ready, and we can't just wait on issues like that; we have too many features and issues to fix. So that's the new way of working for us, and it's working well. As Joan pointed out, this is now what we call the design artifact: when we need something from UX, the deliverable in one iteration is a design artifact.
F
Related to what was just said: there was a discussion in this retrospective that later evolved into a Slack discussion. It goes along the same lines, but I might have to dig up the link to it. The gist is that we are kind of working in a waterfall-style process instead of an iterative one between everyone: UX, product, and engineering.
F
When an issue is scheduled to be delivered, there is a feeling of lacking ownership when we ship broken things, and we are also not quick enough to react after we have shipped broken things. Another point that was highlighted in those discussions is that often the issue scope is so small that both the UX and engineering teams have a very hard time understanding exactly what they should be doing.
F
We can move on to sharing what we learned from the release process. Relating to what we said earlier, we learned a few things about release management. The first is that being release manager requires a lot of planning and a lot of knowledge about our processes. We also learned that not all merge requests labeled "Pick into Stable" are legit regressions.
F
Only three of nine exception requests were approved, and there seems to be a misunderstanding about "Pick into Stable": bugs versus regressions. A regression from 10.6 is no longer a regression in 10.8; it's a bug, and it should not be labeled "Pick into Stable".
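That triage rule is simple to state: a fix only qualifies for the current stable branch if the defect was introduced in the release being shipped. A minimal sketch in Python, assuming versions are plain major.minor strings; this is an illustration of the rule, not an actual GitLab tool:

    def is_regression(introduced_in: str, current_release: str) -> bool:
        """Triage rule sketch: a defect counts as a regression for the
        current stable branch only if it first appeared in that release.
        Versions are assumed to be plain "major.minor" strings."""
        return introduced_in == current_release

    # A bug introduced in 10.6 does not qualify for "Pick into Stable" in 10.8:
    assert not is_regression("10.6", "10.8")
    # A defect introduced in 10.8 itself does:
    assert is_regression("10.8", "10.8")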
We also learned that being release manager requires you to say no a lot, that sometimes you'll have to fight for it and lose a few hours doing so, and that the security releases are very hard to manage.
F
We have a few points on what can still be improved. The security release is the first on this list: it's very hard to be aware of everything that needs to go into the security release, it's difficult to understand which team is responsible for each part of the process, and we feel that there is a lack of coordination and communication between the release managers and the security team. Often we just fall back to Marin, and Marin made a point about this.
F
I'm not sure if you're around, Marin; probably not. Marin's point is that security releases are also difficult because they are the opposite of the normal process: everything is not open, it's under security embargo. We also think that QA ownership and priority need to be improved. Although the QA task is a lot better than it used to be, things are not tested quickly enough.
F
We often deploy to production before the QA task is closed, which kind of defeats the purpose of the QA task. And we also think that if we didn't batch everything on the 6th and 7th, we could make the deploys a lot faster and the release process better. Mayra, do you want to take the next point? Yes.
E
Thank you, Filipe. So in general we have the impression that the release process is not completely understood by most of the engineers, and that might be logical, because if we look at, say, the release managers page, we can see that not even 20% of the engineers have participated as release managers. That means the other 80% are not fully aware of the implications that a release carries, and we totally get it.
E
I mean, Filipe and I were expected to fully understand the process within just a week, and that was not realistic, because we have been doing the release work for the last three milestones and we still have many questions. It really needs a lot of commitment and a lot of work; even as a trainee there is always something to do each day. And sadly, things will come up to distract you, and that was completely out of our hands.
E
Another thing that can be improved is the exception requests. As a release manager it is really uncomfortable to say no to our teammates, because our teammates are awesome, and we don't really enjoy being the bad cop and pushing back and saying no. But then again, saying no is our default answer to all the exception requests, so we should really rethink when opening an exception request; we should really ask whether it is truly needed.
A
But also, my question then is: how do we make the difference between patches and RCs? Because I know that we can react faster on GitLab.com, but our customers expect patch releases to be more stable, and sneaking changes into patches that may be exceptions actually makes them less stable. So we actually reduced the stability for our on-premise customers, which is not something that is easy to revert or react to, because the base of users running those releases is much bigger than just GitLab.com.
F
I think I can answer your question. The reason why we were so picky on RCs and not on the first patch release: I think it was around RC8 that we were picking everything that had the "Pick into Stable" label, and then we realized that there were a lot of things that weren't legit regressions, and they were causing more issues than they were fixing. So that was why we were so picky about it for the later ones.
B
OK, time check: we have only one minute left. I would like to take some time to go over the list and then assign the action items to people. So, going from the top: Victor and Philippe, I think we are getting the same theme from you guys, where you're saying that we don't have enough time to plan. Does it make sense for your team to follow up, to factor in this process change, and then broadcast it to the rest of engineering if it works out for you? OK, thank you.
B
I'll mark this as a follow-up, then. And then, Stan, should we create an issue to cover the EE-to-CE QA test? Yep, yep, I will do that. And then, well, the big crunch is the feature freeze, with a lot of changes coming in. I know Mayra and Filipe are used to it as the release managers, but for 11.0 we also have the GCP migration, so I will assign new folks to this. And then, for any fallback or any findings, let's factor them into the release documentation. OK, thank you.