From YouTube: GitLab 10.4 Retrospective
Description
Follow along in our doc: https://docs.google.com/document/d/1nEkM_7Dj4bT21GJy0Ut3By76FZqCfLBmFQNVThmW2TY/edit
A
Okay, top of the hour here, and I have the first item, so I think we can start with previous retrospective improvement tasks. So I was tasked with changing some of the release process that we had that caused problems. Last time around I linked the issue in the documentation, so I wrote a report there about it. I think a couple of items are interesting to highlight. So, first of all, we managed to deploy the first deployable RC on the tenth in production; the first RC, RC1, was one that was not deployed in production.
A
Overall, we did a really great release, and I want to thank everyone in engineering for rallying around the QA tasks, where we just pinged people directly to review their tasks, and people did that. CE-to-EE merges also became very much easier to look after because, as soon as someone got mentioned, they would check it out or find someone else who would check it out. So a huge, huge thank you to everyone involved in this process.
C
I want to echo what Marin said: I mean, big ups to him for organizing this, for structuring the improvements, for getting everybody involved, and thanks to everybody stepping in. A reminder that the reason we're doing this is that it's the most short-term way that we can make GitLab.com, you know, more stable, more highly available, more ready for people's mission-critical workloads. It's good prep for this future world where we're going to be doing continuous delivery, and we've seen that improvement. We've measured it: we see our availability improving, I think, in December.
C
We checked: in December, at least per our issue monitor, it was a hundred percent. Obviously the monitoring is not complete, but we demonstrated incremental improvement to that. In January it's been the same, with the exception of the Azure Intel bug; we took some downtime for that. We can kind of annotate that one away and say that on the things that were under our control we've been doing much better, so I just want to say an additional thanks.
B
Sure, so thank you, Marin. Actually, we've been pushing in the right direction and helping in making CI/CD actually testable, I suppose, on staging, given our past problems. You can actually follow the link on the first issue in the CI/CD retro and see what we managed to do. There are, like, two different paths that we wanted to follow. The first is, like, ensuring that you can manually test CI/CD against staging: so basically you can create a project, CI will push your project to staging, and you see, like, all the features that you would normally see on GitLab.com.
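(For illustration only — a minimal sketch of the kind of pipeline being described here; the job name and the `STAGING_URL`/`STAGING_TOKEN` variables are assumptions, not the actual setup:)

```yaml
# Hypothetical .gitlab-ci.yml sketch: mirror the current project to a
# staging GitLab instance so CI/CD features can be exercised there.
stages:
  - deploy

push_to_staging:
  stage: deploy
  script:
    # STAGING_URL and STAGING_TOKEN are assumed CI/CD variables;
    # CI_PROJECT_PATH and CI_COMMIT_REF_NAME are standard GitLab CI variables.
    - git push "https://oauth2:${STAGING_TOKEN}@${STAGING_URL}/${CI_PROJECT_PATH}.git" "HEAD:${CI_COMMIT_REF_NAME}"
```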
B
That is fully possible now. The second one is actually making it possible to easily contribute to GitLab QA end-to-end testing of the CI/CD workflow, which actually also includes the runner in this workflow. What is actually the important part of what we did there? First, it's actually easy to contribute to that by just replicating what we did — we have just a simple pipeline. But the second one is that it actually keeps track of QA against staging. So this is, like, the first iteration, where you can basically use GitLab QA.
D
I wrote this before anyone had anything else, but again, echoing the things Marin and Eric talked about: thanks to Rémy for creating the CE-to-EE merge automation. It's really awesome to go to the channel and be able to see exactly where the merge is and who needs to do what, or even, if you wanted to, jump in to do the merge yourself.
D
It's really easy now, so that's a huge win in my mind. And the second thing, we've already talked about it: I think this was the smoothest release ever. I didn't see any downtime from Sidekiq or web workers or anything like that, and huge credit to Amar and Luke and Robert for that. So, Tim, did you want to add anything to that? Yeah.
E
Just that it was really nice from the deployment teams to let us actually know and have really clear communication about what we need to do, with clear tasks. Especially Luke did a great job in our direction, so that everyone actually was pinged and knew what needed to get in, and then I already knew: okay, I have to do this, do these to-dos. And it was really incredible to see that this was most of the time done in minutes or a few hours.
E
So thank you for the clear communication. I will go on, as I also have the next point, which is what went wrong this month. One of the points was that the Web IDE was lost in no-man's land within 10.4: there was the decision during the release cycle to move it from CE to EE, so a couple of merge requests were needed, and in the end the one for deleting the switch was merged, but I want to reactivate that.
E
We had issues with service workers, which were only discovered in production with our CDN setup, because there are problems with service workers going across domains; we're currently figuring them out. There was also a fix that was created for the problem of tests that were, like, transient, where it was not really obvious where it happened — so thanks to Mike for fixing that. But yeah, I'm coming later on to the point of what we can improve. Stan, you're next.
B
Basically, we have seen two different types of issues last month. This came from our internal CI/CD retro, which you can find in this document. The first one is that we actually had some confusion related to what the development process before the freeze looks like, because it was quite confusing what ends up being part of the stable branches, since currently this is mostly a manual process. What is important: it seems that we have some conclusion, mentioned in the issue, on how to resolve that.
B
So thanks to everyone who participated in this discussion. The second one: we have also seen some issues with feature testing. Basically, we had problems with the boundaries between developer QA and feature assurance, or feature testing from, like, the product perspective. What is kind of important is that we've been iterating fast on the code, basically always having green unit tests, but we missed, I believe, a few times — like twice or three times — to basically, like, go and check whether the feature does work. That kind of ended up with the first RC not being fully stable with respect to a feature that we shipped and that we might have had to pass on. We had a very long conversation on this with feature assurance, but no matter the outcome of that, we will ensure that we test these features before we merge them, and we also think about introducing feature assurance in this process.
A
Yes, during 10.4 development we realized that there are some unclear parts in the process for security fix development. This is not really only related to development itself, but also to the release process and, in general, everything around the security fix releases. So we started discussing this; there's an issue for it.
A
Something has already been merged in the handbook; something is still open to discussion. But yes, this has been a huge time sink during 10.4 development. I can just add a bit to that, given that it's a release process matter: security releases are, in general, very confusing. We have multiple sources of truth, and I'll be working with Cathy on streamlining this part of the process, I guess. We'll start off with the next release, where we are going to first observe all of the issues that we encounter before we make changes. The problem with the security release is that it actually changes our process of development completely: we go from open-only to closed, and this is where problems start kicking up. Also, we have too many teams involved, so synchronization is sometimes a problem, but we'll be working on that in parallel with other changes.
F
The tab, sorry, in the issues page when you create a new merge request — I don't know if he's on the call, but I believe this was due to some of the work there, and I don't know if he could provide more context. But the two problems I see here are that these two user-facing things seem to be pretty obvious. I don't know if we could have done a better job, whether it's QA or feature assurance, to catch them, because these are not hidden features.
F
You know, features that I would presume a lot of people use all the time. And then the other point I wanted to make was with the modals — making the modals consistent, and also the dropdowns consistent. These are actually two separate issues, but on the same theme, so I'll get to that point after this. But these are big changes across the entire platform: you know, making modals consistent across the board and dropdowns consistent across the board.
E
The whole modal thing is a huge ongoing task. At the beginning it was a task of making four or five modals more consistent, and then we discovered that there are over 120 or so in the whole of GitLab. So we already started on the modals, making them more consistent, and in that process sometimes, simply, the Vue implementation gets replaced by a new implementation, or we even have an old jQuery implementation.
E
So we will pick this up and see how we can also have better QA on that when we're doing such kinds of fixes, which are necessary to get the modals, on the one hand, onto the same design, but on the other hand also onto the same code path, so that we can make progress on those things. But this shouldn't happen, and we will have a closer look at those. Thanks for bringing that up. Yeah, and as I mentioned before:
E
What I would suggest is that we have on staging the same CDN settings as on production, so that we have an actual full staging configuration which mirrors production. Then we can also find such things as those service-worker cross-domain problems, etc., a little bit easier and a little bit earlier, during the QA tasks.
D
Along the same lines, there's no way for us to test LDAP on staging right now, and the problem I mentioned earlier was only affecting customers. So either we have to have LDAP on staging and GitLab.com — but I think it makes sense to start with staging right now — and then, at the same time, on the development side, work on a QA task to exercise LDAP. There's an issue related to that. Back to you, Tim.
E
We're playing ping-pong with you today. So, one thing that I had in my notes from the beginning of the release cycle was that I got a lot of questions because the cutoff day was on a Sunday, so it wasn't clear to some folks if it's now on the Sunday, or if it's already on the Friday, or if it will happen on Monday. So I think simply having clear communication beforehand, so that everyone is simply aware:
E
Okay, this will be the total cutoff time, and don't even think about pushing something in afterwards. Perhaps that makes sense, but everyone should be aware of it. And the other question is: perhaps we should think about a bank-holiday rule, because everyone was hustling after the Christmas holidays to get everything in, and then they even lost two more days due to the weekend thing. So perhaps we can also think about that one. Yeah.
F
Tim, can I quickly suggest that the date — it obviously could fall on a weekend, I believe, but due to holidays we should agree, like, on a given calendar; in Google Calendar we pick one of them, like whatever locale, and then, you know, we go with that. We don't expect people to work on that date, but then that person, if they're responsible for an issue, can, you know, communicate with their team members or group members that they're not available on that day.
F
So therefore, you know, somebody else should pick it up. But GitLab-wide we should agree on a day, and that it's always that day. So it's always on the 22nd; or, if that's a weekend, then it's the Monday; or, if that Monday's a holiday per this consistent Google calendar — or we link to a Wikipedia page with, like, holidays — then the next day, just to make sure.
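(A minimal sketch of the rule being proposed, assuming the nominal cutoff is the 22nd and an agreed holiday list; the holiday set here is a hypothetical stand-in for the Google Calendar / Wikipedia list mentioned above:)

```python
from datetime import date, timedelta

# Hypothetical stand-in for the agreed holiday calendar.
AGREED_HOLIDAYS = {date(2018, 1, 1), date(2018, 12, 25)}

def release_cutoff(year: int, month: int, nominal_day: int = 22) -> date:
    """Nominal cutoff is the 22nd; shift forward past weekends and holidays."""
    cutoff = date(year, month, nominal_day)
    while cutoff.weekday() >= 5 or cutoff in AGREED_HOLIDAYS:  # 5, 6 = Sat, Sun
        cutoff += timedelta(days=1)
    return cutoff

# April 22, 2018 falls on a Sunday, so the cutoff shifts to Monday the 23rd.
assert release_cutoff(2018, 4) == date(2018, 4, 23)
```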
B
I mean, like, in the past it also happened that we basically created the stable branch from a past commit on master — like the commit that was, say, made before the seventh. So maybe with this hard cutoff we respect that this is, like, the exact hour, and after that you can actually be confident that we are just creating the branch from the last commit before that hour.
G
We already say midnight PST, don't we? I'm sure we discussed this before, and we just said, like, I think we said, you know, we use the home office — since it's based in San Francisco, we use PST. Midnight PST is the cutoff. If that's not documented, we should document it, because that's the assumption I've been working under. Yeah.
H
I love that — like, one minute before midnight, or, like, before midnight Pacific time, and we're a San Francisco-based company, because it increases our value. Not because I live here, but because it's just good to be a San Francisco company. And I love that no one has to work on the weekend — not the people merging it in, but also not the people creating the stable branch. I love that suggestion, Camille, that's great. Yeah, the 22nd is a fixed date, and we don't want to work on the weekends for that either.
C
I think this is a really straightforward action item, Marin, if you can just capture it in the process doc you're writing — this one's, you know, almost done; it can be done within 30 minutes. So, like this one, yeah. Rob and I are typically co-benevolent dictators when it comes to choosing actions; he happens to be off-site at a customer meeting, so it's up to me today. But I think, you know, based on what Marin and Camille were able to accomplish coming out of the last retrospective, I think we can do everything here.
C
I think these are actually smaller, and therefore somewhat even more achievable. So the third one is kind of all but done; we'll put that one on Marin. Marin has a lot of other stuff going on, so I'll kind of rule him out for another action, I think, coming out of the retrospective. Tim, getting to your first one, the CDN settings: is this something the front-end team can do, or is this something that would fall to the production team to set up on production?
C
Well, why don't we say we're doing both of these? We'll figure out who the champions are just based on who's in the best position to do it. These are two relatively small action items compared to the ones we've done in the past, so we'll figure out the best person to do them. But I think we can say that we'll do both of these. Great. Anything, anything else? Victor?