From YouTube: GitLab 10.6 Retrospective
Description
Follow along in our doc: https://docs.google.com/document/d/1nEkM_7Dj4bT21GJy0Ut3By76FZqCfLBmFQNVThmW2TY/edit
B: Camille, so I'm stepping in for Marc and Robert, who are not on the call. The item was to establish a process in which even smaller items would be included in future assurance testing and QA. We tried some things out this release; you can see the issue linked there, 116, with mixed results, I would say. Basically, it's really difficult to figure out what's small but worth testing and what's not, and it's seriously time-consuming for everyone.
B: If you open that issue, you'll also see how many tasks there are, so I want to say: amazing effort from everyone to pitch in and work on it. But it's also bringing to our attention that we definitely need to find a better way, both from the release process side and also from the people who are actually doing the QA. So, from the release process side, we are trying something different for 10.7.
C: We could try that; I don't know if that works for all of the issues we've had, but certainly for some of them, where I've merged them, I've not been totally sure how I could QA that. So maybe if it was in the merge request, or we just added it to the issue description, that would help other people chip in if the person who knows the most about the changes is away. Because for a lot of these, you know, the bug fixes, it's quite a straightforward QA step, right?
B: Ideally, we should be putting things into GitLab QA here, because we do run GitLab QA against every deployment. Theoretically, if we also included GitLab QA in our development process, we would be saving ourselves a lot of time. But yeah, Sean, it's a difficult task. We should definitely try it. We are testing things out and we are making progress, as you'll probably see in the next item, which I am also taking over from James, who's the release manager.
D: Yeah, so for my first item: my team shipped two new features in this release, custom metrics and cluster monitoring. This is probably the highest we've had for a relatively small team, so I wanted to thank the team for all the hard work. It was a really good release for us. Marin, back to you; there is another major item for us here. Yep.
B: We shipped the cloud native charts with an alpha label. The reason why I'm putting this in here is because it was a huge effort from everyone in engineering: the Distribution team is working on the charts themselves, but we wouldn't be able to do it without help from Sean's team, Camille's team, and others as well, who are actually changing the architecture of GitLab. So it is absolutely amazing that we managed to do this. Camille?
A: So I would actually like to thank everyone who was solving the CI/CD contention problem. We had so many different tasks and so many different deliverables, but with the joint effort of the Platform and Discussion teams, we actually managed to do the majority of what we planned to do, pretty much almost everything. So thank you, Sean, and thank you all for pitching in. Josh? Yeah.
D: Thanks, Camille. Sorry; to echo Lauren's and Camille's comments: all the effort going into shipping these charts changes and the object storage work, I really appreciate that, and I'm looking forward to the fruits of our labor here, with where GitLab is going with our charts for customers. Really exciting stuff. I'll also mention a killer thing worth presenting: we shipped an alpha, or experimental, version of ChatOps.
D: This was a kind of fast-moving feature with aggressive timelines, and the team dove into it and put a ton of work into it, and it turned out really well. So if you haven't checked it out, check it out. And Camille, thanks for the review, the feedback, and all the other effort you put in to help provide some guidance on how to best interact with CI as well. So thanks to you both; this is a pretty cool feature, and I appreciate it.
B: There is some discussion on how to prevent those in the future, but check out the link there. James also says that RC1 got delayed. We were supposed to do RC1 way earlier than we actually did. It was still within the time written in our documentation, but we were trying to do RC1 long before that cutoff date. However, due to a security release that was happening at the same time, in which Robert and James were involved, they just couldn't handle two releases at the same time; it's a lot of work there.
E: Here we upgraded Git from 2.14 to 2.16, and it had huge improvements in terms of performance, for instance for Geo. But because we have a lot of these stale worktrees from squash and rebase, and Git changed its behavior and now actually looks at all those worktrees when it tries to do some work, in total about 1,250 projects on GitLab.com were actually affected. It seemed like more, but not as many as I thought.
E: We have something in GitLab now where the housekeeping task will sweep anything that is older than six hours, but it doesn't guarantee that we'll actually get everything. So I think, as a next step, we're going to want something that just walks through all the projects, does a test, and sweeps anything that's bad there. But that caused some pain for production as well. Camille?
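A minimal sketch of the kind of sweep described above, using Git's own worktree-pruning machinery. The repository root path and the six-hour window are assumptions taken from the discussion, not GitLab's actual housekeeping code:

```shell
#!/bin/sh
# Walk a directory of repositories and drop stale worktree registrations
# (e.g. ones left behind by squash/rebase) older than six hours.
# REPOS_ROOT is a hypothetical path, not GitLab's real storage layout.
REPOS_ROOT=${REPOS_ROOT:-/var/opt/repositories}

find "$REPOS_ROOT" -type d -name '*.git' | while read -r repo; do
  # `git worktree prune --expire` removes worktree metadata whose working
  # directories have been missing for longer than the given age.
  git -C "$repo" worktree prune --expire=6.hours.ago
done
```

Running `git worktree list` in a single repository shows what would be kept; `--expire=now` sweeps every registration whose directory is already gone.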
A: For a few months already we have been working on object storage, and we were progressing steadily, but I wish we could do it faster. We worked on the live migration, which is GitLab.com, and we also tend to be very careful, basically because of the amount of things that we have to fix along the way, to ensure that we release this stage of the product in better shape.
A: So I think we are going to basically, slowly, resolve these problems, but practically it takes longer than we anticipated, because of the amount of things that we have to improve in the process. Unfortunately, it has an impact in the end, because we have to divide and conquer on solving the smaller problems, and that makes the first story to be completed slightly lighter, unfortunately. Marin, what can we improve?
B: Thanks, Camille. So one thing that came up under what went wrong this month was that the security release process was happening at the same time as the regular release process. So what we are trying to do now is define the security release process a bit better. It's a work in progress: some documentation is merged, some is still being written, but the proposal there will be to have a date where you can definitely expect that a non-critical security release will be created, which will give a bit more structure to developers, release managers, and product.
B: We do need to improve the canary experience a bit more, because right now you need to change your Git remotes to canary in order to actually use it properly, and I don't think the whole company will be changing everything over, so we'll need to make some more improvements there. Also, it's not really easy to know at which point you are on canary and at which point you are not, and I think there is some ongoing work there to improve that situation.
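The remote switch described above amounts to repointing an existing checkout. A sketch; the hostnames below are placeholders for illustration, not documented GitLab endpoints:

```shell
#!/bin/sh
# Point an existing checkout at a hypothetical canary endpoint.
git remote set-url origin https://canary.example.com/group/project.git

# Answer "am I on canary right now?" by inspecting the remote URL.
git remote get-url origin

# Switch back to the regular endpoint when done.
git remote set-url origin https://gitlab.example.com/group/project.git
```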
C: I don't actually know what's happening here; I think Eric's asking some really good questions below. But I was just curious, because we were talking about catching regressions earlier: I definitely feel, qualitatively, that later RCs are much better than later RCs were before, but looking just at a simple search of milestone plus the regression label, there's not a super obvious trend there over the last couple of releases. It seems like we're at around about 50 regressions every single time. In terms of improvements:
C: One thing that we can do is: if you add the regression label, please add a team label at the same time, because unless I'm looking at a list like this, or a list of untriaged issues, I rarely look at a list without the Discussion label filter on. Because, you know, there's a bunch of teams and I'm just focusing on my area. So if it doesn't have any team label, and everybody is filtering by their own team, it gets missed.
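The "milestone plus regression label" search mentioned above can also be run against the GitLab issues API. A sketch, where the host, project ID, and label names are assumptions for illustration:

```shell
#!/bin/sh
# Count open issues carrying the regression label in a given milestone.
# GITLAB_URL and PROJECT_ID are placeholders, not real endpoints.
GITLAB_URL=${GITLAB_URL:-https://gitlab.example.com}
PROJECT_ID=${PROJECT_ID:-42}

curl --silent \
  "${GITLAB_URL}/api/v4/projects/${PROJECT_ID}/issues?labels=regression&milestone=10.6&state=opened&per_page=100" \
  | grep -o '"iid":' | wc -l
```

Adding `&labels=regression,Discussion` narrows the list the way the Discussion label filter does in the UI; results beyond 100 need pagination via the `page` parameter.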
C: The one thing I sort of suggested was, not for new incoming issues, but just for the regression issues for that team: maybe rotating around the team and having someone check in on them, and just make sure that if there are any new regressions that someone hasn't been pinged on, they get picked up. So it's like a facilitator.
B: For now we don't have it as a process; it's more "try what works best for you": spend some time per day and write down what you're actually triaging, so things that are confirmed as valid bugs, things like stale issues, and so on. And I said we see it working with mixed results only because I expected to see the number of issues going down; however, in the past three weeks we've seen a pretty steady number, so I think it's almost one-to-one the same number.
B: Obviously, new issues are coming in and we are closing issues, so if we didn't do that, the number of issues would probably be way higher. But it's important to note that this is time-consuming, so, I don't know, we need to do the cost-benefit analysis there a bit better. We will present the results, probably in our functional group update, once everyone has done a rotation and we get average results there. Cool; thank you.
F: Yeah, thanks for raising this. Like I say here, our metrics should agree with our instincts, and if they're diverging, let's look into that. So I don't want to just, you know, write it off as "oh no, things are definitely better." This stuff can get complicated.
F: We're looking at a time series, but have we changed our accounting method over this time series? Were there bugs that we weren't catching before, and now we're catching them, but the overall number of bugs has gone down? There are all kinds of things that could be going on here. But I think there's a good argument to spend a little bit of time here, now that we've got four months of data, to dig in a little bit and strengthen our opinions about what's going on. Are we making things better?
F: Are we spending too much time, where we're no longer efficient with this sort of stuff? So maybe this could be one of the action items: we could elect the right person to dive in here and see, you know, is the proportion of regressions to merge requests changing, or, you know, whatever may be going on here. But thanks; first, thanks for raising this, Victor. Regression hygiene? Yes.
E: Please correct me if I'm wrong, but shouldn't those just be turned into bugs? So in terms of hygiene, that's a problem, because we have open regressions that are, you know, ill-defined, because the definition of a regression, to my understanding, is that we fix them right away. So that's, to me, maybe one symptom of issues just getting logged and nobody taking care of them. Another instance of this, which I think is worse:
E: Not regressions, but just issues that have been assigned to individuals, developers or UXers, and if they don't get to them, that's fine, but even after the code freeze nobody has looked at them. Indeed, they're still sitting in that release, and there's no merge request on them, there are no comments on them. And so a customer comes in, and they can't essentially trust the milestone anymore, because there's no confirmation of that. So we're inconsistent.
C: The thing you were talking about, making sure that the closed releases don't have any issues in them: you know, if we make sure that the assignee or assignees have ownership, that would help there. Like, you know, mention them in a comment and say, "please update this with the current status; should it be in the backlog, should it be in the current release?" For the regression hygiene, yeah, you're right: 10.6 originally, on my list, sat at 54, and then I was just looking at the list of open ones.
F: Yeah, I think this is yet another argument for a single priority field, and one that doesn't change. I mean, if we have a regression label and then we're talking about removing that label, we're losing data and we're hampering our ability to look at these things sort of historically. So what I would prefer is that we have, you know, a "version found in" that you capture, and that's just persistent; it would never change.
F: We'd have a priority field that doesn't change unless the priority for some reason changes, and then you can have, you know, a fixed version, or you could look at things historically, like time to close, and be able to analyze things that miss their intended fixed version, which is sort of dictated by priority. You know, something's a P1, it should be fixed immediately; or, let's say, a P2 gets fixed by the next release.
F: Then we should be able to look at those things, and I think we don't quite have the tooling to do that effectively today. But I see Mek commenting; he's got lots of good thoughts about this stuff and has done it many times before, so I think we can start to look at this stuff deeper.
F: Okay, so if there aren't any more thoughts on that, it's up to me and Job to decide what the actions are going to be for this next release. I think defining the security release process is a good one; I've been hearing more and more about that, and so I'd love for that one to continue. Marin, I'd love for you to find someone else besides yourself to drive it if possible, because, you know, mentoring someone means it's not all falling on your shoulders.
B: That's a good question. I'll have to think about it; I don't have anyone off the top of my head.
F: And then I think Sean's point about, look, we have four good data points here, four months of data. So I think looking in here and just developing an opinion on: hey, have the lines of code gone up in a release? Has the number of merge requests gone up in our releases? Are we finding more of the bugs that are there? You know, developing an opinion on why this metric is not coinciding with some of the other pieces of information we have. That sort of seems like a good one. Sean?
C: I think, well, my plan was, just because there are like 200 old issues there, to just look through them and, you know, figure out how many are valid for each, take those numbers, look at the numbers for the totals for each release as well, and go from that. I don't have a great plan at the moment, apart from starting to look at the individual issues and seeing if I can spot a pattern. Okay.
F: You know, good luck taking on two major actions. So let's do that: TBD for defining the release process, Marin to nominate someone; and then Sean, if you can look into this regression data and tell us if we're missing something. And I think, you know, Victor's point on regression hygiene is a good one. I think we've been having this discussion, and I think the discussion will move forward about priority and about other fields.
F: I think this is something that, as Mek onboards, he'll help advise us and make some recommendations about what the label taxonomy should be, and I think we want to do a small iteration here. But the trick with time series is that we have to do an iteration and then leave it be for a period of time; otherwise you don't get time-series data. So this is one that we want to plan carefully, and hopefully change once and then leave alone for a while; otherwise we continue to invalidate our time-series data.
E: Eric, if you can bear with me, it takes just 30 seconds to answer Mek's question here about customer expectations. I wanted to highlight the point Eric is talking about: we don't have the mechanism, we don't have the data, to highlight that an issue is going through different stages: a regression, or an issue, or a feature request, and so forth.
E: The issue milestone tells you when the issue will be worked on and shipped; the issue labels tell you the taxonomy of the issue itself. And those are not aligned on, you know, a real-time basis, which we should strive for, and therefore that is causing customer confusion, because we can't tell them anymore that the issue is a single source of truth. That's the sort of example I was providing.
E: So that's why my point is that we're not complying with our own process, and that's a problem. Perhaps the root problem is changing the process, or what have you, but I just wanted to highlight that, and answer your question specifically about managing customer expectations: we don't give them a list of anything; we just give them the issue. If they ask for a specific status on the issue, we just say: look at the milestone, look at the label, and you'll know.
B: If I can just add one more thing before we leave: there is a hole I noticed in the process earlier, when we started working on the cloud native Helm charts. Since we are using weekly milestones there, just shifting the milestone did not give us any visibility into how things were progressing. So I pasted a link there in the document where you can see how our milestone currently looks.
B: You can see some of the issues have the milestone in which they will be worked on, for example, and that means every issue has multiple labels that actually show the progression of the issue: instead of being worked on for one week, it was worked on for, like, five weeks, and then we can actually talk about why this happened, and so on.