From YouTube: GitLab Retrospective 11.5
Description
GitLab 11.5 Kickoff
Learn about what we're working on in GitLab 11.5! Follow along in our kickoff doc: https://docs.google.com/document/d/1E...
Read more about our product vision: http://bit.ly/2IyXDOX
Learn about FOSS & GitLab: http://bit.ly/2KegFjx
Get in touch with Sales: http://bit.ly/2IygR7z
B
Okay, we have the top of the hour, so I'd like to welcome everyone to the retrospective for the 11.5 release of GitLab. We'd like to take a couple of minutes up front to talk about some improvements from the last retrospective. We do have the issue list available there, if anybody would like to take a look at that. We have a couple of things to call out as notable completions: we've begun a training process for new back-end maintainers, and we've also added some QA maintainers to improve the efficiency for test automation merge requests. We've also updated the retrospective process, and there's a link to the merge request there that discusses that. One of the biggest changes that we're going to be trying out this time is a timed agenda, to make sure that we have as much time as possible for discussion of how we can improve at the end of the call. That is important for us to try to maintain focus on. So there is a possibility, as we go through — particularly "what went well this month" and "what went wrong this month" — that we may run out of time and need to kind of cut off and move to the next step in the process. So please be aware of that as we're going forward, and try to keep updates in those sections brief, so that we can focus on discussing how we can improve at the end of the call. With that, I will turn it over to Sean to start off what went well this month.
C
So the thing I wanted to mention was very specific, but I think it was also a good general pattern. Shinya had a merge request that needed to be reviewed urgently because it fixed an important production issue, but initially the merge request was very complicated, because they'd been working on something related at the same time and therefore, as you know, it went into the same merge request — but not all of that was actually urgent. It went really well, in fact. So while we should try and make smaller merge requests in the first place, don't feel that once you've sort of gone down that line — don't fall victim to, like, sunk costs — you can always iterate after the fact, as well as iterating up front. So, yeah. I don't see Annabel, but I'm not sure the list is 100% in alphabetical order for me, so I might just take her item unless she speaks up.
C
So the discussions redesign got merged, which is great — there's been a lot of positive feedback. Fatih was really happy about the collaboration between him, Annabel and Mike. We did have some regression issues, which I've just copied in from our retro as well. But you know, it's a retrospective, it's a land of contrasts, so I figure that's fine, to have good and bad in the same point. Dylan, how about you?
D
Sort of similar to what you're talking about, about splitting up merge requests: our team's kind of getting in the habit of doing this more and more, and this last month was a pretty good demonstration. There were a lot of merge requests that were made out of the issues, and people figured out how to break them down into independently deliverable chunks of work. So that's been working well for us. Over to Gabriel.
F
Okay, cool. So from our side, there was a production incident last month which in part was caused by some end-to-end testing gaps that we identified — which, we think, had we had the coverage, we would have picked up. So we invested a fair bit of time looking into where the gaps were around our LDAP implementation, and we definitely managed to sort of plug some gaps there. We've also been getting pulled into manual work that we've had to do around that application, and so Rubin identified almost like a top three priorities of fixes that we could do to try to reduce those interruptions, which kind of either have been shipped or will be shipped in the next release. So we're hoping that that's going to have a fair impact going forwards. And over to Elliott. Oh, he's—
G
—not here. I'm gonna be filling in for him. My name is Maddie; I'm a back-end engineer on the Verify team. So the team felt like we've made a lot of progress in this release. The general feeling, from the engineering manager and the team, is that we could really feel the progress that we were making, and we really valued the requests we are implementing. And the second thing is that all of the front-end deliverables were merged well before the seventh, which is really amazing — we've struggled to merge.
H
Thank you. So, John Hampton — I'm the front-end engineering manager on Release and Verify. For us, in similar fashion to the other teams, we had some issues that we broke down into small iterative merge requests, and that worked out really nicely for us. It seems like kind of a common theme so far. That's it for us — over to Philippe.
I
A big milestone for us: we managed to ship the group security dashboard, which slipped from the previous iteration. I'm mentioning that because that was a huge pain point for us — it was an issue that required a lot of moving pieces fitting together, and we managed to do that during this iteration. So we're pretty proud to be able to do that. Also, we have the new report syntax available for all the features that we have. So those are some good achievements for us, and with that, over to Bob.
J
Thanks, Philippe. So I wanted to thank the people on the Gitaly team for helping the Create team add new features and get to know their code base — for a lot of us, Go is also not the language we're most familiar with, and they've been super helpful — and we shipped some new features that required Gitaly changes in 11.5, so I'm very happy about that. With that, over to Seth.
K
Thank you. So, we failed to deliver "promote issue to epic", and it was because of bad planning. First of all, we changed the scope of the issue after the planning: first we decided it would copy only basic information, like title and description, and afterwards we decided that we would also copy notes and keep them linked, and that was a big scope change. Also, besides that, there was more work than expected on the back-end side, so we had to rewrite parts of it.
D
Yeah, so the first one's about merge requests adding significant workload. We had one particular issue that ended up having six merge requests that needed to be created in order to avoid conflicts in the CE code base. That was quite a lot of time spent, like, keeping those branches up to date and stuff — it's not just a matter of creating them, but also constantly rebasing them throughout reviews and all sorts of other challenges. So it's quite time-consuming.
D
We slipped deliverables on a couple of issues we were working on, possibly underestimating the amount of extra work it would take for our team collaborating with an external team working on features for our products. And the other one is: maintaining backwards compatibility for a lot of the stuff our team owns is quite challenging. We don't have a clear, coherent strategy of what that really looks like. We support, you know, previous versions of Kubernetes clusters, people in all kinds of halfway mid-states where they've installed some stuff through GitLab in the past and haven't upgraded certain things. So we have a lot of challenges with that, and it caused a fairly significant bug to get shipped as well, that none of us noticed in the last release. But we did, luckily, fix that bug pretty quickly. Over to Gabriel.
H
Thank you. So, for us, three points: we had a very bad regression where a runner stopped respecting tags — we have a link in the document with some more information about that. There was a lot of confusion also around documentation, and who is supposed to be doing what. And then we had some confusion and misalignment on the requirements of the deployment widget. That's it for us — over to Sweet.
I
We're still building — especially the security dashboard — on top of the new report syntax, which is quite new and bringing a lot of ongoing issues. So we are kind of building the plane while flying it; it's the same thing. Thank you, Dalia, for the commercial link. And this leads to a lot of issues, especially around its usage, and also we had some problems with Auto DevOps, where the new syntax was not fully supported. So we're still building on something that is ongoing. With that, moving to Osvaldo.
M
Okay, so we had a couple of issues with this merge request. I think the first one is that we should have used the feature flag from the beginning — like, keeping in the mindset of just using the feature flag. We merged this very close to the seventh, and we noticed that it wasn't working well with one feature of EE, so, yeah, we should have tested this on the EE side as well. We have a merge request, I believe from Yorick, that's working on automatically creating merge requests on EE from the CE side — that's a thing I'll discuss a little bit later. So, yeah, we had to revert this merge request. The good point is that we had applied the feature flag, so we could have merged this earlier and reverted this merge request. So, yeah. So this next one is Seth's, actually.
K
It was also an example of an issue where we decided that we would deliver the complete feature at once, instead of iterating over small additions, which would have been possible in that case. I think we should have honored our iteration value; in that case we could have discovered beforehand how difficult the implementation would be, which we didn't. So we should try to discover it sooner — however, that will probably always happen. Over to Gabrielle.
L
For the Geo team: we need to push harder to complete the rollout of hashed storage, which is being used not only by Geo — other teams are also getting features that depend on it. We would like to have a larger data set as a testing bed for Geo. We've been spoiled by the GCP migration, where we could use the GitLab.com replicas as a testbed for Geo, and as we don't have that anymore, this is delaying us on getting validation on big queries and other stats for Geo.
L
We also would like to reinstate the Geo team calls, or the demo team calls, that we used to do during the GCP migration. Even though we have people in different time zones — and that would mean that someone will have to join at a strange time — we want to do that at least once per month. And we want to shorten our review cycles.
B
Just noting that review cycles came up a lot in last month's retrospective as well, and here it is coming up again this month. We've taken some steps to add more maintainers and to try to make that process a little more efficient by means of scale, but I am wondering — and this is really a question for the whole group, not necessarily just for Geo — if we need to be considering some more wide-reaching options in order to improve the cycle times for code reviews.
C
The more approvals an MR needs to go through — like when we say we need UX approval, we need back-end approval, we need database approval, we need docs approval, we need front-end approval, all on the same MR — the longer it's going to take. If you can split those things up as much as possible, you will get things reviewed and merged quicker, because they don't need to go through the five different people I just mentioned — even if, in some cases, the same person can do the database review and the back-end review, or whatever.
G
I'm gonna fill in for Elliott — sorry, I thought we were at Liam. Never mind. So, yeah, on the Verify side: we need to do a better job at introducing people to the code base, meaning new people that get hired get a basic introduction where you get introduced to the code base. And we need to have a working roadmap for our technical debt: we have a roadmap for our features, but we don't have a roadmap for technical debt. I'm sorry for barging in like this — I thought it was my turn. Liam?
F
No worries. So, by the nature of the scope of what the Manage team works on, we have quite a few issues that come up that have specific customer interest, for enterprise customers. We had one in particular in the last milestone, which was our smart card integration, and we put a lot of effort in towards the end of the milestone to finish it, but it didn't quite make it.
F
One of the issues that we found throughout the process was that we probably weren't quite as good as we could have been in terms of communicating updates back to the customer. We found that there used to be a "promise" label that was used, and we've kind of decided to revive that within the team, so we're using the P labels for prioritization.
F
So we pick up the key ones first, and if any issue also has a promise label, we're basically going to go out of our way to make sure that any status update is over-communicated — certainly if it means that that issue is likely to slip the milestone. It might be a good idea if we can kind of try to roll that out across other teams as well. That's it for me, and then I guess that's on to John H, who's covering for Elliott's other part.
H
Thanks, Liam. So on the Release side, we would like to try and avoid having multiple back-end engineers working on the same feature at the same time, and we also would like to consider the documentation requirements as part of the engineering discovery and issue planning. That's it for Release — over to Philippe.
I
Thank you. We had a lot of issues in this iteration, and following the progress — not only from the engineering side but also from the product side — was hard. We managed to split big issues, like the dashboard, into very small pieces and small MRs, but in the end we had a lot of trouble following all of this, and especially the progress. So, as you may know, I'm using a Gantt chart to follow all of this, but it's not ideal, and the two states that we have for an issue, open and closed, don't reflect reality — because sometimes they are closed because we can't do anything for them, and sometimes they are closed because the work is done. So we have to go through all issues, one by one, to make sure that the work is done or not. And also we discovered that, until the very end, we didn't have a clear overview of the progression on the dashboard.
I
If you do that — it's not because we have 10 issues left, and just one is done, that we have 90 percent of the job remaining. So I'm not sure it's going to work out, but at least we're doing that internally: with the Secure team, we are already updating the Gantt chart to make sure that we note the progression of the issues from a higher point of view. So if we had that with the issues directly — I'm not sure that is the right place.
B
Sure. I guess, to be clear: you were talking about having difficulty going through, you know, several issues, some of which are open, but you don't know what their state is. And I was just thinking: smaller-ish — smaller issues — would mean that the issues are open for less time, which should mean there are fewer issues open at any one point in time, so that you don't have to spend all that time trying to figure out what state everything is in. Is that fair? Am I misunderstanding the problem?
M
So I think the idea, with the issue that we had — like having to revert a merge request that's not working well on EE — is that I believe we need at least a checkbox on the merge request templates on the CE side that makes sure that people are testing: at least manually creating the merge request on the EE side, to make sure it's working, that the pipeline will succeed, and that you have something to revert to work from. So that's an idea; there's some discussion on Yorick's merge request. My point earlier was not exactly right, because we are not automatically creating the EE-side merge requests yet, so for now we need to create the merge request manually. So, yeah, I think that's something that we should do from now on — at least changing the MR templates to make sure that people are creating the EE-side merge request. So, next: Mark, from Create.
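As a sketch, the kind of checklist item being proposed for the CE merge request template might read something like this — the wording below is hypothetical, not the actual template change:

```markdown
## EE compatibility

- [ ] Created (or manually tested against) the corresponding EE merge request
- [ ] Confirmed the pipeline passes on the EE side with this change applied
```

A checklist like this only prompts the author; the automation Yorick is working on would eventually make the EE-side merge request creation automatic.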
J
Mark did note that we're using different package managers across the different Go projects, and that is causing some confusion; he suggests adding better guides for that. I also know that we now have the GitLab LabKit project — the "labkit" project — which will collect some of the tools we reuse across different Go projects, so we can add them as dependencies.
N
I had the same question about that in the Golang channel today, because we — Distribution — are introducing another project that is Go-based, and we want to have the same standards, or learn about the standards that we have across Go projects. Apparently we don't have that, so I need to open an issue for discussion; I just don't know where exactly. So if anyone can guide me on where to open this, I'll be happy to do it.
E
Right, yes. So one thing that came up is that we want to make sure that the UX designers are involved much more in the development process, and that they're reviewing the MRs earlier. We had an issue where, for one of the major things we were delivering, the UX designer didn't get a chance to really see it in action until, you know, very shortly before the code freeze.
E
Another somewhat related issue is that we want to make sure we're keeping UX design and engineering concerns in the same issue. We had a time in this past cycle where we had two different issues going — one that was the design for the feature, and one that was sort of the more engineering-focused part of the feature — and there was some confusion there, and some front-end engineering had to be reworked when it shouldn't have.
E
Another issue, which is somewhat related to bringing designers in earlier, is making sure that we fully vet the DB schema that we're gonna use for a given solution. We had an issue where we didn't get some really useful feedback about that schema until sort of late in the game, and we had to go back and change a bunch of things that we wouldn't have had to if we'd figured it out earlier.
C
That might have been me, Seth. If there was one thing I would say there, it's that it was also kind of a question, because I wasn't sure: the schema in that MR could have been either option that we had. It was just whether the team knew that we were going to add more to it, in which case one schema was better than the other — which was useful information to have there.
O
Thanks, Seth. So, a couple of things that we think would be an improvement. One is writing issue descriptions so that the scope and requirements are clear to people who are brought in later. Sometimes we find that things aren't necessarily written in a way that someone coming in out of the blue can understand — there's a lot of internal knowledge intimated in the issue, but not explicitly written out.
O
So that would be really helpful for people coming in, to get up to speed a lot faster. And one big recurring theme for us is just capacity: increasing the headcount for UX design and UX research, to meet the demand and allow us to become more generative, would greatly increase our productivity and our ability to get ahead of things faster. On to Marin.
N
The documentation is written, and there are documents for front end and back end as well, but until everyone starts using it, we don't really know what we need to improve. So please check it out, try it out, and let us know what items are unclear and what kind of problems you ran into, and we're going to try and improve it for everyone. Like Yorick also mentioned there, turning feature flags on and off is simple — it can be done through Slack — and we have been using it successfully in the past.
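Since feature flags come up throughout the call (merging close to the release behind a flag, toggling via Slack ChatOps), here is a minimal sketch of the guard pattern being described. The `Feature` module below is a simplified in-memory stand-in written purely for illustration — it is not GitLab's actual implementation — and the flag name and call site are hypothetical:

```ruby
# Simplified in-memory feature-flag registry (illustrative stand-in only;
# the real thing is persisted and toggled at runtime, e.g. via ChatOps).
module Feature
  @flags = {}

  class << self
    def enable(name)
      @flags[name] = true
    end

    def disable(name)
      @flags[name] = false
    end

    # Flags default to off, so newly merged code stays dark until toggled.
    def enabled?(name)
      @flags.fetch(name, false)
    end
  end
end

# Hypothetical call site guarding a new code path behind a flag.
def promotion_banner(issue_ref)
  if Feature.enabled?(:promote_issue_to_epic)
    "Promote #{issue_ref} to an epic" # new behaviour, dark-launched
  else
    issue_ref.to_s                    # existing behaviour
  end
end

puts promotion_banner("gitlab-ce#123") # flag off: "gitlab-ce#123"
Feature.enable(:promote_issue_to_epic)
puts promotion_banner("gitlab-ce#123") # flag on: "Promote gitlab-ce#123 to an epic"
```

Because `enabled?` defaults to false, code merged behind a flag is inert until someone turns it on, and backing out is a runtime toggle rather than a revert — which is why shipping close to the 7th behind a flag is the safer pattern discussed above.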