From YouTube: GitLab Retrospective 12.8 (Public Livestream)
Description
Please add feedback here: https://docs.google.com/forms/d/12-QPpvggEsqCvZnDCnuCjqP53joKwjRMPy4PH-Mqp-I/viewform?edit_requested=true Thank you!
A: Good morning, good afternoon, good evening, GitLab. This is the 12.8 retrospective. My name is Christopher Lefelhocz, I'm the Senior Director of Development, and I'll be your emcee for this retro. Our goal is to get through the content in roughly 25 minutes, so please keep that in mind as you're giving your updates. In particular, I'd like to highlight that we have changed the format a little bit to see how this goes, in the spirit of experimentation and continuous improvement.

A: We wanted to see if we could make it more collaborative and also allow more voices to be heard, while keeping to the 25-minute goal for overall meeting time. If you'd like, there's a link at the top to the format MR that's been merged, and we'll proceed to talk about that. So, first item up on the list is previous retrospective improvement tasks. John is not on the call, so I'll basically verbalize his: we've implemented a pilot for domain experts to see how that goes.

A: Assuming that things go well, we're going to have a formal proposal in April. Basically, we want to see how the different pieces of that work. We've also performed a code review survey, I should say, and the results are also posted there; we'll look at those to see what potential improvements can happen. Nick, are you on the call to verbalize your piece of this effort?
B: Yes, so there's an issue to measure and report on review times. Within the last month, we worked with Engineering Productivity and the data team to better account for weekends in the "days from reviewer assignment to review" metric, and I also created a proof of concept for metrics around the usage of reviewer roulette recommendations. The next steps there are gathering feedback and refining those metrics.
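As a rough illustration of the weekend adjustment described above: a "days from reviewer assignment to review" metric can exclude weekends by counting business days only. This is a minimal sketch, not the team's actual implementation; the function name and the use of NumPy are assumptions.

```python
# Minimal sketch: count only weekdays between reviewer assignment and review.
# np.busday_count treats Saturday and Sunday as non-business days by default.
from datetime import date

import numpy as np

def review_turnaround_days(assigned_on: date, reviewed_on: date) -> int:
    """Business days from reviewer assignment (inclusive) to review (exclusive)."""
    return int(np.busday_count(assigned_on, reviewed_on))

# Assigned on a Friday, reviewed the following Monday: 1 business day, not 3.
print(review_turnaround_days(date(2020, 3, 6), date(2020, 3, 9)))  # -> 1
```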
C: Right, improving the efficiency of training. I created an epic to track that, and I have a few examples: a video that I created, as well as an assessment. So please review and comment. There's also an issue down here where I'm taking suggestions on different ideas for code review training courses.
D: Yes, so just to recap, we took up two improvement items from the last release. One is to audit the tests for data recovery. We made some progress here by identifying a test gap that needs to be looked into deeper, with the current SETs on Access and Import. There were some challenges: our two SETs assigned to this are also working on another working group, which is high visibility. So we will still continue to work on this in the action items for the next release.

D: The second one is to enable the test block in the GitLab backend. We've started working on this, but there's a challenge in that our SETs do not know this code well, so we're reaching out for more and deeper pair programming with the engineers in the Access group. That's the current status as of now. I'll pass it back to you, Christopher.

A: Thanks.
E: Part of our feedback in 12.8 was that we'd like to work with our PMs more closely to break issues into smaller deliverable slices of technology that were more clearly defined. So in 12.9 we've introduced weekly group-wide synchronous meetings, where we approach planning breakdown as a team. This has resulted in what I believe are clearer deliverables for 12.10, but it's also had a bit of a side effect of ensuring that the asynchronous conversations that happen are in a logical, central location for the team that's easy to find.
F: I think improving the visibility of feature flags is sort of coming to a close at this point. In the issue there were several suggested action items to be taken, and some of those were deemed unnecessary. So the first one is: we wanted to modify the issue template, because our process has changed, to include feature flag information in the issue template, but we decided against that; there's a link there on why. And the second thing is: did you know that there is a feature flag rollout template? Because it's incredibly useful.

F: This is a standardized way to track the removal of a feature flag, so you don't forget about it. And then the one last change: there's an MR open that I would love feedback on — there's already a few suggestions on there — to have Danger bot suggest adding the feature flag labels, so we can pay attention to that. It's not a foolproof method, and so we really need to consider if that's a direction we want to move in. So any feedback is welcome there.
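For context on what such a rule might look like: GitLab's Danger checks are written in Ruby, so the following Python sketch is only a language-neutral illustration of the idea; the path pattern, function name, and label name are assumptions rather than the open MR's actual implementation.

```python
# Hypothetical sketch of a Danger-style rule (GitLab's real Danger rules are
# Ruby; this is only an illustration): suggest the feature flag label when an
# MR touches feature flag files but does not carry the label.
import re

# Assumed path pattern for feature flag definitions -- illustrative only.
FEATURE_FLAG_PATH = re.compile(r"(^|/)feature_flags/")

def suggest_feature_flag_label(changed_files, labels):
    """Return a warning message if the label looks missing, else None."""
    touches_flags = any(FEATURE_FLAG_PATH.search(path) for path in changed_files)
    if touches_flags and "feature flag" not in labels:
        return 'This MR changes feature flags; consider the ~"feature flag" label.'
    return None

# A flag introduced inline in application code would not match the path
# pattern, which is one reason a check like this is not foolproof.
print(suggest_feature_flag_label(["config/feature_flags/foo.yml"], set()))
```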
A: Thanks, Michelle. There are some additional items that we're not going to verbalize, but please read through those. We're going to quickly move into the "what went well" section. Just FYI, there were a lot of things that went well this month, if you look down in the lower section. I edited this down, mainly because we focus on the things that we can improve and the discussions around those, so don't treat it as if we did less well this month; we actually did great in a number of areas. But I did want to verbalize a few highlights. Mek, the first one?
D: Sure. So the quad planning has been coming to fruition, I think; we're gaining more engagement with the counterpart teams and with our SETs, so that's great. I'm not sure we want to verbalize the next one. Should I take the one on quality?

A: Oh yeah, you can go ahead and start that one off as well.

D: Sure. The new master-broken triage rotation has led to a reduction of the mean time to resolve master-broken issues. Thanks to the EP team for leading this; the first cut of the chart is there for you to take a look. And the last one is: we have detected performance improvements across the product, so thank you, everybody, for contributing to better performance overall. And then there are a few items here, listed by Grant, on the endpoints that have been improved significantly. So thank you, everybody, out there.
A: ...that we need to look at and focus on, so we're going to take an action from that. For folks that have interest in learning about infrastructure and availability issues between infrastructure and development, please let us know, because we can add you to that meeting as well as a shadow. Nick, do you want to talk about what's going on with the ecosystem and iteration office hours?
B: Sure. Yeah, we took some inspiration from Sid's iteration office hours and held one with Ecosystem engineering and product team members. On this call we focused on breaking down our integrations epic, and also planned the transition from Service Templates to instance-level integrations, which is part of that epic. Multiple participants on the call highlighted its usefulness in our retro issue.

B: Yeah, I think what we decided — we sort of asked: should we do these regularly, or should we just do them as needed? I think the team agreed: let's do them as needed, when we find that there's an epic or a sub-epic that we feel needs breaking down synchronously.
A
Not
that
I
will
so
it
might
be
good
to
update
the
handbooks
I
gonna
reflect
that
I
think
this
is
a
good
one
where
it'd
be
useful
trust
to
do
it,
especially
for
handbook
first
hour
hand
for
hand
book
first
process,
cool,
we'll
move
on
to
results,
the
other
one
Craig.
If
you
can
verbalize
the
impactful
features
that
we've
added
and
then
we
can
jump
straight
into
the
next
one.
Since
you
have
the
one
that
what
on
what
went
wrong
sure.
G: Yeah, so across Enablement several impactful features were shipped — from the Enablement section, sorry. Repeating: the Search team enabled Advanced Global Search for six customers this month, and Puma was enabled on GitLab.com, resulting in — it's celebration time for sure — lots of memory reduction, and CPU usage reduction as well. That was a year-long journey, and I'm glad that it's out there. We still have some things that we need to wrap up, but it was huge for us. And then, jumping over to what went wrong this month:

G: There were some production issues that impacted our teams this month. They were unfortunate, but the response by the teams was great, and other teams helping throughout the engineering organization was great. We also found that the RCA process, while nerve-racking, was very educational and invaluable for team members, to figure out how we can avoid these in the future and what we could have done better — just a learning opportunity for everyone involved.
A: Cool. As a note, particularly for development: if we have code changes that are affecting production operational issues, I've asked that we start implementing RCAs on those, and that's just part of our continuous process. Please remember: RCAs are a blameless process. It's not about who did what wrong; it's about understanding what went wrong and how we don't repeat it. That's the key aspect: we don't want to repeat previous errors in execution. So just think of it in those terms. Great, next up we have a couple from Quality and Kyle.
H: So the first one, I would say, is probably the biggest item from Engineering Productivity. In the last milestone transition, our triage automation created a lot of confusion, specifically related to removing deliverable labels from items that were in the expired release. We're looking to improve that in the issue that is linked here in the retro doc. If you have feedback, please comment there and we'll look to act on that as well.
D: Thank you. So we made a mistake and caused a performance issue on GitLab.com: we unintentionally performance-tested GitLab.com while working on test data. This was unexpected. We also uncovered a Redis bug as part of this. There's a mini RCA that we will create in the following week. We think it has also been useful, but it shouldn't have been unannounced; people should know when we are testing.
A
Cool
I
think
so
yeah
Nikolas
I,
don't
think
is
on
line,
so
I'll
go
into
verbalizes,
which
is
they
shipped
a
bug
that
caused
many
critical
issues
all
over
the
place
or,
moreover,
it
happened
close
to
the
milestone
cut,
so
it
caused
some
additional
issues.
One
thing
I'll
look
at
is
is
whether
or
not
we've
we've
done
in
our
CEO
and
then
follow
up
with
Nick
out-of-band
on
that
aspect.
From
that,
as
from
from
that
process,
perspective
cool
next
we'll
go
to
live
discussions
and
I.
Think
Craig
and
Steven
had
the
first
one
on
process.
G: Not sure what happened with each team here, but both the Memory and the Distribution teams noted issues with proper usage of the deliverable labels. For the Memory team, it was the lack of actually applying them as we were going through and committing to our milestone, and it sounds like on the Distribution side they were removed late in the milestone, so they weren't counted in this retro when we were listing deliverables shipped. So from the Memory team side, it's just discipline in our processes; I don't think there's any automation changes that we need to make.

G: But if it continues to be a problem for us, then we will look into ways to maybe just automatically add the deliverable label, when the milestone kicks off, to anything that's in that milestone. And if Steven's on the call, I don't know if he has anything to add to this.
A
I
can
just
pro
just
give
a
little
context
on
we've
been
using
the
deliverable
labels
for
the
past
several
releases
to
populate
the
board
and
use
the
workflow,
the
ugly
Beck
his
crew
put
together
working
great.
We
got
kind
of
dependent
on
it,
and
then
we
noticed
when
the
milestone
ended.
We
were,
we
were
expecting
the
the
process
to
to
start
up
and
forward
those
and
apply
the
proper
Mis
labels
and
whatnot.
A
And
so
we
happen
to
noticed
during
our
next
planning
session
that
you
know,
after
the
release
ended
that
week,
that
things
weren't
being
populated,
and
so
we
dug
in
a
little
bit
djay
on
my
team
filed
an
issue
and
it
looked
like
maybe
there's
just
a
tweak
to
the
logic
that
needs
to
happen
to
prevent
that
from
happening
in
the
future.
So
that's
that's.
D
A: I guess one question there: we've been noticing some feedback from product management around the fact that not every team uses the deliverable label. I want us to be consistent on that aspect of it, because it is in our handbook. One aspect that is there is the automation, and I've been debating back and forth whether it's better to have no automation.
A: I know that sounds contradictory, but from the perspective that it automatically does certain actions on certain dates, it can be a little bit hard for teams to track. Or whether we need to do better communication of how that automation works, so engineering managers better understand the process. It sounds like you're leaning more towards the latter solution, from that perspective?
A
It
all
manually,
in
fact
we
weren't,
even
aware
of
the
deliverable
when
I
joined
last
year,
we
started
using
a
scheduled
label
which
had
no
automation
around
it.
It
was
all
on
us
to
to
move
things
forward
or
drop
things
out
of
a
release,
and
it
worked
reasonably
well,
but
the
big
thing
that
we
missed
from
it
that
I
had
seen
in
other
tools
in
other
places,
with
that
ability
to
track
like
how
many
times
something
has
slipped
things
were
the
long
tail
and
that
that
automated
labeling
really
tells
us
like.
A
Okay,
this
this
hasn't
just
slipped
one.
This
issue
has
slipped
two
or
three
releases.
So
let's
go
back
and
figure
out,
hey,
there's
a
issue
just
too
big.
Do
we
need
to
break
it
down?
What's
what's
what's
the
problem
with
with
moving
it
forward,
and
so
that's
the
part
I
think
of
the
deliverable
workflow
that
we
really
have
used
to
help
us
understand
what
our,
what
our
longtail
items
are:
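The long-tail tracking described here works because the automation leaves a trail: each release an issue slips, another label is applied, and counting those labels surfaces the long tail. A minimal sketch of how that trail can be queried, assuming labels of the form missed:<milestone> (the exact label scheme here is an assumption):

```python
# Sketch: find long-tail issues by counting "missed:<milestone>" labels,
# assuming the triage automation adds one such label per slipped release.
def slip_count(labels):
    """Number of releases an issue has slipped."""
    return sum(1 for label in labels if label.startswith("missed:"))

def long_tail(issues, threshold=2):
    """Issues that have slipped at least `threshold` releases."""
    return [issue for issue in issues if slip_count(issue["labels"]) >= threshold]

# Example: an issue that slipped 12.8 and 12.9 shows up for re-planning.
issues = [{"title": "Big refactor", "labels": ["missed:12.8", "missed:12.9"]}]
print(long_tail(issues))
```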
A: Okay, cool. That sounds like it definitely confirms my suspicions, which is that we need to figure out how to better make sure that we communicate overall. I'll work with Mek on that — Mek and Kyle, actually, I should say, since they're providing the mechanism — but I'll figure out how to better make sure that's understood, and then work through the process there. Cool. Dan is not on the call; he mentioned that in Manage they've received feedback, mostly from newer GitLab team members, that the product development flow is not clear.
I: Yeah, thanks. I think this joins the comment I made during the iteration office hours: I think this framework is very interesting, and it tries to solve the same problem for every development team in the company, because we all work with UX and product management and try to deliver the same kinds of features during the same cycle. So we're definitely facing the same issues, and we're each trying to find a solution on our own.

I: So I reiterate my suggestion to maybe have a working group, because we are all trying to solve this separately, and it's definitely not efficient to me. I was just mentioning that in Secure we did a run of experiments and clarifications, because we also had some difficulties clearly understanding some parts of the product development flow, and had some different interpretations of the different stages. So I'm just sharing that, and we'll probably participate in the other issue, but I'm assuming that there are plenty of other issues from each team trying to improve things.
J: This is in the handbook page, by the way: defining those roles and responsibilities, and when these things happen, is described in that handbook page. So I can refer people to it pretty easily, to say: okay, this is who's responsible leading up to this point, and then engineering is responsible, etc. On both of these things I'm happy to participate; I'm certainly very interested if we want to start a working group or some other conversation about this. I think this is something I'm pretty passionate about.
H: Yep, yeah — so this one is related to master stability: master pipeline stability. We've actually already implemented merged results pipelines and have been seeing very positive results. We went from having about three to five broken masters a week, due to stale MRs being merged, down to none: we haven't seen any in the week that this has been on. So, very promising early results. But if there is any feedback, we did link to the issue that this was done in; feel free to add it to that issue as well.
H: Yep, so we are working towards it. My apologies for leaving this out; this was an initiative that's really in parallel with performance optimizations in the pipeline. Once the GitLab pipeline runs in a shorter duration and we have some performance improvements there, we'll look towards merge trains for the GitLab pipeline.
A
All
right
we'll
keep
moving,
so
the
last
section
that
we'll
verbalize
is
issues
the
track
for
the
next
retro.
So
I
have
a
number
of
these
listed
in
here.
If
anybody
had
any
from
the
previous
retros
that
they
want
to
make
sure
included,
please
add
them
to
the
bottom:
we're
trying
to
keep
these
down
to
four
to
five,
but
we're
right
now
at
seven,
that's
okay!
As
long
as
we're
seeing
progress
towards
them!
So
that's
the
key
aspect.
We.