From YouTube: GitLab Retrospective 12.5
Description
Please add feedback here: https://docs.google.com/forms/d/12-QPpvggEsqCvZnDCnuCjqP53joKwjRMPy4PH-Mqp-I/viewform?edit_requested=true Thank you!
A
Good morning, good afternoon, good evening, GitLab. This is the 12.5 retrospective, and my name is Christopher Lefelhocz. I'm the Senior Director of Development, and I am the curator of said retro.

Summary: I really appreciate everybody's feedback. With the US long holiday weekend I was a little bit concerned going into it, but it's pretty exciting to see that we have notes from 22 different teams associated with this effort, and I'd just like to point out that two months ago we only had 11 of these. So it's pretty exciting to see the team expanding.
A
Thank you very much, Starbuck. The next one is myself: basically, we had an issue around workflows needing to be evaluated. I did not get progress made on that, so we'll include that in the next retrospective. And then the next one after that is around release post process efficiencies, and Chen has been primarily driving this. So Chen, if you're available, can you verbalize your results?
E
Yeah, so last month we added a change to the handbook about making engineering managers the ones that merge in the release post content. That merge request was merged in, and there were a couple of other links that I added here that describe the manual workflow process for sort of validating issues. And then there is an automation tool that I started that works, kind of, but in certain cases it won't always find the correct answer. So it's available for other people to contribute to, if you want to jump in on it.
A
So the highlights that I saw were, first of all, a number of folks citing productivity improvements around smaller MRs resulting in faster and more reviews, the highest months for a number of teams, or at least a couple of teams, and fewer missed deliverables for a few teams as well. We also added a second DB maintainer, which I'm super excited about, adding more maintainers to that part of the process, and UX completed three additional UX scorecards, which is super exciting to see from that perspective. That's all in the efficiency section; this is section two.
A
Next, in collaboration, we had both the customer issue and the security issue, which was a Rapid Action issue. Both of those saw some really good collaboration in a crisis situation, so it's really good to see the team pull together in those situations. And then also both Verify/Testing and Defend have alignment on direction, which is good to see, making sure that teams are moving effectively.
A
In that regard, an update around team calls: in particular, Geo has gotten so effective at async that they use that time for socialization, which is still super important working at GitLab and an important part of the process. But it's also important to see that we can do things effectively in an async fashion, so that's great to see.
A
Next up, in the process section, the tactics team has been pretty excited about their new process, working in a Kanban type of fashion. And then on reference architecture and performance test coverage, we got our 50k reference architecture certified, which is super cool, so we're able to give customers the right set of things, our support is better associated with that, and we also have better reporting associated with it. And then we've added some additional measurements and capabilities around our pipelines.
A
Other
part
is,
is
we're
actually
thinking
about
how
we
can
improve
that?
So
that's
that's
great
to
see.
Now
we
go
on
to
what
went
wrong
section
and
this
will
again
be
pretty
lightning
round
again,
but
just
to
note,
this
section
is
usually
longer
because
we
do
focus
on
how
to
improve
a
lot
more
than
we
focus
on
our
successes.
That's
not
to
say
that
we
aren't
being
successful.
That's
seeing
that
we
just
focus
in
those
areas,
because
we
want
to
improve
in
the
efficiency
section,
there's
a
lots
of
feedback
around
efficiency.
A
Reviews rushing a feature, and one comment at the end, which I'll probably dig into over the next period, which is a case where MRs are potentially being broken up too much, with a dependency structure being built out, and that's causing issues. And, last but not least, pipeline issues: pipeline feedback, or issues with pipelines being challenging, so we'll be looking at that as well. In the collaboration section, unintended changes that were fortunately caught. This is a good example of why we need code reviews and why we need reviews.
A
The things-in-general one I actually debated about whether it really should be in this section. It's interesting that we got recommendations that didn't actually work out, but it's great that we caught it beforehand, so that's great to see from that perspective. And then in the customer experience section, we did have a customer who experienced some data loss, so that's obviously problematic; it's good to see that we're working on the specific problems tied to that. And then Auto DevOps being broken for a couple of days obviously affects our productivity.
A
So
that's
something
that
we
want
to
make
sure
we're
working
to
build
a
better
monitoring
dashboard
around
that
and
effectively
manage
in
the
planning
section.
A
lot
of
team
improvement
focuses
making
sure
we
plan
appropriately
escalations
affecting
our
ability
to
deliver
on
what
we
committed
to
for
the
release.
Please
remember
if
we
have
an
escalation
and
that's
a
reprioritization,
that's
a
repression!
That's
the
right
thing
to
do
so.
A
It's good to bring those up here, but also understand that those are the trade-offs that we're going to make as a business, and it's important to see that we're actually dealing with the security issues as quickly as possible, so making the right prioritization call is super important. And then, in line item five of this section, deployments to staging, canary, and production: we're seeing that this is getting slowed down.
A
Potentially we could speed up here if we had more frequent deployments to staging, so that's something we need to potentially look into. And then the last two sections are around both flaky masters and notifications, and the pipeline triage rotations. I don't know if you could summarize this better for us, because this kind of came in last minute. Sure, happy to.
G
So, on the first point: yes, it wasn't raised by me, but it has been raised in the Secure retro and by a few folks from the Secure stage. The noise-to-signal ratio in the channels where we are notified about broken masters and similar issues is not very good, and it leads to developers either muting that channel or maybe even leaving the channel altogether, which, you know, makes the situation even worse, because there are fewer eyes on a broken master to actually fix it.
F
I added an item directly after, at the end, to address the notification amplification specifically, and we're going to look into the others as well. I'm going to take on the rest of the updates here. The EP team, in particular, is well aware of this, and for flaky masters we actually added a lot more debug traces to find out what the failures are; they range from Karma to static analysis and also the deprecation tests. We've made some improvements here, but please be aware that we are actively looking deeply into it.
F
The next point I have is that on one occasion code was merged to master even with jobs failing, and that is also contributing to flaky masters; I think it's because the output isn't shown. I was made aware of this recently, so I'll be asking the team to look deeper into why this was the case, and potentially also to address blocking on the gates there as well. Moving on to the pipeline triage rotation.
F
This continues to be overwhelming for the Quality Engineering department. Even with two people on rotation it's a full-time job for two people, and we're looking into how to better provide reports to unblock ourselves. In addition to the failing tests, they're also seeing timeouts and runner issues, so there are potentially underlying infrastructure issues that we need to root-cause in the near term. That's the summary of the updates for this section.
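A minimal sketch of one way a triage rotation could cut through this kind of noise: pull the failed jobs from recent master pipelines via the GitLab REST API and count which job names fail most often. The project path, token variable, and ranking logic below are illustrative assumptions, not the team's actual tooling.

```python
# Sketch: summarize which CI jobs fail most often on recent master pipelines,
# using GET /projects/:id/pipelines and GET /projects/:id/pipelines/:id/jobs.
import os
from collections import Counter

import requests

API = "https://gitlab.com/api/v4"
PROJECT = "gitlab-org/gitlab"               # illustrative project path
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}


def failed_job_counts(per_page: int = 20) -> Counter:
    """Count failed job names across the most recent master pipelines."""
    project = requests.utils.quote(PROJECT, safe="")
    pipelines = requests.get(
        f"{API}/projects/{project}/pipelines",
        params={"ref": "master", "per_page": per_page},
        headers=HEADERS,
        timeout=30,
    ).json()

    counts: Counter = Counter()
    for pipeline in pipelines:
        jobs = requests.get(
            f"{API}/projects/{project}/pipelines/{pipeline['id']}/jobs",
            params={"scope[]": "failed", "per_page": 100},
            headers=HEADERS,
            timeout=30,
        ).json()
        counts.update(job["name"] for job in jobs)
    return counts


if __name__ == "__main__":
    # Print the ten most frequently failing job names as a triage starting point.
    for name, count in failed_job_counts().most_common(10):
        print(f"{count:3d}  {name}")
```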
A
Excellent, appreciate that. We'll now move on to the "how we can improve" section. This is the section where we get a little more conversational; we've got about 15 minutes left for that and to add additional action items, so we're roughly on time. Matt, if you're available, would you like to talk about retrospective participation? Yeah.
A
Basically, we have orchestration scheduling both a monitoring dashboard and alerting for Auto DevOps, basically for that issue we mentioned before, and it's scheduled for 12.7. So that's encouraging to see, that we're going to fix that in the next couple of months. Alison Brown, are you available to articulate the one on collaboration?
E
Oh, I can speak on the Monitor ones for Alison as well. Yeah, so I also brought up a really good point here, saying that, since we're starting to do more error tracking on the Monitor:Health side, having some kind of GDK seeding would make it easier to test, and it would also help review times, since people may not be as familiar with this feature. So we've created an issue to investigate this effort, and we're going to try to get investigating and hopefully have something in there for 12.7.
A
And
this
is
one
of
the
areas
that
we
actually
have
some
focus
on
in
FY
21
planning
for
a
team
to
be
better
managing
the
situation.
So
it's
encouraging
to
hear
that
there's
some
needs
around
this.
I
also
added
this
as
an
action
to
check
to
basically
address
for
an
ex
retro,
because
we
don't
want
to
wait
for
that
team
basically
get
on
these
things
and
feels
like
a
good
one
for
us
to
track.
A
Rachel had the next one. Basically, she's talking about the fact that when they're finding issues outside of Geo, it'll oftentimes affect their ability to deliver, because they're going to basically have to go outside of their team to get something done. Basically, her comment is that there's a give-and-take here. Sometimes you should definitely go and fix the issue yourself, particularly if the other team's prioritization doesn't allow your team to move forward; but you should also be talking to that team to make sure that what's getting prioritized by that team covers it, if they can manage it. So it's a little bit of give and take, but we want to optimize so that teams feel empowered to go fix things in other areas of the code base, while at the same time, if possible, getting the work prioritized by the team that owns that particular part of the code.
H
Yeah, so during the research for the Rapid Action, one of the issues that we found was with the testing infrastructure. On the one hand, we wanted to make sure the website was happy and that unit test coverage was there; on the other hand, we want to collaborate with other teams, the testing team, to put effort into the issue that we found doing that action. I think we hope we can catch the issues as much as possible by ourselves before they get outside, and also as we're making changes.
A
Cool, that makes a lot of sense; good observation. The next one is Niko's, who I don't think is on the call, but he's working on identifying cross-team dependencies in advance of the milestone, and making sure that we use the workflow blocked metric, or the workflow blocked labels, so that we can better know what's been blocked, based on either cross-team collaboration or otherwise.
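A minimal sketch of how blocked work could be reported out of GitLab by label, assuming a scoped label such as workflow::blocked is applied consistently; the group path, label name, and token variable are illustrative assumptions.

```python
# Sketch: list open issues carrying a "blocked" workflow label for a group,
# using GET /groups/:id/issues from the GitLab REST API.
import os

import requests

API = "https://gitlab.com/api/v4"
GROUP = "gitlab-org"                        # illustrative group path
LABEL = "workflow::blocked"                 # assumed scoped label name
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}


def blocked_issues() -> list:
    """Return open issues in the group that currently carry the blocked label."""
    group = requests.utils.quote(GROUP, safe="")
    resp = requests.get(
        f"{API}/groups/{group}/issues",
        params={"labels": LABEL, "state": "opened", "per_page": 100},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # Print a quick "what is blocked right now" report.
    for issue in blocked_issues():
        print(f"{issue['references']['full']}  {issue['title']}")
```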
You've got the next couple.
E
Yeah, planning is still a little bit of a challenge for the Monitor:Health team. Specifically, it's been a little bit difficult to figure out, on a board, visibility for work that isn't specifically scheduled for the milestone as part of the planning specific to the stage. Examples would be working-group work: Kristen's been working on the Webpack working group and wanting to help drive that, but it's a little bit hard to surface that. So a need for better collaboration in the planning process has been identified, and we're going to try to have that conversation early on, so that engineers can bring up things that they would like to be part of the planning board, as a first step to resolving this issue.

The second one, from the PM side: it's been a little bit hard to manage the planning board, since we have 25-plus issues, which include deliverables as well as some small improvements to fill in the cracks in case deliverables finish early. So there are different, interesting challenges here. We pointed out that we have the workflow board, which we're going to try to use to help visibility and see how issues are progressing. This is dependent on engineers updating the labels as well, which we've been trying to get better at, as well as scoping an issue down to being, you know, small enough that "in dev" tells you roughly how long it's going to take, versus a big issue that could be in dev for a long time.

We're also considering experimenting with another approach to the planning board: scoping features into columns and encouraging engineers to break features down into smaller iterations, which also funnel into the workflow board being a little bit easier to understand, because issues will generally take around the same amount of time. Currently the workflow board will tell you where things are in development, but it doesn't necessarily tell you accurately how close an issue is or how long it may take. So those are the issues and how we're trying to resolve them.
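The "where it is, but not how long" gap described above can be approximated from GitLab's resource label events API, which records when labels were added and removed on an issue. A minimal sketch, assuming scoped workflow:: labels and an illustrative project path and issue number:

```python
# Sketch: estimate hours spent in each workflow state by replaying label history,
# using GET /projects/:id/issues/:iid/resource_label_events.
import os
from datetime import datetime, timezone

import requests

API = "https://gitlab.com/api/v4"
PROJECT = "gitlab-org/gitlab"   # illustrative project path
ISSUE_IID = 12345               # hypothetical issue number
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}


def time_in_workflow_states(prefix: str = "workflow::") -> dict:
    """Return hours spent per workflow label, based on add/remove label events."""
    project = requests.utils.quote(PROJECT, safe="")
    events = requests.get(
        f"{API}/projects/{project}/issues/{ISSUE_IID}/resource_label_events",
        params={"per_page": 100},
        headers=HEADERS,
        timeout=30,
    ).json()

    opened_at, hours = {}, {}
    now = datetime.now(timezone.utc)

    for event in events:
        label = (event.get("label") or {}).get("name", "")
        if not label.startswith(prefix):
            continue
        stamp = datetime.fromisoformat(event["created_at"].replace("Z", "+00:00"))
        if event["action"] == "add":
            opened_at[label] = stamp
        elif event["action"] == "remove" and label in opened_at:
            delta = stamp - opened_at.pop(label)
            hours[label] = hours.get(label, 0.0) + delta.total_seconds() / 3600

    # Labels still applied count up to "now".
    for label, stamp in opened_at.items():
        hours[label] = hours.get(label, 0.0) + (now - stamp).total_seconds() / 3600
    return hours


if __name__ == "__main__":
    for label, hrs in sorted(time_in_workflow_states().items()):
        print(f"{label}: {hrs:.1f} h")
```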
A
Cool. One observation: it might be worthwhile to talk to, I believe it's the Manage team's user experience researcher, or somebody from user experience. If we think that there's a feature or a piece of functionality that would help, that would be good feedback to get back into the product. From that perspective, as we were kind of going through it, that particular scaling issue with boards is definitely something we should be thinking about. Cool. Craig, do you want to articulate yours from memory? Sure.
I
So we still have a growing team, and we're still iterating on the best ways to commit to issues in milestones. One of the things we talked about was spending a little more time planning and fleshing out stories before picking them up. We've talked about different ways that we could bring that up: we have office hours a couple of days a week where we just have an open Zoom channel to pair, and we could maybe use some of that time to talk about some of the stories, ask some questions, and iterate on them.
I
Within GitLab we can provide assumptions, provide an approach, and just kind of work through it. We also amended our team meeting agenda to make sure that we call out the asynchronous items that we need to follow up on after the team meeting. So we're just iterating on ideas for how to better understand the issues that we are committing to within milestones.
A
Sounds great. It's good that the team is focusing on that, because it's super important. CJ is not available; his team is looking at, basically for trivial changes, a one-approval model. So this is an example where the team is experimenting: for trivial things, can we actually go to a one-approval model, as opposed to a two-approval model, without regressing or causing quality issues?
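A minimal sketch of how such a one-approval setting could be wired up through GitLab's merge request approvals API; the project path and rule name are illustrative assumptions, not the team's actual configuration.

```python
# Sketch: create a project-level approval rule requiring a single approval,
# using POST /projects/:id/approval_rules from the GitLab REST API.
import os

import requests

API = "https://gitlab.com/api/v4"
PROJECT = "gitlab-org/gitlab"    # illustrative project path
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}


def set_single_approval_rule(rule_name: str = "Trivial change experiment") -> dict:
    """Create an approval rule that requires one approval on merge requests."""
    project = requests.utils.quote(PROJECT, safe="")
    resp = requests.post(
        f"{API}/projects/{project}/approval_rules",
        json={"name": rule_name, "approvals_required": 1},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    rule = set_single_approval_rule()
    print(f"Created rule {rule['id']}: {rule['name']} "
          f"(approvals_required={rule['approvals_required']})")
```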
Nick Naquin, are you available to articulate yours?
J
Yeah, as a fairly new manager I didn't fully understand the complexity of the security MR process and scheduled too many for this milestone, even though, in and of themselves, the security fixes were fairly small; there was a lot of overhead in the process. So I think, maybe it won't always be possible, but whenever it is possible we can better spread out the security MRs for a security issue across milestones.
A
Cool, makes sense. Jen, you may want to hit me up to talk about how we can potentially dig into some additional information; there are a bunch of different ways we could potentially get at things like MR time-to-merge, and you might want to isolate it to your team to look at those aspects.
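A minimal sketch of pulling MR time-to-merge for a single project from the GitLab REST API, computing merged_at minus created_at; the project path and sample size are illustrative assumptions.

```python
# Sketch: compute the average time-to-merge for recently merged MRs,
# using GET /projects/:id/merge_requests from the GitLab REST API.
import os
from datetime import datetime

import requests

API = "https://gitlab.com/api/v4"
PROJECT = "gitlab-org/gitlab"    # illustrative project path
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}


def _parse(ts: str) -> datetime:
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))


def average_time_to_merge_hours(per_page: int = 100) -> float:
    """Average hours from MR creation to merge over the most recent merged MRs."""
    project = requests.utils.quote(PROJECT, safe="")
    mrs = requests.get(
        f"{API}/projects/{project}/merge_requests",
        params={"state": "merged", "order_by": "updated_at", "per_page": per_page},
        headers=HEADERS,
        timeout=30,
    ).json()

    durations = [
        (_parse(mr["merged_at"]) - _parse(mr["created_at"])).total_seconds() / 3600
        for mr in mrs
        if mr.get("merged_at")
    ]
    return sum(durations) / len(durations) if durations else 0.0


if __name__ == "__main__":
    print(f"Average time to merge: {average_time_to_merge_hours():.1f} hours")
```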
Cool. Stephen, you've got the next point. Yes.
M
Sure. This was our first time through with the recent change to make engineering managers merge the release posts, and we were successful, but we had some last-minute scrambling around which things were actually going to make the release and have a post. So I think, to improve this next time around, we should spend some more time up front figuring out which things are going to be release-worthy, get them properly labeled, and then follow them more closely, so that we don't have a scramble at the end to get statuses and everything in place and buttoned up for the release post.
I
So, in addition to amending our team meeting to have asynchronous items, we also talked about time-boxing, because we have a tendency to rabbit-hole on interesting and confusing problems (complex problems is probably the better term). So we're coming up with ways to time-box. I'm happy to be the time-box police, and I'm even sharing a Slack message right now with different apps that we can use to share something like a stopwatch in the background, so we'll figure out a way to make it more visible.
J
Yeah, we had some issues with context switching between working on different integrations in the milestone. So, just for the future, if possible, we want to work with product to try to group work on different integrations together, and try to allow an engineer to focus on a specific integration for that milestone, to reduce context switching. We found that even if the issue itself is straightforward, there's overhead in learning a new integration and also just in switching over to developing on it.
C
So that led to the onboarding experience being way too steep for new starters, and that was one of the items that came up as part of our retrospective: we were too late in getting a curated set of resources available for people that were joining. The response to that is to create what is affectionately referred to as a "week two" onboarding issue, so a version-one issue template has been created for that to aid with this process. We will be iterating on this going forward, as it identifies the resources that engineers need to get started with in Defend, and as we start to complete our first series of issues we will do a retrospective on what additional resources were needed for their assignments, and we will continue to improve this going forward to hopefully make it easier as we continue to mature as a stage.
A
Cool. Do we need a "week two" specific to each section? And then we also need to make sure that we're basically cultivating anything that we view as common, so that all team members are getting up to speed appropriately. Cool, we'll move on to improvements to track for the next release. I'm going to just go ahead and quickly summarize the ones that I found and added, given the short timespan we have. In particular, the Fulfillment team: they recognized, above, a problem that they're having with their MRs.
A
They were seeing an underlying issue associated with it, and it feels like this is a potentially systemic issue, so I'm going to add an issue here to track it and see if there's a way to basically get to an estimate of the cause of delay and the estimated wait they're seeing, and we'll need to see if there's something more common we can do to fix that. And then, above, we talked a little bit about the GDK seeding work.
A
I felt like that's a good one for us to track and make sure everybody's using it effectively, because it's pretty much group-wide. Also, on the single-reviewer experiment: that feels like a good one to report back on, on what happened over that month, because it may be something other teams want to think about, again for trivial changes, but making sure that the criteria for "trivial" are understood from that perspective. And then on the flaky pipelines, Mek has a couple of issues there; one in particular is that the development channel has gotten really noisy. And then there was the one that I missed from last time, which is a particular part of the workflow I need to evaluate to determine whether or not we can either reduce steps or automate steps associated with it. Are there additional items that we would like to track at a high level to report on at the next retro?