From YouTube: GitLab Retrospective 12.6 (Public Livestream)
Description
Please add feedback here: https://docs.google.com/forms/d/12-QPpvggEsqCvZnDCnuCjqP53joKwjRMPy4PH-Mqp-I/viewform?edit_requested=true Thank you!
A
Good morning, good afternoon and good evening, everybody at GitLab. This is the 12.6 retrospective, or summary retrospective as it were, because there are several retrospectives going on. My name is Christopher Lefebvre, I'm Senior Director of Development, and I will be emceeing this retrospective. We've had great information even in a shortened development timeframe: given the holiday season, the month of December, we still had 26, or 22, teams do async retrospectives and provide information up, and it looks like we've got some good results to talk about and discuss. This consists of four sections.
The first section is where we talk about the previous tasks. Then we talk about what went well, what went wrong and how we can improve, and then, last but not least, will be the action items we want to track to the next retrospective; so I guess technically it's five sections. I will talk very quickly, but first we'll start with the first section, where we'll share the improvement tasks from the previous retrospective. So, Chris, do you want to verbalize what's been going on with the first one?
B
Sure. Chris Baus, the Fulfillment front-end engineering manager. Let's talk about some of the issues that we've had with weighting issues, specifically for the Pajamas project that we're working on: we're currently converting the Customers Portal to Pajamas and Vue simultaneously, and in that process we found that some of our issues were incorrectly weighted in 12.6. We noted that four issues were incorrectly weighted, and three of them rolled over to 12.7 and 12.8, which is far too high of a ratio. We're trying to improve on that ratio currently, and to do that
we went through each issue individually. What we're going to do going forward is a partially async process, where we will assign issues to be weighted to team members ahead of our weekly status meeting, and then at the weekly status meeting try to come to consensus on the weight of the issue. That lets us drill down a little bit deeper into potential issues with the weighting of the issue and come to a more accurate consensus weight. I'm going to continue tracking this.
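To make that partially-async process concrete, here is a minimal Python sketch of the kind of check it implies; the issue IDs, the votes, and the spread threshold are all invented for illustration, and this is not GitLab's actual tooling.

```python
# Minimal sketch (not GitLab's actual process or tooling): collect async
# weight votes per issue, then flag issues whose votes diverge so they get
# discussed at the weekly sync before settling on a consensus weight.
from statistics import median

# Hypothetical async votes: issue -> {team member: proposed weight}
votes = {
    "issue-101": {"alice": 2, "bob": 2, "carol": 3},
    "issue-102": {"alice": 1, "bob": 5, "carol": 3},
}

for issue, ballots in votes.items():
    weights = sorted(ballots.values())
    spread = weights[-1] - weights[0]
    if spread <= 1:
        print(f"{issue}: consensus weight {median(weights)}")
    else:
        print(f"{issue}: spread {spread} too wide -> discuss in weekly sync")
```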
A
Cool, it's great to hear that it's both implemented and that there's continuous improvement on it, and that you're working on it early, so I really appreciate that, Chris. We'll move on to the next one, which is around the caching work; I actually looked at this one yesterday. Basically, it was elected to defer this one for this release.
D
Sure, I am. So: flaky pipelines and noisy notifications in the #development channel. I'm happy to announce that, with great work from the Engineering Productivity team, we already fixed this. We created a new channel called #broken-master, with the Engineering Productivity team being the DRI in charge: if the pipeline breaks, they will fix it immediately, and if we need help from any of the other development teams or groups, we will fan out. There's an SLO tied to these issues as well, and we will start measuring them.
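As an illustration of what measuring that SLO could look like, here is a minimal Python sketch; the four-hour target and the timestamps are assumptions made up for the example, not the team's actual numbers.

```python
# Minimal sketch (assumed SLO and data, not GitLab's real dashboard): measure
# how long broken-master issues stay open against a time-to-resolution SLO.
from datetime import datetime, timedelta

SLO = timedelta(hours=4)  # assumed target; the actual SLO isn't stated here

incidents = [  # hypothetical (opened, closed) timestamps
    (datetime(2020, 1, 6, 9, 0), datetime(2020, 1, 6, 11, 30)),
    (datetime(2020, 1, 7, 14, 0), datetime(2020, 1, 7, 19, 15)),
]

met = sum(1 for opened, closed in incidents if closed - opened <= SLO)
print(f"SLO met on {met}/{len(incidents)} broken-master incidents")
```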
D
The next one I have is Owen's: since it's less noisy now in #development, we are encouraging people to join back and also unmute. We made this a multimodal communication, again in a company call, and we already sent it in email form as well. The last one is that code was merged with static analysis failing. I believe this has been fixed, confirmed by Rémy, and this should now be correctly catching static-analysis bugs. Thanks.
E
Improve our code review performance. In terms of progress made, we introduced the first response SLO, the reviewer response SLO; I've also added that to the week-in-review for engineering to look at, for everybody. Essentially, we've now formalized that the first review response, and further reviewer responses, should happen within two business days, and in the documentation we've got some instructions for what can happen if you can't make the deadline, or what the author can do if somebody isn't able to respond. Next up we've got Nick, who has put up his hand to work on measuring and reporting on review times. We need to look at extending the data that we can get for these, to measure the impact that these changes have on our review times.
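A minimal sketch of the kind of measurement this implies, using made-up timestamps rather than GitLab's real review data; the average and 90th-percentile figures are the same two numbers referenced later in this retrospective.

```python
# Minimal sketch (hypothetical data, not GitLab's review-time report): compute
# the first-response delay per MR, then the average and 90th percentile.
from datetime import datetime

# (review requested, first reviewer response) per MR -- made-up timestamps
mrs = [
    (datetime(2019, 12, 2, 9, 0), datetime(2019, 12, 3, 10, 0)),
    (datetime(2019, 12, 4, 13, 0), datetime(2019, 12, 9, 8, 0)),
    (datetime(2019, 12, 5, 11, 0), datetime(2019, 12, 5, 16, 0)),
]

delays = sorted((first - asked).total_seconds() / 3600 for asked, first in mrs)
avg = sum(delays) / len(delays)
p90 = delays[min(len(delays) - 1, int(round(0.9 * (len(delays) - 1))))]
print(f"average: {avg:.1f}h, 90th percentile: {p90:.1f}h")
```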
E
Clemente has also put up his hand to work on establishing the screen-share review sessions for MRs, and also teaching by example: one of the best ways we've found to teach people is to show them by example, and he'll be willing to put that in place. The last one, which I'll be working on over the next month, is the consideration of having maintainers with domain expertise. We've had quite a lot of conversation around that, and I'll be opening an MR with a proposal for that over the next week or so.
A
That covers from when an MR goes to review to when it actually gets merged, so we'll be tracking this closely as well at a global level, to kind of see how these changes work through. It might be good to put these down below as things that we talked about, for an update next month, because it feels like this is an important area, if you don't mind doing that.
Excellent, I think it's the same ones I'm looking at: that's the one we want to see improve, both in the average and the 90th percentile, from a particular perspective. Thanks for that, Mac. Cool, the next one is an action item for myself. We were looking at the verification step, specifically at scale: we don't have the capability to verify everything going into production, particularly in our MRs. Currently, though, we do view it as important for significant features, or for teams that feel that verification behind feature flags is still important.
So we didn't want to remove the actual step. What we want to do is make it optional, so that for minor changes, and for things that we feel are automated or wouldn't be caught through normal regression failures anyway, you don't necessarily have to do that from the workflow verification perspective; plus, reproducing bugs can also be tricky in a workflow verification aspect. Cool. Now we move into the section where I talk extremely fast around the areas of what went well this month and what went wrong.
Excuse me. In the area of what went well, in collaboration, it's important to note that there were seven different instances of both inter- and intra-group collaboration, so that's really good to see; I'm really glad to see that. Of note in the efficiency section, we love the new security process. This one skews our MR counts artificially, but it's actual work that we're doing, because we're moving off of dev, which this supersedes. I'm excited to see us use the feature, both from a capability and a dogfooding standpoint,
as well as it moving us onto a more consolidated base, so that's good to see as well. And then we have a definition of how to use epics and issues for a given team, which is important to get everybody on the same page. In our results section, we have another DB maintainer added, so we're now at three; super excited about that, to see us increasing the number there to help with reviews.
We also had a research-based MVC for code review analytics happen, which is also really good to see, because that'll help us be more efficient in delivering functionality, as well as delivering that actual functionality. And then another thing to note is point I, where we have an example of a team using the optional workflow verification, and that's been great at catching some cases where the feature hasn't been working quite as we would expect. So that's excellent to see as well.
So that's a great thing as well; it was verified much more quickly as a result. And then, last but not least, in the results section, line J: the Secure team shipped more items than they slipped, showing a good say/do ratio, so I'm excited to see that trend being in the positive direction.
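For readers unfamiliar with the metric, a quick sketch of the arithmetic behind a say/do ratio; the counts below are made up, since the transcript doesn't give the team's actual numbers.

```python
# Minimal sketch: "say/do" here is the ratio of items delivered in a milestone
# to items committed to ("said"); the numbers are hypothetical.
said, done = 20, 17          # committed vs. shipped within the milestone
slipped = said - done        # items that rolled over
print(f"say/do ratio: {done / said:.0%}, slipped: {slipped}")
```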
A
Related to planning, one team in particular defined their estimation process; you'll notice that estimation processes are different from team to team. So, a very cool feature that the testing team has created.
Now we move to what went wrong, and the what-went-wrong section is longer than what went well. That's because we always focus on the challenges; that's not to say that we aren't doing great stuff, because we're doing lots of great stuff here. That's just where we focus our time: on how we can improve. So, highlights there. In the efficiency section, pipeline failures have obviously been near and dear to our minds of late, and we're basically working on addressing those.
The other thing, which I'll spend a little bit of time reviewing with Rachel, is access requests taking a long time: making sure that we get back on that, and seeing that our SLAs are being met there with our sister teams, so that we can make sure the right things are happening. In the planning section, a story that is of special interest to me: large MRs and not getting work done in the cycle.
So it sounds like we are fighting the normal challenges of any development team, which is: we are very iterative, and we want to try to push MRs to be smaller and smaller and smaller, so we want to make sure that happens. In the deployment section, it looks like we have some regression failures associated with the bot notifications; not regressions, I should say, but some consistency challenges with it. So it looks like we need to dig in there a little bit and understand what's going on and why it's missing some MRs associated with that.
In the documentation section, which is section 5a for those who are trying to keep up with me, something kind of caught my attention as well, in regards to us thinking in terms of what additional things we can catch on staging before we actually get it out to production. In the regressions section, 6a: noting the fact that we're transitioning from Ruby to Golang for GitLab Shell, and as part of that we've had some regressions related to it.
A
So
that's
a
another
case
where
we'll
have
to
be
thinking
about
in
terms
of
as
we
transition
certain
parts
of
our
code
base
from
brewery
to
go.
How
do
we
make
sure
we
don't
regress
those
pieces
of
it
and
then,
in
the
testing
section
as
above,
we
have
some
pipeline
challenges
and
a
defend
has
been
having
some
growing
pains
related
to
the
fact
that
their
coverages
is
not
as
strong,
though,
to
understand
that
to
understand
that
situation.
I
want
to
call
it
that
that
team
is
extremely.
having formed over the past couple of months, and because of that, you know, we're just starting out; because of that, no one should be embarrassed about our iteration. This is a good example of the team calling themselves out up front, and I appreciate it. Great, I somehow managed to say all that within the 15-minute timeframe, which leaves us 10 minutes to talk about how we can improve and the action items we'll want to track for the next retros. So we'll start off with Thomas: can you talk about collaboration?
G
Sure thing. So, one practice that we've started using within Secure, during our weekly stage-wide meetings, is making sure that we're taking time to celebrate issues that are being closed within a given iteration. The challenge that we realized during this past December, during 12.6, was that the list of issues that we were closing was getting quite long and, as a result, was causing a number of folks to just ignore that portion of our agenda. So we wanted to make sure that we keep that portion quick,
giving folks something to be involved with, getting them into the issues themselves, and just making sure that we're calling out the usage and demonstrating that it's working, as part of the way that we're celebrating the things that we're shipping. That was something that was noted, and we've got a tracking issue; we're beginning to start working towards that within this item as well, for the folks that are curious. Cool.
A
Thanks, Thomas, I put it down below; I hope that's a good one to talk about next time, because it feels like that's good sharing. If that turns out to be a successful way to do that, we may want to think about other teams leveraging that same type of model. Gabe, are you available to articulate for Plan?
F
I am, yeah. This is in response to Shawn's note above about things that didn't go so well, and it was more around the fact that having our weight range capped at five kind of hides complexity, or makes it easier to hide. You might have something that's a true five, but since that's the highest weight that we assign, it also means that a five could be a 13 or something larger, and I think extending the weight range out to something like the Fibonacci sequence would help.
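A minimal sketch of the proposal as described, not an existing GitLab feature: with weights capped at five, everything large collapses into a single bucket, while a Fibonacci-style scale keeps distinguishing genuinely larger items.

```python
# Minimal sketch: compare a capped 1-5 weight scale against a Fibonacci scale.
FIB_SCALE = [1, 2, 3, 5, 8, 13, 21]

def capped_weight(effort: int) -> int:
    # everything at or above 5 collapses into the "5" bucket
    return min(effort, 5)

def fibonacci_weight(effort: int) -> int:
    # round up to the nearest Fibonacci bucket
    return next(w for w in FIB_SCALE if w >= effort)

for effort in (4, 5, 9, 13):
    print(effort, "->", capped_weight(effort), "vs", fibonacci_weight(effort))
```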
H
John, you're covering for Charlie, I believe? Yeah, this one's pretty straightforward. It's just that, as we put more and more of our features onto GraphQL on the back end, there are very few reasons to couple the front-end and back-end work in MRs. Since the last retro we've been looking at how we can improve our throughput, so we're going to keep an eye on this, among other things, but yeah: there should be very few circumstances where we have to do both the front end and the back end in one MR if it involves GraphQL.
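A small sketch of why GraphQL decouples the two sides: once the back end exposes a field in the schema, front-end work can proceed against the public API in its own MR. This example queries GitLab.com's public GraphQL endpoint for a public project, so no authentication is needed; it is illustrative and not part of the discussion above.

```python
# Minimal sketch: a client-side query against GitLab's public GraphQL API.
import json
import urllib.request

query = """
{
  project(fullPath: "gitlab-org/gitlab") {
    name
    description
  }
}
"""

req = urllib.request.Request(
    "https://gitlab.com/api/graphql",
    data=json.dumps({"query": query}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["data"]["project"]["name"])
```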
I
This is just kind of a quick thing. You know, any issue out there has a risk of being sort of an iceberg, where the 10% you see doesn't really tell you about the 90% you don't. I'm actually on the analytics team, and we've just been helping out with some security issues in Manage, so I think the access team already knows about this, but it's something that was kind of a surprise, I think, to us analytics developers: these security issues that exist tend to be,
you know, at least 50% of the time, icebergs. There are a lot of edge cases and little things that people don't notice that end up blowing these issues up into bigger things. So, just kind of something I thought was worth sharing: if you encounter a security issue, you know, maybe increase your assumed risk of it being an iceberg.
A
Do we have that documented anywhere in the handbook? I don't know that we do; it might be good to go back. Can you take an action, we won't track it, but can you take an action to go back and see? If we don't have it in the handbook, you're at a good place, but I think it's important that it's at least known in some fashion. Sure thing, I'll do that. Cool, thanks. Craig, are you up? Yeah.
J
So this was a quote by Matthias during our retro, and this month, this milestone, was especially interrupt-driven for the memory team. I thought it was a great quote from Matthias about focusing on breaking down big problems and iterating on them, and, while we're doing that, part of the memory team charter is to start building out proactive tooling.
A
That's awesome, thanks for sharing that, Craig, and that's a good reminder for us. At the top of the engineering page there's a section that talks about efficiency and things should just work: fixing problems. So if you're an engineer and you see something that's broken, and you feel like it needs to be addressed, and it takes less than half a day to fix,
you should spend the time to go work on that MR and make it happen. We even have a label for it, which I think is "things should just work", and we're going to start tracking that label, because we think it's important that we give engineers the capability both to chip away at problems and also to solve them. Cool. Steven Wilson, are you available, sir?
A
We've recently started using the deliverable workflow automation; previously our projects weren't in that workflow. It gave us some great visibility into things that are slipping, and going forward we're going to try to understand the themes behind things that are slipping, to get better at being predictable. That's really about it. Our numbers are, I would say, not poor, but they're not as high as we would like them to be.
Previously we just didn't have a lot of visibility into it, and now we're at least at a stage where we understand, you know, and have visibility into both where we are and how we improve. Cool; I wonder if this is one that we should report back on next time, just to say whether it worked well or whether it didn't work well, that kind of thing. I'm not looking for, you know...
Gotcha, yeah, we can add something there. We also have an OKR to track our overall raw numbers. Previously I was doing that manually and, of course, it's great to have labeling in there, so that I spend less time, you know, spelunking and more time just running reports with the labeling. But we can take it a step further and try to share some of the themes. Cool.
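A minimal sketch of the theme reporting described here, with invented labels and counts; the point is that once slipped deliverables carry labels, a raw slip rate and a per-theme breakdown fall out of a simple count rather than manual spelunking.

```python
# Minimal sketch (hypothetical labels and data, not GitLab's actual OKR
# report): compute a slip rate and group slipped deliverables by theme label.
from collections import Counter

# each slipped deliverable tagged with an assumed "theme" label
slipped = ["unexpected-dependency", "underestimated",
           "unexpected-dependency", "interrupt-work"]
committed = 15

print(f"slip rate: {len(slipped) / committed:.0%}")
for theme, count in Counter(slipped).most_common():
    print(f"  {theme}: {count}")
```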
A
Are you available?

K
Yeah. So, Florin, the Growth: Acquisition team. We defined a lot of our processes this milestone. One other thing that we want to work on is meeting discipline: running our weekly sync meetings in a similar fashion to how our group conversations are run, where agendas are filled out beforehand, the meeting starts 30 seconds afterwards, and they're very efficient. So that's one thing which we're working on.

A
Cool, thank you. Michelle?
C
We verified that it was working, and we released a release post, and then we found out that it's actually behind a feature flag and it's not turned on, and there's no clear, super easy way to tell; the only really good way to tell if it's behind a feature flag and it's off is to talk to the actual engineer who did it. So we want to be more efficient. I linked the tracking issue there; I still need to fill out some more details on that, but we have a few proposals already
that I think would work, and one of them is to update the MR template so that we can just specify that, as well as holding ourselves more accountable to using the feature flag label. We'd like to get some other ideas there as well, to sort of track that moving forward and see how we can improve. Thanks.
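An illustrative sketch in plain Python (not GitLab's actual Feature API) of how a flag that defaults to off produces exactly the situation Michelle describes: the feature verifies fine wherever the flag is enabled, yet production users still see the old path, which is why labeling flagged MRs matters.

```python
# Minimal sketch: a feature flag that defaults to off until explicitly
# enabled, so a verified feature can still be invisible in production.
FLAGS = {"new_checkout_flow": False}  # hypothetical flag name, default off

def feature_enabled(name: str) -> bool:
    return FLAGS.get(name, False)

if feature_enabled("new_checkout_flow"):
    print("render new checkout flow")
else:
    print("render legacy checkout flow")  # what production users actually saw
```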
G
Righty. So, within Defend, one of the things that was noted during 12.6 is that multiple folks are continuing to struggle with setting up Auto DevOps and making it work locally with GDK. The reason we posted this under efficiency is that this was really slated as more of an onboarding challenge within the group, and one of the things we're looking at is creating a more straightforward, more coherent guide for getting this combination set up, so we can follow it.
L
Yes. So we have some preparatory work coming for next time; I think that will do well, and it speeds things up, I think, but in the long run we would like to work out the best practice between us and the infrastructure team, to find that path.
There are also a few improvements that we have identified that we can do: maybe add more monitoring, to give us more visibility into this, and hopefully that will help us, and also other teams, including the infrastructure team, get more feedback about how things are going. If things go wrong, we can hopefully patch it quickly.
D
Sure. So, this is regarding the CI incident that happened during the holidays: it was caught by our tests, but it wasn't escalated in a timely fashion. We currently have a rapid action that aims to stabilize all the end-to-end tests that run on staging, and potentially have them gate every deploy going forward.
When I look at this, it's also clear that there are some scenarios that can't be tested by unit and integration tests, because I believe the CI was passing, and engineers trust our CI before we merge. In addition, we need to improve test planning: after looking through it, I believe that Quality and our SETs, our Software Engineers in Test, weren't involved earlier in the planning. An easy fix for this, since the change was cross-cutting, is to monitor staging results when you make a change like that; then we could easily have caught it.
The next one, tightly related to this: we also updated our triage priorities, so we will now triage staging first, because it's tightly coupled with the deployment. I believe before we were doing master, and also production as well, but right now the priority is to get staging results clear as soon as possible.
A
Thank you guys for that; thanks, Mek. Both of those are great things that we need to work on improving; obviously our CI incident is going to give us a hyper focus on making sure those get addressed more quickly. Cool, we're short on time, so I'm going to quickly go over at least the ones I caught. One was the new process changes Secure is thinking about, because it feels like it might be nice to share updates on that. The feature flag tracking, which Michelle's putting in place, has a tracking issue,
A
So
it's
great
to
see
the
deliverable
workflow
progress
which
will
pull
and
use
it.
Okay
are
whichever
items
Steven
gives
us
and
then
for
the
CI
Renner
incident.
This
feels
like
a
gimme
but
I
like
gimme
some
times
as
well
and
routes
to
the
fact
of
you
know
it's
an
RCA.
So
hopefully,
all
the
progress
is
completed
by
next
time.
Let's
just
double
check
back
on
the
retro.
If
you're,
okay,
with
that
Mac.