From YouTube: GitLab 10.5 Release Retrospective
Follow along in our doc: https://docs.google.com/document/d/1nEkM_7Dj4bT21GJy0Ut3By76FZqCfLBmFQNVThmW2TY/edit
A
Well, as a follow-up on the GitLab QA test suite, I'm happy to announce that we've merged the first iteration of it, which tests the basic login suite. There's still more to test, but that's the first thing in place, and we're looking to improve it over the coming weeks by adding SSL, adding LDAP, group sync, and all that. With that in place, we would have caught that issue we had in 10.4. Over to you, Fabio.
E
Also talking about staging: we improved a lot the way we test CI/CD and security-product-related features, and now we have shared runners working correctly on staging. We had a problem with the registry certificate, but it has been fixed. So thank you to everyone who worked to make it work, and testing on staging is now probably 100% complete, at least for our items.
E
Okay. So what went well this month? The first item is mine. The release process for our RCs has been improved a lot, and we have very good communication now, which helps us avoid losing time trying to find the correct process to do something. So, for example, if you test something, you can find the link in the issue much more easily, because staging was working as expected, and so was reverting to the Auto DevOps template. That was a major pain in the last releases, and it's handled for now.
E
It's not fully streamlined yet, but it improved a lot, and you can find an example in the merge request where the requests were quite simple to address; the release managers were clear on what they expected, and so it was done without major problems. Kamil, the next point is yours, yes?
F
Actually, we were able to verify everything on staging from the CI side, and that's the flagship item. The most important thing for me personally for the previous release was traces on object storage, and we were basically able to ensure that this was fully working in a production-like environment before it came to GitLab.com. So this is definitely something that helped us a lot in being more confident.
E
Yeah, so we had a few problems, upcoming problems and not regressions (you will see later why), that were spotted after the feature freeze. So I feel that we still have some different views between product and engineering on how to manage feature assurance and how to notify about possible problems there. But I put in another note to talk specifically about that; my understanding is that the priority should be avoiding shipping problems to users.
E
So it's not really a matter of whether it is a problem of specification, or of the implementation, or of the design. It's just that if I find something that is bringing confusion or a problem to users, then as a product manager I don't want it to be shipped as-is. That's why I created a few regression-labeled issues, but they are absolutely not for blaming someone's work; they are to discuss and solve problems earlier.
E
So I'm not really sure that we all share the idea that, after the feature freeze, the responsibility for a feature is shared between all the departments. Just to make clear: this is not blaming a specific implementation, just an early warning to avoid possible problems for users. And anyway, in the end everything went well, since we made the changes we needed for the very basic bad things, and after our discussion we just postponed everything that was not really needed, in order to avoid a lot of extra work after the feature freeze.
G
Thanks. So we had a few issues with object storage migrations. It was mentioned that object storage is only a migration blocker now for artifacts, LFS, and traces; I think traces and artifacts are enabled currently, and LFS is awaiting a fix before we enable that. Somebody please correct me if I'm wrong.
G
We had an issue where we inadvertently enabled, I think, LFS object storage on production, and then we had some data loss when we tried to migrate back from object storage to local storage. Kamil created a great postmortem issue for that. So, Kamil, I don't know if you want to take over from here and talk about what happened there, and then you've got the next point anyway, yes?
F
It was actually hard to go through this discovery process because so many things went wrong that we could probably create a retrospective out of this issue alone. But the root cause of all of these mistakes that we've had, the data loss, is basically a configuration issue: there was a lack of tests for the default values. It started out enabled, background upload of everything, and we didn't notice that on staging, because staging did have object storage for LFS enabled and production didn't.
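For context, the kind of setting involved looks roughly like this in omnibus GitLab's `/etc/gitlab/gitlab.rb`. The key names below are recalled from GitLab's object storage configuration of that era and are illustrative; they may not be the exact keys involved in this incident.

```ruby
# /etc/gitlab/gitlab.rb -- illustrative sketch only; setting names are
# recalled from GitLab's object storage docs of this era, not taken
# from the actual incident configuration.
gitlab_rails['lfs_object_store_enabled'] = true

# The dangerous part: a default that silently turns on background
# migration of existing local files into object storage, so enabling
# the feature also starts moving data without an explicit opt-in.
gitlab_rails['lfs_object_store_background_upload'] = true
```

The point of the incident is that a default like the second line was exercised on production but never on staging, so the migration path it triggers went untested.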
F
So essentially we enabled on production way more than we actually tested on staging, and this kind of led to the problem that we've seen: users complaining about being unable to download data out of LFS, because Google Cloud Storage was not returning a proper HTTP response, and we got our first clients complaining because of that. We then went into migrating data back into local storage. At first, we thought that the migration code was broken, because we had data loss, but it turned out that the migration code was not broken.
F
What was broken, as it came out of this migration, was that we were able to concurrently run two migrations in two different directions. Basically, we had the background upload, which was migrating to remote storage, and we had, sorry, another piece of code which was migrating into local storage, because there was no exclusive locking there. But what is even worse is that we didn't have versioning and backups in place to be able to recover the data.
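The failure mode described above, two migrations running in opposite directions over the same object, is the classic case for an exclusive per-object lock. A minimal sketch, not GitLab's actual code:

```ruby
# Minimal sketch (not GitLab's actual code): guard each object against
# two migrations running in opposite directions at the same time.
class FileStore
  LOCKS = Hash.new { |h, k| h[k] = Mutex.new }
  # NOTE: access to LOCKS itself is not synchronized here; a real
  # implementation would take a lease in the database or Redis instead.

  def self.migrate(object_id, direction)
    lock = LOCKS[object_id]
    # try_lock turns a conflicting concurrent migration into a no-op
    # instead of letting the two directions interleave and delete data.
    return :skipped unless lock.try_lock
    begin
      # ... copy the file to the target storage, verify, delete source ...
      direction == :to_remote ? :uploaded : :downloaded
    ensure
      lock.unlock
    end
  end
end

puts FileStore.migrate(42, :to_remote)  # prints "uploaded"
```

With a lock like this, the second migration direction observes `:skipped` rather than racing the first one; on top of that, bucket versioning would have made any wrongly deleted objects recoverable.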
F
So we effectively hit this small, tiny time window where, out of over 2,000 objects that we tried to migrate back, we failed to migrate 42. They were in object storage before, but because we didn't have versioning on the buckets, we basically removed them without being able to recover them. This is a very long story, a very long discovery phase of what went wrong, but a lot of things did. Basically, over to John or Alex: do you want to add anything, since you were also involved? Yeah.
H
The only thing I had there was that, with regard to the staging and production configuration difference, it would be nice to know whether this was a process problem, or whether it was just that no one thought to ask whether this was enabled on staging, or whether we thought it was enabled and it wasn't. But at least I do have an issue open for this in the new environments work: we're going to try to make it so the tooling enforces the promotion of configuration from staging into production.
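One simple shape for that kind of enforcement is a pipeline step that diffs the two environments' rendered configuration and fails when production carries keys or values that staging never exercised. A minimal sketch with a hypothetical helper and hypothetical setting names, not the actual tooling:

```ruby
# Sketch of a CI guard (hypothetical helper, not GitLab's actual
# tooling): report every key whose value differs between the staging
# config and the production config, i.e. config reaching production
# without having been exercised on staging.
def config_drift(staging, production)
  (staging.keys | production.keys).each_with_object({}) do |key, drift|
    next if staging[key] == production[key]
    drift[key] = { staging: staging[key], production: production[key] }
  end
end

staging    = { "lfs_object_store_enabled" => true, "background_upload" => false }
production = { "lfs_object_store_enabled" => true, "background_upload" => true }

drift = config_drift(staging, production)
# A pipeline job would fail when drift is non-empty, e.g.:
#   abort "untested config on production: #{drift.keys.join(', ')}" if drift.any?
p drift
```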
F
Then we actually did check on production whether there was a difference in the configuration, so we tested the happy path, like whether the data and artifacts are downloadable, but it turned out in the end that we did not test the crucial, main part of the feature. Shawn, I know that you brought up another point, about who is coordinating, yeah?
G
So part of this was just the issue I had when trying to fill in that retrospective you linked earlier, about the LFS data loss: it's kind of hard to fit it into a timeline as someone who wasn't working on it, because it wasn't clear to me where we did the work. So we have issues for LFS, uploads, and artifacts.
G
I can't find the current issue for artifacts (maybe I'm looking in the wrong place), but they don't seem to be updated very frequently, and I'm not sure they have all the information in them that is current. So I am involved in this and I find it hard to figure out what's going on; for somebody who's not involved, figuring out what's going on is going to be super hard, right?
G
So I think this is sort of a tricky thing, because it's spread across different Slack channels; issues that are specific bugs we need to fix in the code; issues in the infrastructure tracker where we need to coordinate things with the production team; things we discuss on calls; and then the Google Docs you've linked to. Next is yours.
F
Thank you so much. We had a more private conversation with John and Alex last week, on Friday (I mean, it was actually Tuesday last week), when we had a Google Doc with the current status. But to be fair, this should be put in the issue so everyone can contribute, and I definitely think that this is the missing part of what you are expecting.
F
So the current status is in the Google Doc. Also related to object storage: last Friday we discovered a live traces issue; it came up by accident. We actually have a fix merged, but it's also something that had some potential impact on customers. We didn't receive any reports of customers facing this problem, but it will require some corrective actions to actually bring back this data, because of the application bug.
F
Fortunately, this affected only a small amount of data. I mean, I'm not really aware of anyone being affected by that issue, but it's possible that some users are affected, in the case where we actually moved the job trace data away and didn't bring it back to the correct storage, because we were unable to update the data in the database.
D
So yeah, all of the actual problems that happened in the data loss, I should have put that into the notes. But it's just that we actually tried to merge something, a feature that was not meant to be used until further testing was done on the Geo test bed.
D
The plan was to actually merge it into master so we could get a nightly package out of it and then do the testing on the Geo test bed. But unfortunately, this merge inadvertently toggled a background-upload configuration. So in hindsight, maybe we should not merge to master just to test something on the Geo test bed; we should build packages off the branch, so this is more contained on the test bed.
G
Well: do the refactor first, then do the feature. Because we were feeling some time pressure, even though it's actually taken a really long time, we were trying to do them all together, which is obviously how you get issues like this; as was mentioned, we were forcing this to ship right after the refactor had barely landed. And, you know, again: ideally, with the benefit of perfect hindsight, if we could do this all over, we would do the refactor in an earlier release.
F
Okay, so we had a serious regression affecting our paid customers; I mentioned it at the beginning, on the QA side. This is actually one case where we did QA, a manual QA, before merging things, but basically we didn't test EE: we didn't test the feature on EEP, which added some additional constraints.
F
This was one of those hard cases where we prepared a fix quite quickly, but I wonder whether this is the typical case where we should probably have reacted better: just revert that feature and deploy the reverted version as soon as we figured it out. Because arriving at the fix for that bug took, if I remember correctly, four working days, so it is not an insignificant amount: it took us four working days to prepare a fix, make sure it was actually fully tested, and get it deployed on GitLab.com.
H
Yeah, I just noticed that the regression for the large SSH keys was missing, so I added it just now, and I'm also making the point here that we should add a test for it to QA; there's an issue open for it. It would be nice to try to get these changes in quicker, but you know, I think it's a fairly straightforward bug and it was fixed fairly quickly.
I
What was the change? In that config, we were using `to_json` to spit out the config, giving them JSON syntax rather than Ruby syntax, and that previously was working with how Gitaly's config had been built. But in 10.5 we ended up doing some cleanup that removed the ability for that JSON syntax to work, which we weren't even really aware was being used. But the fact that the JSON syntax stopped working was basically a regression.
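The difference in question looks roughly like this. It's an illustrative sketch with a hypothetical settings hash, not the actual Gitaly settings or omnibus code: a hash serialized with `to_json` is valid JSON but not valid `gitlab.rb` syntax, whereas `inspect` (or a pretty-printer like the awesome_print gem mentioned below) emits Ruby syntax that can be pasted back into the config.

```ruby
require "json"

# Hypothetical config snippet; not the actual Gitaly settings involved.
setting = { "storages" => [{ "name" => "default", "path" => "/git-data" }] }

# JSON syntax: {"storages":[{"name":"default","path":"/git-data"}]}
json_style = setting.to_json

# Ruby syntax, the form that is valid inside /etc/gitlab/gitlab.rb,
# e.g. gitaly['storages'] = [{ "name" => "default", ... }]
ruby_style = setting.inspect

puts json_style
puts ruby_style
```

Printing recommendations in the Ruby form is what makes them safe to copy back into `gitlab.rb` verbatim.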
I
So we fixed that right away in the first patch that went out, and then we have a plan moving forward: for our config-change recommendations and deprecation messages, we're going to start printing those in more of a Ruby syntax, using the awesome_print gem. So that was one major regression we had causing grief, and then there was also one as part of the Let's Encrypt feature that we added in the omnibus.
J
Awesome. In this release, my point is about the fact that all front-end deliverables had either a regression opened or warnings written, and they are not the kind of regressions we usually deal with. I've linked to the issues in the retrospective, but I think we have two main problems, and the first one is that in the specification phase, the vision that product has is not the same as what the engineering team understands.
J
So what ends up being implemented is not the thing that should be implemented, which causes the second problem, the communication problem: in every deliverable there's a regression being opened, because the regression label is being used both for things that were working before the merge request and were broken by that merge request, and also for the things that weren't specified in the issue and therefore weren't implemented, and because of that we are not all on the same page.
J
This is a source of frustration, and we also lose a lot of time in back and forth in each issue opened as a regression every release, deciding whether it's a regression or not, and why it is or isn't. We've been discussing this for a couple of months (I think the first time it was brought up was in 10.0), and we've discussed it in the retrospectives for all these releases, but we haven't been able to reach any action items on how to improve this situation.
J
So I think we need something else, and I've left some proposals, or questions rather, on how to improve. If we can fix the specification phase, how can we make the specification phase better, so that both product and engineering are on the same page and engineering is implementing the same vision? I'm not sure.
J
Maybe doing the planning step a little bit earlier in the process would help us avoid having deliverables still without final decisions in the last week of the release cycle. And probably we could create another label, other than regression, to tackle the things that weren't specified in the issue and because of that weren't implemented, instead of labeling them regressions. Fabio, you probably left a point; I'm not sure if you want to add anything to this? Oh yeah.
E
Thank you, Filipa. So I totally agree, yeah. This is not the first time it happened; informally, last time it was discussed and not resolved. This time I tried to figure out the root causes of this sort of problem. The first thing is that, as product, we consider regressions to be all the things that are product or UX regressions, but this is not shared with the process that engineers have: engineers consider regressions to be exactly about whether the issue is implemented according to the specifications or not. As I said before, I see it differently.
E
A regression, to me, is if we are introducing a problem, if we are shipping a problem to users. I feel that maybe we can create a new label; this could be a good option. I don't think that having the planning done early can help a lot: we already have plans done for a couple of releases ahead, but these problems are coming from specific cases, or from things that only get specified once the implementation is clear.
E
So probably we cannot reach the level of detail we need, even if we do it three or four months earlier. Also, having a different process may help. I'm still not sure that feature assurance is fully understood, and I put a point about that later, but I can anticipate it now: we talked about it in the product meeting, and it seems that the vision, the understanding, is quite different. So I was asked to find a process that can work for both, or at least to find a way to avoid shipping problems.
E
That is, however, we are not really committed to using one label or the other; we just want to avoid shipping problems. I'm talking with the engineering leads in order to find out what their understanding of the process is, and then I'll come out with a proposal, obviously shared with the engineers, so we can try to make feature assurance effective and fully shared between the groups, without creating this frustration that exists on both sides. One of the big problems is whether regression is a priority label; for product, it is not.
E
It is not; putting a regression label doesn't imply that you have to work on it immediately. I understood that in the engineering department this is considered a priority label, so it's a sort of push for something to be done immediately, and this can create some problems. So this is one of the points that can be cleared up. If you look at the link to the product agenda, you can find more details there; see the comments, and let me know if someone who is writing wants to jump in and talk.
E
Otherwise, I think we can jump into what we can improve. The first point is mine: the idea of merging smaller and earlier when possible can obviously help by spotting product problems or misalignment in features before the feature freeze. So asking for feature assurance, even if you are not ready to merge the feature, helps if it is working.
E
I know that it's quite hard, because reviewers are developing things too, so they will mostly review at the end of the cycle. But if you can split things into small iterations within the same cycle, and prepare even smaller pieces where possible, this helps product and UX a lot in validating them. Next point: Filipa.
J
So we need to improve the communication on CI/CD between the product and engineering teams, mostly. I was reading this point and the last thing I mentioned, exactly because I don't think the problem is what I would define as a regression; the problem is what is causing all these regressions. So if we could fix the specification phase to include everything that needs to be implemented, I think the second problem, with the regression label itself, would be fixed on its own. So: what is causing all of these?
I
So this is really the product of a conversation I had with BJ, so credit to him. His take was that the work we've been doing over the past releases related to feature assurance, meaning things that are at the level where they appear in the kickoff doc and then receive feature assurance, has been going pretty well. Obviously there are still things we can always improve, as Fabio mentioned, but that's really the reason why 10.4 was a relatively smooth release on the surface. 10.5 was rougher, but his take was:
I
that the work related to that specific type of feature carried over into 10.5 and in fact improved and got better. His take on why 10.5 was a little bit rougher was that it was a different type of issue: things that are smaller and touch existing features, so they are more likely to cause regressions, and normally don't rise to the level of that kickoff doc, which was used as the source material for the feature assurance work.
I
So what he was suggesting, and what I think there's going to be, is some kind of process by which things that don't reach that level of detail nevertheless get structured QA. They go into issues, or into a single issue with a checklist; we want to make sure that's happening at the appropriate time, but also to make it repeatable.
I
So if it needs to happen over the course of several release candidates, we've got essentially a script that we could repeat, or clone and repeat, to make that easier. The challenge there, and the reason why this doesn't just happen easily, is that there's a large volume of these issues; at least there was in 10.5.
So, you know, BJ's take was:
I
if we'd just done this the simple, naive way in 10.5, it would have been far too much to QA in the first release candidate. So we need to figure out a way to do the highest-priority things first, or spread it out over the release process, and that's an interesting challenge, something that I hope Marin (though hopefully you can delegate it to one of the next release managers) can help figure out in partnership with QA. And Shawn, you're up next.
G
Oh yeah, this is just an item I had from earlier: to create a clear, single source of truth for the object storage migrations we are performing on GitLab.com, what's blocking them, who's working on them, etc. I think Kamil actually volunteered to do this instead, so thanks, Kamil; that makes it easier for me. Okay, so I will assign it to you.
I
Okay, so I think that's the last improvement. Just looking at the list that we've generated so far: Fabio's first one is a good one, but I think it's something that's sort of a best practice that we need everybody to follow. We need managers holding their teams accountable to merge smaller and merge earlier, and we need peers to hold each other accountable in the code review process. It isn't necessarily a large action item that we can give to a single person here, but definitely I think that's a good one.
I
We should do that. Filipa, yours, I know, is about communication between the CI/CD product and engineering teams. I think that's correct, but we do need to think specifically about what we can try, process-wise, communication-wise, people-wise, to make that more actionable. For my third one, I'm biased, but I think we definitely need to do this, so I'm going to go ahead and bold it. I'll put it on Marin, but I hope that he delegates it to a release manager. And then Shawn and Kamil: I think we could definitely use this grouping of the object storage issues.
I
think
it's
been,
it's
been
pretty
regularly
falling
within
30
minutes.
I
think
this
one
was
an
aberration,
but
I
think
it
was
for
a
good
reason.
So
I'm
okay,
with
going
eight
minutes
over
and
just
trying
to
continue
to
force
it
into
30
minutes.
Maybe
we
could
do
the
next
ones,
just
remind
people
to
be
cognizant
of
the
of
the
time
and
and
make
sure
that
they're
being
efficient
and
if
we
go
over
make
it
a
deliberate
decision.