From YouTube: GitLab 10.3 Retrospective
Follow along in our doc: https://docs.google.com/document/d/1nEkM_7Dj4bT21GJy0Ut3By76FZqCfLBmFQNVThmW2TY/edit?usp=sharing
B: Thanks, Victor. Stan, maybe you can raise Ben, or figure out what the status of his item is, and cover that for us later as well. Well, this month I heard from multiple people that our release managers, Oswaldo and Thiago, did a really nice job under difficult circumstances, a difficult release. So thanks to both of you. As a reminder, we now have two Thiagos, so this was Thiago B.
B: In general, the release was difficult, but the production team did use a new rollback strategy where they roll back early and often and keep a closer eye on metrics, and that seems to have limited customer impact. So I think that's a small win there. Given the circumstances, Pingdom did measure four nines of availability. That's good, but I think there's some controversy about whether or not the monitoring that we have is really sufficient.
B: Obviously this is a complex problem, especially the difference between measuring GitLab.com versus CI; there's a lot of complexity there, but it is something that we want to make progress on. Then, switching gears to what went wrong this month: I happened to put a lot of items in here. That's because, as the release was going on, I kind of did a pre-retrospective, just to keep myself informed.
B
As
I
talk,
if
there
are
people
who
are
closer
to
these
issues,
that
want
to
correct
me
by
all
means
kind
of
jump
in
but
I'll
I'll
kind
of
lead
for
now.
The
first
thing
we
noticed
was
that
our
c1
and
our
C
were
under
playable.
Our
c3
was
deployed
at
9
a.m.
on
the
19th
I.
Think
a
lot
of
people
brought
up
the
point
that
we
need
to
to
ship
our
T's
more
often,
I.
Think
that's
true.
B: In this case, though, it would not have been the correct decision to ship RC1 and RC2, because they were that problematic. So yes, we should ship earlier, but no, we should not have shipped RC1 and RC2; that was a correct decision, because they were problematic. But let's say going forward that the 13th or the 14th of the month is ideal, maybe even earlier.
B: Secondly, there was a misconfiguration of virtual PgBouncer names. This is something where, I think, maybe Jason had it, I'm not sure, but it traces back to something that was committed back in August. This was a long-standing issue, and the load and the other issues that we had revealed it for the first time.
B: Next, we had a performance regression from Prometheus, and I think this case was a little bit different from what happened in the previous release. In this case there was a section of code that was merged that was known to be problematic. There was also a hotfix, or rather a patch, for it, and that patch was not merged; so the bad code went out rather than the bad code plus its patch. Sort of a release management issue in general there. And then next: from what I understand, the location of the build artifacts changed in the CI/CD code base that went out, and I believe the problem here was that it wasn't tested on staging before it went out.
B: We don't have a testing environment for this, but then we decided to ship anyway, so I think there were two failures there. And then my last item is just that the regularly scheduled retrospective didn't happen last week. It was probably unrealistic and we should have moved it, but it was one of these sort of zombie meetings that I've mentioned.
B: So next time, let's identify when either the meeting organizer isn't there to run the YouTube stream, or there isn't a critical mass of people to do it, and go ahead and just push it out a week in advance, rather than let it be a confusing experience for people. And then, Tim, you've got the next item: the multi-file editor.
D: Yeah. So the multi-file editor was restructured and refactored, and we added a new data state structure. What happened is that this moved it from inside the file repo onto a completely new route, and having it on a new route meant four to five merge requests going in at the same time, in one huge merge request. We simply had the problem that we were not able to get tests green, and as soon as we had tests green, we already had merge conflicts again; the classic problems.
D: If we ever have such a huge change again, we really need to find a way on our side to limit it down and have a plan in place beforehand. The whole thing is now merged, and that's the nice thing: from now on, we should be able to do all the things in very small, iterative merge requests, which we are already doing, and we have way fewer dependencies now, and that helps a lot. Kamil?
E: Even though we prepared things quite quickly, we didn't manage to merge the fixes to actually make this feature work into the stable release, due to all the other issues that we had and the other pressure on the release managers. That kind of ended up with the stable release, the one on the 22nd, not really having the feature working as intended, and it only started working after the first patch release. It was kind of a big disappointment that we actually had to do that.
B: Thanks, Kamil. So the first thing, I think, is just more discipline about what is in the release. This is something that the production team has some eyes on, the build team has some eyes on, and the release managers have some eyes on, but really we need more participation from the backend and frontend teams themselves. The situation I'm specifically thinking of is that Prometheus patch that fixed a known problem and just didn't get into the release.
B
I,
don't
know
that
any
anybody
else,
except
for
the
team
in
question,
is,
is
well
situated
to
really
guarantee
that
that
gets
in.
So
I
don't
want
to
pick
on
any
one
team.
It's
just
an
example,
but
I
think
the
ask
is
for
everybody
to
Shepherd
their
work
into
the
release
and
partner
with
the
release
managers
to
make
sure
that
that
happens.
B: The next one is: perhaps the build team rotates as release managers. We rotate release managers today, which is good, and you build up some tribal knowledge about how to do a release. But then, because our team is so large, we rotate those people off and that tribal knowledge is sort of lost. We don't want to put it on any one person, nor do we want anybody to be a full-time release manager, but I think the suggestion is to rotate it within one team.
B: That way, at least the tribal knowledge that we build up is located within one team. I think it's easier to share, easier to document, and not so tribal anymore if we do that. And Marin, if you're on the call, maybe you want to speak up, because I think we chatted about this, and I think it's maybe not quite as easy as this. Or maybe it is.
F: So my suggestion right now would be to have the build team on release management for a while: get one person from the build team to be dedicated over a period of, let's say, two or three months, and then get one person from any of the backend or frontend teams to help out in the same period. Basically, if we decide that someone from the build team is going to be there for two months, then one person from any of the backend or frontend teams should be there for two months as well.
F: That way we can slowly build up more discipline and more knowledge there, and you can then easily know, for two or three releases depending on what we decide, whom you should talk to about getting your change in. The reason I'm suggesting getting more people in here is because we are also underpowered: we also have a lot of things to work on, and if we are solely dedicated to this, we are going to miss all our goals, and you're going to be able to depend less on us for the things that you need.
B
I,
like
that
I
think
that's
reasonable
and
I'd
make
I.
Do
when
I
remind
people
that,
like
in
some
ways
were
just
you
know,
we're
looking
ahead
to
the
GCP
migration
project
and
and
having
continuous
delivery.
So
a
lot
of
the
things
we're
talking
about
doing
our
stopgap
measures
to
get
to
that
point.
In
which
case
you
know,
the
collaboration
I
talked
about
between
teams
and
production
and
release
managers
that
would
be
native
in
a
continuous
delivery
world
that'll
have
to
happen.
B
Your
code
watch
ship,
so
this
isn't
somebody's
good
practice
for
that
and
in
some
ways
the
burden
that
we're
talking
about
getting
on
the
build
team
is
somewhat
temporary,
where
ideal
world
after
doing
continues
delivery
for
a
while
is
that
it's
almost
all
serviced
by
the
teams
themself
as
kind
of
like
an
end
state.
So
okay,
so
this
is
maybe
a
good
candidate
for
something
that
we
we
take
out
of.
B: Maybe we do one per release, or one per release candidate if we're going to do the more frequent release candidates; that might help there. I think I already covered this next one: Prometheus actively managing their patches getting into releases. This next one is important: I think we need to invest in the staging environment for CI/CD. I think this is good work going forward. As the production team performs the GCP migration project, they're working on setting up pristine environments through automation.
B
We
want
to
make
sure
that
now-
and
also
in
that
future
state
that
those
environments
are
sophisticated
enough
for
the
CIC
detune
to
be
able
to
stress
their
with
with
code
changes.
So
I'd
like
to
see
some
coordination
there
I
think
this
is
maybe
a
good,
a
good
candidate
as
well
next
I
think
in
the
case
of
CI
CD,
this
particular
CI
CD
change.
This
may
have
been
something
where
we
wanted
to
actually
hold
it
out
of
the
release
and
wait
for
better
testing.
So
what
I'd
like
to
do
in
the
future
is
you
know?
B
Bj
is
working
on
every
release
he's
seeing
these
things
come
through
if
he
finds
one
that
he
says
you
know
like.
We
just
haven't
tested
this
enough.
That
carries
enough
risk
where,
if
we're
putting
them
in
production,
we're
basically
doing
it
with
a
series
of
unknowns.
He
can
explain
it
to
me
and
we'll
talk
about
it.
We'll
talk
about
with
the
team
and
and
and
I
may
choose
to
hold
that
back
from
the
release,
rather
than
carry
the
risk
for
downtime,
so
it'll
be
a
conversation,
but
just
know
that
we're
willing
to
take
that
step.
A: We deployed stuff on staging, but we didn't have the same monitoring on it. I tried to go to the dashboards in Kibana for staging, and it was empty. So that's one thing we can clearly improve, and it would have helped catch a lot of the migration issues that we saw in the RCs. The next one is basically along the same lines as what Lewis mentioned, about deploying these migrations.
A: They take a long time, but we have no visibility into which ones took longest, and no way to flag the ones that took a long time and say: review these later. So we have issues open for that, and I think James already added a merge request to show the migration times during deployment, so that we can just cut and paste the output. And then, if there's one that takes more than five minutes, we can look at it and say: hey, that's clearly wrong.
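(To make that concrete, here is a minimal sketch of the kind of timing report being asked for. This is Python rather than GitLab's actual Rails tooling, and the function names and the five-minute threshold are illustrative, not the real merge request.)

```python
import time

# Threshold from the discussion: anything over five minutes gets flagged.
SLOW_MIGRATION_THRESHOLD_SECS = 5 * 60

def run_timed(migrations):
    """Run each migration callable, record its duration, and flag slow ones."""
    report = []
    for migration in migrations:
        started = time.monotonic()
        migration()  # apply the schema or data change
        report.append((migration.__name__, time.monotonic() - started))

    # Emit a copy-pasteable summary, slowest first, as proposed in the meeting.
    for name, elapsed in sorted(report, key=lambda r: r[1], reverse=True):
        flag = "REVIEW LATER" if elapsed > SLOW_MIGRATION_THRESHOLD_SECS else "ok"
        print(f"{name}: {elapsed:.1f}s [{flag}]")
```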
A: QA tests: I think it's clear that we need a QA test for the GitHub importer; people use it quite often to move over to GitLab. And as a backstop, you know, if QA fails, we rely on Sentry to catch any issues.
A: So the ask of everybody here is: when the release candidate goes out, just look at Sentry every day, every morning, to see if there's anything that sticks out. There were a number of things that did stick out this release, and I can't be the only one doing it; other people have to be doing it as well, because we could catch a lot of problems a lot faster if we saw these. This is, again, a backstop: if everything else fails, Sentry serves as our last resort, and we should use it. CJ?
G: Yeah, this was actually handled already by Eric in the "what went wrong" section, I think, but yeah. Basically, it was planned originally for the second Christmas Day, which is a national holiday, I guess, in many countries, at least historically Christian ones. So that was my point: maybe we should check the availability calendar. But anyway, Eric already handled this. And then, Oswaldo... I'm not sure if someone else wanted to take this point.
G: If no one speaks up, I'll read it out: "I know that this would be a big process change, but wouldn't deploying more often make our RCs and final releases more reliable? The current release process leads to a big batch of changes being deployed in RC1, which usually leads to critical bugs, which generate hotfixes, patches, rollbacks, and delays. Is there an existing issue discussing the possibility of daily and weekly deploys?" I think Eric just talked about this.
C: The problem is that if those jobs are, for whatever reason, retried, or the jobs get paused for a while, they can all still end up being executed in the same time period, which, depending on the work they do, can cause high database load like we saw earlier today. One potential idea, and I have no idea how difficult this would be to implement, is that when we pick up a background job, regardless of the scheduling interval, after we execute it we ensure that that particular process doesn't pick up any of these jobs for, let's say, another ten minutes. You could take it a step further and say that no process will, but then we could only do one job per ten minutes, which I think is a bit too extreme; or maybe you want to scope it to the migration class. Either way, when we pick up these jobs, we should somehow make sure that they're spread out.
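(A minimal sketch of the per-process cooldown described here, assuming a hypothetical job-queue API with pop(), push_back(), and a throttled flag; the real Sidekiq setup would look different. Only the cooldown bookkeeping is the point.)

```python
import time

COOLDOWN_SECS = 10 * 60  # "doesn't pick up any of these jobs for another ten minutes"

class ThrottledWorker:
    """One worker process that backs off after running a throttled job.

    queue.pop(), queue.push_back(), job.throttled, and job.run() are
    hypothetical stand-ins for the real job-queue API.
    """

    def __init__(self, queue):
        self.queue = queue
        self.cooldown_until = 0.0

    def poll_once(self):
        job = self.queue.pop()
        if job is None:
            return
        if job.throttled and time.monotonic() < self.cooldown_until:
            # Still cooling down: hand the job back for another process or later.
            self.queue.push_back(job)
            return
        job.run()
        if job.throttled:
            # Regardless of the scheduling interval, this process now waits.
            self.cooldown_until = time.monotonic() + COOLDOWN_SECS
```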
E: How about thinking about something different that takes into account system usage? One of the problems that we had with the background migrations that week was replication lag. So maybe we say that we only run a migration as long as we can keep the replication lag in about the same range, and things like the CPU usage of the database system in the same state. Maybe that's the way we should model it, because right now we just schedule at intervals, and that kind of doesn't care about load: if we had, like, twenty million jobs to schedule, the interval would have to be very short and we'd see the problem. But also, with higher load running through the day, there are times where we can actually, effectively, run way more data, or more migrations, because the system usage is much lower.
C: Something I've been thinking about that might help with these migrations... the tricky things are, well, for one, it would be PostgreSQL only, but I'm perfectly fine focusing on that (in my own words: screw MySQL), and the interval would still be sufficient for that. The other tricky thing is the replication lag: I'm not sure if you can measure it by just using the primary. I think you can get it if you use replication slots, but how accurate that will be, I don't know.
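(For reference, measuring from the primary via replication slots could look roughly like this. This is a sketch under assumptions: psycopg2, the PostgreSQL 9.6-era function names, and bytes behind per slot as a proxy for lag, which matches the accuracy caveat just raised.)

```python
import psycopg2

# pg_xlog_location_diff / pg_current_xlog_location are the 9.6-era names;
# PostgreSQL 10 renamed them to pg_wal_lsn_diff / pg_current_wal_lsn.
LAG_QUERY = """
SELECT slot_name,
       pg_xlog_location_diff(pg_current_xlog_location(), restart_lsn)
FROM pg_replication_slots;
"""

def replication_lag_bytes(dsn):
    """Return {slot_name: bytes_behind}, measured entirely on the primary."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(LAG_QUERY)
            return dict(cur.fetchall())
```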
C: The lag is directly correlated to the work that you're doing, so you can do it; it definitely sounds interesting, but it will be tricky to implement. You could have cases where you start your work and say: okay, we have no load, so you do a whole bunch of work, and then suddenly the statistics get updated, or whatever, and you see: oh, we do have load, all right, let's stop the work. At that point you might have already caused unnecessary high load for a couple of minutes. So it's a tricky thing.
C
I,
don't
think.
There's
a
simple
solution,
but
I
think
eventually
of
all
you
probably
end
up
with,
is
something
where
we
force
them
into
whole.
Look
out
less
like
just
the
minimum
of
say
two
units
per
job
and
then
we
could
probably
build
on
top
of
that
and
say:
oh,
if
we're
doing
background
migrations,
but
our
replication
alack
is
above
X
seconds.
Then
we
wait
two
stat
amount
of
time,
for
example,
for
the
settle
something
like
that.
But
it's
definitely
something
you
should
consider.
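(Putting those pieces together, a sketch of the policy being floated: an unconditional minimum spacing per job, plus holding further batches while replicas lag. All names and thresholds here are hypothetical, not GitLab's implementation.)

```python
import time

MAX_LAG_BYTES = 64 * 1024 * 1024  # hold off while any replica is this far behind
MIN_INTERVAL_SECS = 120           # unconditional floor between jobs
SETTLE_SECS = 30                  # "wait a set amount of time for it to settle"

def schedule_batches(batches, run_batch, get_max_lag):
    """Run background-migration batches, pacing them by replica lag.

    get_max_lag is a hook returning the worst replica lag in bytes, e.g.
    max(replication_lag_bytes(dsn).values()) from the earlier sketch.
    """
    for batch in batches:
        time.sleep(MIN_INTERVAL_SECS)         # minimum spacing, always applied
        while get_max_lag() > MAX_LAG_BYTES:  # replicas behind: let them settle
            time.sleep(SETTLE_SECS)
        run_batch(batch)
```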
B: Okay, thanks, Yorick. I think there's a rathole we could go down here, so I want to make sure we use the rest of the time for the agenda, but I think you're on the right track. Now, I wanted to make sure we have time for two things: one is to circle back to the action item from the previous retrospective, and the other is to nominate champions for the action items out of this retrospective.
B: I think the delay has been that the production team is rolling the Azure fleet due to a bug in their hypervisor, so that has delayed some of the things they were doing, but they'll jump on it as soon as that's done; they have a hard deadline of the 10th to make sure that's completed. Thank you, though. Yeah.
B: Okay, so then I highlighted three good candidates to be major action items from this retrospective. Yorick, I think yours is a good one.
B
It's
sort
of
multifaceted
and
you
already
picked
up
an
action
item
in
the
infrastructure
meeting
this
morning
to
kind
of
like
think
about
what
our
ideal
migration
tool
is
so
I
think
I
think
I
would
not
nominate
this
one
when
you're
already
working
on
into
we
don't
quite
know
what
the
answer
is
like
I
think
you
have
to
scope
that
out
and
then
we
need
to
look
at
the
effort
and
maybe
do
a
couple
of
iterations
on
it.
So
I
think
you're.
Are
you
already
working
on
that
one?
B
But
let's,
let's
not
highlight
this
one
now
cuz,
it's
not
definite
and
then
it
that
remains
to.
When
is
the
process
one
and
what
is
a
technical
one,
so
I
think
we've
done
this
champion
process
a
couple
times
now,
and
people
have
done
a
good
job
in
deliverance.
I
think
we're
graduating
to
the
point
where
we
can
take
on
two
rather
than
one,
and
so
why
don't
we
do
both
of
remaining
ones
so
Marin?
If
you
could
basically
take
on
this
process
change
around
I'm
having
your
team
focus
on
being
the
release
managers
schedule.
B
You
know,
members
from
other
teams
to
cooperate
that
you
cooperate
with
them.
You
know
documenting
the
tribal
knowledge.
That's
been
building
up,
taking
SIDS
comments
here
into
account
regarding
the
CDE
merge
and,
basically
you
know
kind
of
retooling.
Our
process
here
I
think
that's
a
great
one
and
then
Camille.
B: Yeah, I guess the alternative would be to spend this release designing a process and roll it out for the next one, but I feel like we know what we need to do, and I'd rather do the first iteration now, even if we're already a little bit underway, and get better at it for the second one, rather than waiting 30-ish days. I think this is a high priority. Okay.
B: He covered that already? Okay, yeah. So the update from him was that they've basically got it ready; they're waiting on the production team to get it activated. The production team is currently rolling our Azure fleet due to the hypervisor bug, so they'll pick it up when they're done, but I'll make sure that connection is made. All right, thanks, everybody; have a good day.