From YouTube: 14.1 Retrospective (Public Stream)
A
The game plan for today is that we are going to walk through previous retrospective improvements (we'll spend about five minutes on that), walk through two discussion topics that we have queued up for today, document and track the improvements that we want to track for the next retrospective, and then wrap up. Before we get into previous retrospective improvements, I want to just remind everyone: 14.1 was the release we released on July 22nd, just a couple of weeks ago.
A
Some of the highlights from that release that we included in our release post were the Helm chart registry, the CI/CD tunnel for the Kubernetes Agent, merge request approvals, or sorry, code coverage approvals in the merge request, as well as escalation policies, plus almost 50 other improvements across the product. So really great work by the whole R&D department to deliver another wonderful release, and we're here to do a retrospective on how that went.
A
We'll start with previous retrospective items. The first one was Christopher, who I didn't see on the call, who was tracking an improvement around maintainer availability. From reviewing last month's retrospective, I think the question was about our ability to have data and visualization about maintainer bandwidth and availability. Does anyone have an update on that topic?
B
I don't have any thorough update, but it looks like there's been some progress on that. The visualization isn't available yet, though, so maybe next month we could check in again on that.
A
Great, thanks Sam. I will queue it up for continued tracking next month. And then the next one was Donald; I think this was actually a carryover from the previous retrospective about getting a merge request merged to review P1s and S1s, and I remember the discussion was partially about how we just haven't had enough time to see new P1s or S1s come in. Does anyone have an update for Donald? Oh, Donald Johnson.
C
Hi, yeah, that's correct, your overview. I was looking through the 14.1 issues and, unfortunately, or maybe fortunately, there were no P1s within the Plan stage, so nothing to really review as to whether the FR helped at all. So we should probably keep this for one more release, and then, if we don't have one, I think we can probably just go ahead and create the MR at the global level, which I'll do for the 14.2 retro.
A
Okay, makes sense. Yeah, I think not having P1s or S1s is a great outcome, and we shouldn't necessarily keep tracking this until we have a proof point. So I like that plan: we'll track it for next release and then, if we don't have one, we'll move it to a global proposal. Tanya, you have the next one, about licensing timelines.
D
Yeah, that's correct. Unfortunately, after the last retro I was out sick for three of the last four weeks, so I haven't been able to make progress here. I'll have an update for next time, so we should track this again.
A
Okay, thanks Tanya. And then the last one was Shawn Hoyle, who is, I think, a technical account manager for one of our customers, who was pointing out that it was difficult to find information about some breaking changes with Pages that we released in 14.1. I didn't get this update specifically from the Release group, which owns the Pages category, but I did notice that, in our praise for 14.1, Nicole, the engineering manager for Release, highlighted that the team was doing a great job communicating with customers and getting feedback from them about that process.
A
All right, no other carryover improvement tasks from the last retrospective, so let's move on to proposed discussion topics. We didn't have any proposals or votes for discussion topics, so I chose these two; they were things that I was interested in learning more about. The first one was a comment from Nick on the Geo team, who was highlighting that they're planning on doing ad hoc retros for, like, P1 and S1 issues or incidents, and/or large epics that were recently closed.
A
Dan, I see you typing. I'm happy to take notes while you verbalize.
E
Thanks Kenny. Yeah, the Ops department started doing this: we had an OKR for last quarter to do RCAs, root cause analyses, and retrospectives on incidents, contributing to those. I don't know if you want to expand on that a bit more, Sam. That was for the whole department, so I wanted to mention it.
B
Yeah, I can expand on that a bit more. Like Dan said, there's a company-wide, kind of infrastructure-driven process for doing RCAs, and that's specific to incidents. So I think this question's a little broader than that. One thing we've been doing in Ops is getting development teams more heavily involved in that process.
B
So when there's a connection back to a development team, we have that team review those RCAs, or complete them in the cases where they are not complete, which is pretty typical. We're still scoring that and getting some feedback; I've talked to everybody who's done that, but the idea is to get more of a feedback loop around what's happening.
B
I guess there was another thing that came up today that I thought was relevant to this: there was an S1 incident, about a 10-minute degradation, earlier today, it could have been during the European day, and it tied to an issue that the Verify team has been investigating for a while, which is related to database transactions and savepoints. We're actually planning to do a retro on that. This isn't a specific incident.
B
This is more of an effort to investigate and resolve this issue, and one of the things that we're hitting here, and I've seen it more than once, is that when you have a problem, with incidents happening because of that problem, it's not always obvious.
B
We have this group, which also has a lot of other priorities, investigating this issue, which is looking more and more like kind of a global database issue, but we don't really have the information yet to say that with certainty. And until we really fully understand it... so the idea was to have a retro that was focused on answering that question, like: how do we, how do we...?
A
Yeah, thanks Sam. I appreciated your call-out that this issue is kind of broader in scope than just S1 issues, and I would imagine that doing a retrospective on something like an S1 incident is a little bit more investigatory than doing a retrospective about an epic, which might be a little bit more about process and how the team members felt that the process could be improved, or what went well.
E
I'm not sure if it's applicable here either, but I think one of the things that the Package team does is asynchronous retrospectives on missed deliverables and issues. So when we missed something we committed to, we'd do an asynchronous retro right in the actual issue, so that customers and interested parties could all see it. And in scenarios where we've had bugs come up, we tend to follow the same process on that team.
F
I usually forget to do it when the time comes, and so part of this was around trying to maybe create some clearer guidelines and automation: maybe something that runs on a schedule that can detect when issues matching the appropriate criteria are closed and then remind us to do these, or maybe, kind of like what the GitLab bot does, post a comment pinging the right people.
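A scheduled reminder along those lines could be sketched roughly as follows. This is a minimal illustration, not the team's actual tooling: the label names, project ID, and token handling are assumptions, though the GitLab REST endpoint used (`GET /projects/:id/issues`) is real, and a follow-up `POST /projects/:id/issues/:iid/notes` could post the reminder comment.

```python
"""Sketch of a scheduled retro-reminder bot (hypothetical configuration)."""
import json
import urllib.parse
import urllib.request

# Assumed criteria: which closed issues warrant an ad hoc retro.
RETRO_LABELS = {"severity::1", "priority::1"}

def needs_retro(issue):
    """True if a closed issue matches the (assumed) retro criteria."""
    return issue.get("state") == "closed" and bool(
        RETRO_LABELS & set(issue.get("labels", []))
    )

def build_reminder(issue):
    """Comment body pinging assignees, GitLab-bot style."""
    mentions = " ".join(f"@{a['username']}" for a in issue.get("assignees", []))
    return (
        f"{mentions} this issue matched our retro criteria and was closed. "
        f"Please start an async retrospective thread here before the next milestone."
    )

def recently_closed_issues(base_url, project_id, token, since_iso):
    """Query the GitLab REST API for issues closed/updated since a timestamp."""
    query = urllib.parse.urlencode(
        {"state": "closed", "updated_after": since_iso, "per_page": 100}
    )
    req = urllib.request.Request(
        f"{base_url}/api/v4/projects/{project_id}/issues?{query}",
        headers={"PRIVATE-TOKEN": token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Run on a pipeline schedule, this would filter the recently closed issues through `needs_retro` and post `build_reminder` output as a comment on each match.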
E
I think the intention there was to provide that feedback to people, which requires one of the team members, usually the person working on the particular issue; it's a manual process for them to provide the feedback on what happened, where we ran into problems, and what led to us not delivering.
E
So
I
hadn't
actually
tried
to
automate
that
because
it's
touch
points
and
the
milestone,
and
even
when
something's
delivered,
it's
not
necessarily
focused
on
the
milestone,
but
as
we're
reviewing
our
deliverables
and
what
we're
going
to
miss
and
what
we've
managed
to
achieve.
That's
sort
of
part
of
that
process,
where
you
know
usually
it's
the
chord
planning,
but
often
it's
just
the
em
and
the
pm
they're
actually
talking
through
what
we
missed.
E
And
why
and
then
just
sort
of
having
the
team
be
reminded
that
these
are
things
that
we
want
to
do,
and
we
should.
We
should
provide
that
feedback
to.
Let
people
know
why
there
was
a
miss
deliverable
or
some
other
bug
that
came
up
as
part
of
this,
so
I
hadn't
really
looked
at
automating,
there's
not
enough
of
them.
E
I
feel
like
to
make
that
worth
while
early
on
and
it
seems
to
be
working
reasonably
well,
so
I
can't
really
offer
any
feedback
on
automating,
although
maybe
that
would
make
sense
broadly
across
the
org.
If
we
felt
like
mr
async
missed
the
liberal,
retrospectives
or
bug
retrospectives
would
be
valuable.
B
Yeah, I added a comment here saying: typically we do both. We have RCAs and retros. RCAs are usually reserved for outages, so something like a P1 or S1 incident happens and you want to figure out why it happened. And then retrospectives are more on, I'd say, a cadence of milestones or epics: if you close a large project, then you do a retrospective about what went well in the project.
B
What
can
be
improved
et
cetera
the
both
of
these
actually
have
action
items
that
come
out
of
it
and
improvements,
but
I'd
say
that
the
rca's
format
really
is
targeted
at
digging
deeper
into
whatever
that
root
cause
was
and
part
of
the
rca
is
asking
the
five
whys.
So
you
just
keep
keep
asking
why
until
you
actually
find
that
root
cause,
so
I
I
think
both
are
great
tools.
I
think
we
use
them
for
slightly
different
use.
F
Thanks for differentiating that. I think you're right that, even though in the original comment they were sort of lumped together, a retrospective and an RCA are two different tools and are probably better suited to different things: one suited to maybe an outage or an S1, and the other to completing a feature epic.
A
Cool, great comment, Nick, and great discussion. Let's move to the second item, which was a comment from Scott in Verify:Testing, who highlighted that their MR rate was going down, and one of the things they realized was that it might be because they had a number of technical investigations going on, more than normal.
B
I think this is definitely something that I've seen in my teams. Whenever we have to do a technical spike, that spike takes a fair amount of research time, and that's time that isn't spent contributing code into our code base; therefore MR rates go down.
B
I
think
one
way
to
sort
of
maintain
mr
rates
is
also
to
just
start
documenting
a
lot
of
your
findings
and
so
whether
that
be
in
a
handbook
where
you're
sort
of
documenting
what
that
technical,
spec,
which
you're
defining
might
be
so
I'd,
say
that
that's
probably
one
way
which
you
can
still
show
that
your
velocity
is
being
maintained.
It
might
be
a
little
bit
controversial
and
probably
discuss
that
more,
but
I
think
you're
still
able
to
maintain
your
mr
rates
and
still
able
to
show
that
you're
making.
G
Yeah, so we are very transparent, and if you talk about MRs on bugs, the data is in the handbook, so I just want to make sure everyone is aware. But to add to Scott's context, you can see the dashboard of the Verify:Testing group here as well.
B
I think it probably goes back to the product side of balancing. You want to do investigation issues so that you can basically define or lay out whatever technical architecture is going to be implemented two, three, four milestones out, so basically you're laying the groundwork for future work. But at the same time, I think you need to show velocity in your current milestone, so I think it's a balance.
F
Yeah, I was just going to say that the Geo team experienced this when we were investigating supporting Patroni on a Geo secondary. There were some code MRs, but a lot of it resulted in just documentation changes and sort of verifying that things worked.
F
So
while
we
felt
like
we
were
being
productive,
it
didn't
really
result
in
a
bunch
of
mrs,
but
it
did
result
in
still
a
pretty
big
feature
announcement
for
something
that
was
requested
by
a
lot
of
our
large
customers
that
are
on
larger
environments.
F
So
I
we
did
try
to
limit
other
similar
types
of
investigation
issues
while
we
were
doing
this,
but
I
also
do
wonder
if
there's
for
these
types
of
for
for
this
type
of
work,
if
there's
like
another
way
to
measure
the
velocity
or
impact
besides,
just
just
the
mrs
that
are
produced
from
the.
H
Work,
I
think
it's
worth
pointing
out
that
the
xp
tradition
traditionally,
where
velocity
and
volatility
came
from,
you
measure
points
delivered.
You
know
not,
mrs
you
know
business
value
delivered,
and
so
it's
interesting
that
we
do
measure,
mrs,
which
are
a
measure
of
the
code
and
that's
not
necessarily
directly
correlated
to
the
value
delivered.
Like
you
just
said,
investigation
documentation
that
can
all
represent
user
facing
value
being
delivered.
E
Yeah, and my comment here that I added to the document was just wondering whether, as we're thinking about these sorts of fixes, we're talking about being vastly under the MR rate target or just one or two percentage points under. If it's the latter, I'm not sure that we need to fix anything necessarily, because I wonder about the impact on product development timelines; we've got product goals that we're trying to achieve.
E
I
think
one
of
the
ways
that
we've
tried
to
manage
this
is
limit
technical
investigations
to
like
a
one
point
that
you
can
be
easily
put
into
a
milestone
and
when
we
identify
something
that's
going
to
require
a
technical
investigation,
we
do
move
that
out
into
a
later
milestone
and
then
provide
the
team
the
space
to
actually
deliver
on
that
so
sort
of
managing
those
technical
investigations
in
terms
of
their
scope.
How
much
effort
goes
into
them
and
then
making
sure
we're
organizing
the
dependencies?
A
Mac, do you want to verbalize your comment? Thanks.
B
This may, I mean, already exist in a lot of these cases, but I think it's about having clear criteria for what you're trying to investigate, and ideally maybe there are ways to iterate on that and potentially even deliver: you know, investigate, deliver, investigate, deliver. But one of the challenges with investigation is that it can be open-ended, right? So it takes something to recognize "that's enough investigation, now let's switch into a different mode"; it's easy for it to drag on.
A
Yeah, and I want to go back to Jerome's point earlier: I do think our product organization needs to make sure we're providing that scoping, like how large, how deep, what's the end goal that we're trying to achieve in this investigation. Sometimes, if we enter a milestone without that, it can end up being much more open-ended than intended.
A
Great, that is a good transition to improvements for next release. We captured the ones that we carried over; I'll add the one from myself. Oh, Tanya is just capturing that for me, thanks Tanya. Any other items we should capture to track for next release?
A
All right, if not, then wrapping up the 14.1 group retrospective: great discussion about handling retrospectives, whether those are S1 root cause analyses or epic retrospectives, and then making sure that we have strong scope on our technical investigations that might get scheduled into a milestone. And again, lots of great praise and other areas of improvement that are happening as a result of the group retrospective. So thank you all for your participation, and I look forward to the 14.2 retrospective in a month.