From YouTube: 14.8 Ops Section R&D Retrospective Discussion
A
If you want to see the recording, please go to the recording. The summary for this one: what went well, what went wrong, and what can we improve? In this discussion we are going to go through some topics that we had open from our last retrospective, areas of improvement that each team was proposing, and then we are going to go through one of the discussion topics.
A
So to start, we have Michelle on the experiment on MR label hygiene, and the update is: we do have some dashboards that enable us to measure the MRs with a type label, and the Package team is running an experiment with a bot that is reminding engineers to label their issues. I'm expecting to share the numbers, and the change after the bot, in the next retrospective. And then we have Verify Pipeline: we were proposing, in the last retrospective, to have design discussions in separate issues instead of in the implementation ones.
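The label-hygiene measurement described above could be sketched as follows. This is a minimal, hypothetical illustration: the MR data and the `type::` label prefix are assumptions, not the team's actual dashboard query.

```python
# Hypothetical sketch of the label-hygiene check: given merge requests and
# their labels, compute what fraction carry a "type::" label. The data and
# the label prefix are made up for illustration.

TYPE_PREFIX = "type::"

def type_label_coverage(merge_requests):
    """Return the fraction of MRs that carry at least one type:: label."""
    if not merge_requests:
        return 0.0
    labeled = sum(
        1 for mr in merge_requests
        if any(label.startswith(TYPE_PREFIX) for label in mr["labels"])
    )
    return labeled / len(merge_requests)

mrs = [
    {"iid": 1, "labels": ["type::feature", "frontend"]},
    {"iid": 2, "labels": ["backend"]},          # missing a type label
    {"iid": 3, "labels": ["type::maintenance"]},
]
print(f"type label coverage: {type_label_coverage(mrs):.0%}")  # 67%
```

A reminder bot like the one mentioned would presumably flag the MRs that fail this check, so the coverage number can be compared before and after the experiment.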
B
I can verbalize it, and then, Sam, feel free to add any details that I miss, which is likely. So recently the team started breaking out design issues and discussions just in general, and separating out those issues between design, backend and frontend.
B
One of the examples I placed in there is one of our epics that we've been working on in Pipeline Authoring, which illustrates how the issues are being organized and separated out from a design, front-end and back-end perspective. And then we've also adopted the implementation table, which we'll talk about in a minute; it organizes where we're at in that process, just to make sure that nothing falls through the cracks. And then, as a follow-up task:
B
We also have updated our handbook and the verbiage around how we're splitting up those issues, so that everyone can understand our processes as well.
A
C
The escalating of manually created incidents is ready to ship in 14.9. The project took longer than expected due to fluctuations in team capacity, as well as more complexity than originally planned. There was so much collaboration on the team to complete this work, including backend engineers working on a significant portion of the front-end code. Feedback from the team about successes in working together: pairing in code reviews has been excellent.
D
A
Thank you. So the idea of this section is that all these actions were coming from our previous retrospective, on what can we improve. The idea of having this discussion is to see how other teams are learning, and to try to learn from those and replicate them in other places, if it's useful. So thank you, everyone, for sharing. On the discussion topics: I added a topic, and we didn't have any others, so let's go with that one. I was wondering if we are measuring in any way our Diversity, Inclusion and Belonging value.
A
We normally have our what went well, what went wrong, how can we improve on these retrospectives, by values, and I'm not seeing anything on that value in any of the topics, so I was curious how other teams are seeing it. Do you think this is something that we should start taking a look at?
C
I think a good thing to do here is to take a look at the competencies in the handbook.
C
We can look at contributions. I know that, for example, some team members have contributed to the college course, I think that's the name of it, I apologize, that Dava was leading. People have contributed to that, but there's not necessarily...
C
I don't know if that jumps to the front of people's minds when they're taking a look at it. And I think this particular milestone, and the retrospectives we've had for this milestone, have been a little bit affected by, you know, world events that maybe we don't want to get into too much, but I know people are relatively affected by this.
A
I think you're bringing up an interesting topic, because if I read all the things that we have on our retrospectives, we normally focus on what we are delivering for the product and how the team is working. And I'm pretty sure all the teams have examples on this part, about supporting people in different time zones, about collaborating, like all the examples that you're mentioning. We are just not voicing them, and I think it will be very important that we start thinking about them when we are filling in the doc.
E
Yeah, that's a great point, Michelle. You know, one of the intentions of structuring this by values is so that we, you know, highlight great examples of us living our values, and that's the way we further our values in the first place. So, to Dan's point about reviewing the values and the sub-values: probably, if you were to read those right before you were to write this document, you would think, oh yeah, this was a great example of that value being lived out at GitLab by that specific team member.
E
But sometimes, I think, in particular the DIB value's sub-components are a little bit harder to recognize in the moment, when you're thinking about, like, what did we deliver and how did the team work together. So, I don't know how the doc is structured right now, but maybe just adding that reminder, like: remember to go through these sub-values when thinking about whether to put your highlights down.
A
I think a quick improvement can be to put a link to the handbook values on the document, for when we are collecting the summary we just created before. I don't know how other managers feel, but changing the individual retrospectives to do them by values can be an option, and I think product can also weigh in heavily there.
E
F
G
No, no, no worries. I just wanted to link this issue here that I saw recently, where a lot of great ideas were thrown out for ways to not only implement this quarterly goal of ours, but also some measurable actions, such as: maybe make X number of meetings in a rotation for different time zones, use a UTC plus offset when discussing times, etc.
C
Yeah, I think part of what we can be looking at here is, as a team, we're all trying to make sure we're actually adding thoughts and feedback into these retrospectives, because it's a good way to learn. And so I would encourage everyone on the team, whatever role you have in the team, but particularly managers: when you're actually looking at your retro, if you're not seeing feedback, then maybe that's an opportunity to ask folks why they're not adding feedback there, and see if there's some general thought process about certain aspects, whether that's Diversity, Inclusion and Belonging or not. And, you know, it's important; this value is pretty valuable.
C
Of course we have it as a value, but in general you want to be looking, I think, at the retrospectives, and trying to make sure that you're getting feedback. And if you're not, maybe that's feedback in itself, to go chat with people about it, perhaps in one-on-ones or as a team meeting or whatever makes sense.
A
B
So I may be opening a can of worms here, but I thought I'd maybe just ask the question to the group. Recently I've been putting a lot of focus towards looking at the overall performance, the team workload, how much we can take on, with a lot of discussion, at least within Pipeline Authoring, around iteration cadences, and looking at how we can be, you know, executing on things and getting things to done in a fairly efficient manner.
B
I started thinking about it as it relates to the say/do ratios within the teams, and I'm curious, from other teams' perspectives, how they've been managing that, or how they've been able to assess what is the optimal amount of effort that each team can take on while still being able to move forward. So I'm curious, as an open question, how other teams are doing it too.
A
B
Yeah, so from our perspective, on a per-milestone basis, we look at what we're committing to, and then treat an issue as complete when it has that workflow production label attached to it. And so when I look at milestones where we've had higher completions, or higher say/do ratios, we've looked at what the number breakdown has been on a per team member basis, and said, okay, that milestone went really well.
B
This milestone, maybe it wasn't as high, and we're looking at and comparing what those numbers look like. Again, it's certainly not very elaborate or detailed; it's just looking to see if there are trends we can see over time that say, okay, as we do more of this, this is, you know, an ideal number. Obviously there are a lot of variables in play.
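The per-milestone say/do check described above can be sketched as follows. This is a minimal illustration under assumptions: the completion label name ("workflow::production") follows the label mentioned in the discussion, and the issue data is hypothetical.

```python
# Minimal sketch of a per-milestone say/do ratio: of the issues committed
# to a milestone, how many reached the done state? An issue counts as done
# once it carries the completion label (name and data are assumptions).

DONE_LABEL = "workflow::production"

def say_do_ratio(committed_issues):
    """Ratio of committed issues that reached the done state."""
    if not committed_issues:
        return 0.0
    done = sum(1 for issue in committed_issues if DONE_LABEL in issue["labels"])
    return done / len(committed_issues)

milestone = [
    {"iid": 10, "labels": ["workflow::production"]},
    {"iid": 11, "labels": ["workflow::in review"]},  # committed, not finished
    {"iid": 12, "labels": ["workflow::production"]},
    {"iid": 13, "labels": ["workflow::production"]},
]
print(f"say/do: {say_do_ratio(milestone):.0%}")  # 75%
```

Comparing this number across milestones, alongside the per-team-member breakdown the speaker mentions, is what surfaces the trends rather than any single milestone's percentage.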
B
C
Sorry, cuteness break there. Thank you, yeah. So I was gonna ask: Mark, you mentioned say/do a couple of times. You know, say/do is a really good measure, I think, for looking at what we're committing to and how much of that we're delivering. But, you know, in theory you could look at a say/do and go: well, we got 100% because we only committed to doing two or three things, right?
C
So if we have really low expectations around what we're delivering, and also, you know, how we measure that, you can have a really awesome say/do. Which is why something for a team like MR rate is really valuable, because, from my view anyway...
C
I want to think about, like, competing metrics that are sort of in tension a little bit. And so I would say, you know, one of the things is taking a look, like you said, at what we're completing over time across milestones, and then sort of seeing, when we produced a lot, what did that look like. One of the other things that I think might be helpful there as well is the average weight of the issues that are being completed, and sort of looking at that and going, okay.
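The companion check mentioned above, looking at average issue weight alongside the raw completion count, could be sketched like this. It is a hypothetical illustration (issue data and field names are assumptions), showing how a perfect say/do from a tiny or lightweight commitment would stand out.

```python
# Sketch of a completion summary per milestone: the number of completed
# issues and their average weight. Read together with say/do, this guards
# against a "100% of almost nothing" milestone looking healthy.

def completion_summary(completed_issues):
    """Return (count, average weight) for a milestone's completed issues."""
    if not completed_issues:
        return 0, 0.0
    weights = [issue.get("weight", 0) for issue in completed_issues]
    return len(weights), sum(weights) / len(weights)

completed = [
    {"iid": 20, "weight": 1},
    {"iid": 21, "weight": 3},
    {"iid": 22, "weight": 5},
]
count, avg_weight = completion_summary(completed)
print(f"completed {count} issues, average weight {avg_weight:.1f}")
```

As the discussion goes on to say, these numbers are a starting point for conversation with the team, not a verdict on their own.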
C
At least, you know, there's a heap of factors there to be thinking about as contributions. But I would say that one of the key aspects is that the numbers are helpful, and can be really critical in terms of looking at that aspect of performance, but also talking with team members and asking them: hey, how did this one go? What went well in this milestone? Which is why we do these retrospectives, of course. So you can be like: hey, we had so many things going on.
C
It was really hard to get through and know what to prioritize. Or: we did this awesome job in security, but I know we had this other area that we didn't do because of that, and we needed to adjust the priorities accordingly. And so I think getting that holistic picture, and identifying which metrics there have been...
C
You know, valuable, and determining how that went, but still making sure we're chatting with the team. To come back to your original question, I would say looking at say/do, like I said, is really great; I'm a big fan of say/do. I think it's a sort of multivariate type of metric that we have, and so distributing that around and sort of going: well, how many items did we actually complete out of this list? Is this a good number of things? Did we make significant progress? What is progress?
C
What is progress towards? And so those are kind of my thoughts on it. I'll stop talking.
E
Yeah, I would agree with Dan, and I think one thing that Dan mentioned that's really important is that, whatever performance indicator we use for development or product or the quality team, if you have an assigned SET or designer, it's really important that the whole quad shares an understanding of what those are, and is part of the retrospective, to ensure that they're helping each function achieve their performance indicators.
E
In the case of MR rate, that's very much focused on iteration; maybe it's slightly different for say/do, but just making sure that your whole quad has an understanding of each function's performance indicators really helps you ensure that the whole of the group is involved in moving them, rather than just specific functions.
A
So you mentioned that you normally take a look at the say/do ratios, and I'm assuming after that you review the weight, as was mentioned, like the weight of that milestone that went very well. So probably having those metrics on the individual retrospective issues that we have can kick off the conversation, to have the team talking about what really worked well to deliver so much or so little.
B
Yeah, to Dan's point too: looking at it on a per-milestone basis, and having the context around the numbers, and not just being so consumed with looking at the percentages out of 100. Because there were several milestones, at least within Pipeline Authoring, where we had different team members that were distributed to other initiatives, and so the numbers may look high.
B
You know, and say: oh, it's 100%, but the amount that we completed was maybe two or three issues, just because we didn't have as many team members able to work on specific Pipeline Authoring work. So I think keeping that in mind, and knowing that context, is helpful as you look at the overall picture of data trends.
A
So, not wanting to put an action on anyone, but if any manager here wants to improve their retrospectives, think about it, and if someone wants to put an experiment on it, then we can talk again at the next retrospective about how we are measuring this one. We can put it as an experiment or a process that we want to measure from the next retrospective.
A
Great, let's go to what are the improvements that we want to track for our next release, and we're going to start with Pipeline Authoring.
B
Great. So recently the team started to trial using a kanban-style approach, with being able to, what I call, grab and go from our board, from our Ready for Development column. The biggest thing that we're trying to achieve, or trying to understand better, with these improvements is answering the question:
B
What is next? So obviously we have planning issues, and we have focus areas that we should be working on, but those evolve and change over the course of a milestone. So looking at how we can maybe unblock and allow, you know, team members to be able to grab stuff that they know is at the top of the list, and be able to move forward.
B
That's, you know, something that we're looking at, to see if it's helping. Initial feedback so far seems to be positive, but again, we're still evaluating this, and so it may be something for us to track into the next release as well. And I've got the next one too, Michelle, so I'll just keep going, as it relates to streamlining.
B
So one of the things that we're trying to do as a team is to have better organization and uniformity around our issue creation. We have ad hoc requests that come up, and everyone has different ways of organizing their issues when they create them, so we're exploring having a template in place that will allow us to capture those details within that template. We also have that implementation table that we talked about, which is optional; it's just something...
B
Yeah, so this template, or, if you look at the template itself: it was more about the content that was going into it. Sometimes it said problem, sometimes it said proposal, sometimes it said solution, sometimes it said, you know, an implementation table. So there wasn't a lot of consistency, which sometimes made it a challenge to see if we were capturing all the required data. So having this template provided, in addition to some of the labeling that helped, and that was generic labeling, I think, in the template, gives some structure, so that the team feels they, you know, have some consistency in what to expect with these ad hoc requests.
C
Cool, thanks, Mark. Yeah, I just added a note, based on the pre-conversation before we got recording here, but I think it probably makes sense to have a co-host in this meeting so that we can livestream it. There's a chat going on about this; I'm not sure if this is super valuable to also be livestreamed.