Next, I want to go over the work being done in Q3. The first KR is around contribution efficiency. We managed to increase contributors per month, though we didn't hit the target that we wanted. Unfortunately, we were also not able to increase the number of coaches. This was a stretch goal that we were not able to attain last quarter. We will renew these efforts sometime in Q1.
Lastly, we were able to complete our commitments on measuring the efficiency that JiHu is contributing upstream to us.

The next area from last quarter is around improving product quality. Overall, great progress on increasing test coverage and solidifying our deployment pipelines with better validation gates. An area where we were not able to realize net progress is around reducing time to first failure.
Moving on to highlights of what we're doing in Q4. Along the same lines of improving quality and test coverage, we are continuing to improve the deployment gates and the new staging environments, boosting deployment confidence. We're committed to moving Project Horse forward and delivering GET features to support that. And lastly, we are doing three experiments to further reduce time to first failure.

Next up is around engineering efficiency.
We are looking at increasing adoption and awareness of reference architectures and GET, and we've seen that we need better materials: testing information, testing cadence, and cost to run. We're also doing experiments around increasing community contributions. As you saw before, we were able to attain some gains, but not as drastic as we wanted. This quarter, we're shifting our approach to doing more experiments, analyzing the results, and then optimizing on those experiments that work really well. We're continuing with supporting JiHu and their efficiency, and also more accuracy in our SaaS cost attributions.
Moving forward to a summary of our key performance indicators. As I mentioned before, contributors per month increased, but they're still under target on raw numbers. We have shifted focus from raw numbers to the foundation of community contributors and contributions. We expect a renewed focus here again around Q1.

Master pipeline stability is doing okay; it has dipped slightly under 90 percent. It's recovering, and the team is putting attention to it.
Highlighting review apps: review apps' stability has dipped, but that was because we added more tests. Now, with the increase in stability and the additional test coverage from test runs against review apps, we are getting an overall net gain: new tests, and also recovery of the stability.
Next up is bugs: S1 and S2 OBA, short for open bug age. I'm really proud of the team that we were able to lower S1 OBA even lower, with a new target of 100 days. Right now, I believe it's under 100 days. With S2s, we have been focusing on closing more recent S2s, hence the age hasn't been lowered, but I believe we are in better control of the number of backlog S2s. One highlight here, which is not a KPI, is around MR pipeline duration.
We are looking at another hackathon in Q4. In Q3, we did two outreaches: one hackathon and one Hacktoberfest participation. We have a slight recovery in open community MR age. From my last update, the trajectory was going up; the age was increasing. This quarter, we have realized some recovery at the tail end, and I look forward to more downward momentum in this area.
Next up are two new performance indicators. I'm really proud of the team for shipping these new measurements. These are also job family performance indicators for the Software Engineer in Test job family. First up is measuring test efficiency: we're measuring the overall duration of our end-to-end test suite, and we want to bring that time down. The shorter the time, the more efficient our tests, and this is on a downward trajectory; really proud of the team. The target is 16 minutes, and right now it's around 80; before that, it was much higher.
As you can see, the second one, on stability, is age of quarantined tests. Some slight context here: if a test is not stable, we quarantine the test, and if the test is really not stable, it's going to be in the quarantined state for a really long time. So we measure the age: how long the test stays in the quarantined state.
This is where we need to put our focus, because in fixing the tests, we also need to fix bugs or make changes in the product that will aid in improving the stability of these tests. Right now, it's above 200 days, and the target is at 150 days. The fix here is also to advocate for more fixes in our product to adhere to these tests.
Doing a deeper dive into support for GitLab CN, or JiHu: we have two new measurements. This also takes the philosophy of measuring open things and applies it towards merge requests being upstreamed by JiHu. So we're measuring the age of open MRs that JiHu submits to us, and also the review time of those open MRs over the last two releases.
There was a huge spike, and it took a really long time for the team to go in and review upstream merge requests from JiHu, and that was because these were not the small iterations that our team is used to. So we corrected that: we added automation, we added efficiency and tooling, and now you can see that the age of open JiHu MRs has dropped significantly. We aim to keep the momentum downwards as well.
Lastly, in the overview, we are collaborating with the wider company. First is around new staging environments and improving the current staging environment. We're doing well here: the new environment is set up. It hasn't been opened up to the broader team for access yet, but we're making great progress. We're also adding more tests and more load on the existing staging environments to make sure that the gates are really solid.
Second is a working group born out of seeing fragmentation across test data, sample data, and demo data. This working group aims to unify, or at least attempt, a more common data model where test data is really similar to, or can learn from, demo data, and the same with demo data, because the things that we sell to customers should be the things that we test. This working group is a new one, and it's somewhere I look forward to seeing efficiency gains.
Moving on to the sub-teams and sub-departments in the department. First off is Engineering Analytics. I want to highlight the work done last quarter: we helped move PIs in other departments that needed help, and we also improved the load time of our performance indicators in the handbook; we have made good progress here. This quarter, though, we're focused on ensuring the SaaS cost modeling is correct, and also on filling in the security performance indicators, which have broad visibility; those are the ones that we aim to help mature the most. The rest is efficiencies and other team member updates.

On Engineering Productivity: great progress.
Last quarter, again, time to first failure was the one that was passed through to the department, and this is where the team put a lot of effort in, though it didn't realize a net decrease. We are positive on the experiments we plan to do this quarter on measurements within the engineering productivity area. Review apps deployment, as I mentioned before, took a dip because we added more tests. It's recovering now, so with the increased recovery, it's an overall net gain, which is great.
Master pipeline stability is bubbling up around 90 percent, high 80s. I believe this is no cause for concern; the team is focused on optimizing and correcting the stability here. On time to first failure, I feel like we are really close to going under 15 minutes, and with the shift towards doing experiments, I believe we can do more creative things, trying different approaches. I'm optimistic about seeing this go lower, and kudos to the perseverance of the team on this one.
Next up, a deeper look into the Quality Engineering sub-department. Last quarter, overall, great progress on increasing test coverage and more solid gates in our tests, ensuring we ship with quality. This quarter, we are going to continue to boost deployment confidence and support the business with reference architectures and GET; the rest is around training and other team member updates.
The age is not recovering, though the number of S2 backlog items hasn't been increasing. With the renewed focus on older S2s, I expect to see some improvements here, and I'm optimistic about that.

So that's the overall highlight for the department update. I hope you come with questions, and I look forward to our discussions. Thank you.