A: So, first off, I would like to welcome our new team members joining us. Welcome! I do want to highlight two new job families being opened up. One is in the Open Source Outreach team; this is going to help us execute further on our open core and contributor strength. The other one is a new job family and an opening for a manager in the Engineering Analytics team. That's coming up in Q4.
A: The contributors are in a sustained state. It hasn't been ticking up into the high numbers that we are looking for, but we do have hackathons and more events coming up that will hopefully push the momentum. We are also looking at how to decrease the review time for community contributions.
A: We are going to implement a code sync indicator with JiHu. We have scoped this down, and I believe we can deliver an MVC by the end of this quarter. And lastly, we still need to make some headway on coaches: we have improved the materials; however, we need to enlist more people and volunteers.
A: We do have great updates in the GET project and, of course, in engineering allocations and in implementing the new staging environment. We're making good progress on the team OKRs and the new job family performance indicators, we are hiring to plan, and we are making good progress on our 360 reviews.
A: Next up, a brief update on our KPIs. MR Rate has slowed down; we haven't put much attention here, as we have been prioritizing contributors and increasing those. The open community MR age, or OCMA for short, needs to be sustained further. We are stabilizing the trend, but this still needs attention.
A: Master pipeline and review app stability is in a great state. Master pipeline success is now above 90% and has been sustained for five months. Great job to the Engineering Productivity team and the counterparts that are helping out. Review app is improving as well.
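For context on how a KPI like this can be measured, here is a minimal sketch that estimates the master pipeline success rate from the GitLab pipelines API. It is an illustration only, not the team's actual dashboard code; the project ID and token are placeholders.

```python
# Illustrative sketch: estimate master pipeline success rate via the GitLab API.
import requests

GITLAB = "https://gitlab.com/api/v4"
PROJECT_ID = 278964  # assumed: the gitlab-org/gitlab project
TOKEN = "glpat-..."  # placeholder personal access token with read_api scope

resp = requests.get(
    f"{GITLAB}/projects/{PROJECT_ID}/pipelines",
    params={"ref": "master", "per_page": 100},
    headers={"PRIVATE-TOKEN": TOKEN},
)
resp.raise_for_status()
# Only count pipelines that actually finished, one way or the other.
finished = [p for p in resp.json() if p["status"] in ("success", "failed")]
if finished:
    rate = 100.0 * sum(p["status"] == "success" for p in finished) / len(finished)
    print(f"success rate over last {len(finished)} finished master pipelines: {rate:.1f}%")
```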
A: We did an experiment earlier on which reduced the stability: we're adding more tests and more load, intentionally raising the bar, stretching it out, and dogfooding more, and now we are bringing the success rate up again, which is great. Time to failure slightly increased; I believe it's in the 15-minute range, still around the target. We'll do an update on that later in the EP (Engineering Productivity) section.
A
Space
bugs
s1,
oba,
open
bug
h,
is
under
target
a
great
job
quality
engineering
self
department
for
driving
this
we're
going
to
lower
the
target
further
from
150
days
down
to
100
days
to
keep
our
momentum
and
ambition
as
tools,
however,
hasn't
been
getting
that
much
progress,
it's
increasing
and
but
the
backlog
has
been
decreased.
It
points
to
the
teams
are
closing
out
more
recent
s
tools,
which
is
great
next
up.
A
We
will
be
continuing
to
focus
on
age
and
closing
all
the
amount
retention
is
in
a
good
state
average
of
open
positions
improve,
but
we
are
keeping
attention
here
and,
lastly,
the
the
the
set
scaring
ratio.
It
is
improving,
though,
I'm
keeping
this
in
a
red
red
state
until
we
are
getting
closer
to
the
target
we
are.
We
are,
though,
making
hiring
progress
in
those
runs.
A: Now, moving on to contributor efficiency. We're working closely with our Community Relations team. We completed a hackathon, which helped sustain the recovery of unique contributors per month; it's around 78. If you pay attention to the screenshot here, you can see we're seeing a recovery, though not yet at the high end of the range we want to reach. We have also completed a stale MR report: this is a report that lists community contribution MRs that haven't been merged for a long time.
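A report like that can be approximated with the merge requests API. Here is a minimal sketch, assuming the standard "Community contribution" label and an arbitrary 90-day staleness threshold (both assumptions, as are the project ID and token):

```python
# Hedged sketch of a stale-MR report: open community MRs idle for N days.
import requests
from datetime import datetime, timedelta, timezone

GITLAB = "https://gitlab.com/api/v4"
PROJECT_ID = 278964               # assumed: gitlab-org/gitlab
TOKEN = "glpat-..."               # placeholder token
STALE_AFTER = timedelta(days=90)  # assumed staleness threshold

resp = requests.get(
    f"{GITLAB}/projects/{PROJECT_ID}/merge_requests",
    params={
        "state": "opened",
        "labels": "Community contribution",  # assumed label name
        "order_by": "updated_at",
        "sort": "asc",
        "per_page": 100,
    },
    headers={"PRIVATE-TOKEN": TOKEN},
)
resp.raise_for_status()
now = datetime.now(timezone.utc)
for mr in resp.json():
    updated = datetime.fromisoformat(mr["updated_at"].replace("Z", "+00:00"))
    if now - updated > STALE_AFTER:
        print(f'{mr["web_url"]}  idle {(now - updated).days}d  "{mr["title"]}"')
```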
A: This also aligns with measuring open age and count and getting all of those closed out or merged. We have a lot of improvements lined up: the Community Relations team is going to launch a contribution experience survey, and we have an OKR, aligned together at the company level, to increase contributors to 120 per month. There's a Hacktoberfest going on now. Thank you, Nuritzi, Kyle, and everyone who is helping out.
A: We are going to focus on decreasing OCMA, the open community MR age. Other updates: there's a proposal to use "co-creators" versus "code contributors", and we can talk about it in an upcoming group conversation. The context here is that we are accepting contributions from open and closed source.
A: We also want to accept contributions in areas besides code: tests, documentation, design, reviews, and the whole broad spectrum of community work. And then, lastly, we are going to look deeper into each product group to see how we are merging MRs in an efficient manner. We have the measurement at the top level; now we're going to do a deeper dive into where the hot spots are and whether each product group needs help or not, and then take it further from there.
A: Next up is an update on the Ally OKR tool rollout. I believe right now engineering and product are both in Ally, and the integration to the handbook, the handbook embedding, has been completed. I also want to point out that there was a LinkedIn kudos from the VP of Sales at Ally, showcasing the collaboration between both of our companies.
A: They did showcase our feature request to have OKRs embedded, and we are showcasing how we are handbook-first and transparent to the world about our ambitions. On the Engineering Analytics team front, great progress on OKRs overall; it's also in the handbook.
A: Next is around metrics for engineering. I want to give a brief update on infradev and the measurements there. Our team has delivered the dashboard, the infradev dashboard.
A: We now have a top-level view across all the infradev issues in all product groups, and I want to call out that the team is making good progress here. We're starting to see the overall number of issues decrease, but we still need to pay attention to clearing out the overdue backlog. Next up, we will add this momentum to S1s and S2s; once they're in a good state, we'll move to S3s and S4s, and we'll take the learnings here into implementing the engineering allocations.
A: That's coming up next, which I want to talk about now. So, engineering allocations: you can see this as the next evolution of infradev. Infradev is issues requested by Infrastructure and delivered by the Development teams, and sometimes also by a product group together as a whole, outside of Development as well.
A: So, to scale up our process into the future, we need a nomenclature, an accounting method that can measure requests, and also the DRIs or engineering departments that need to deliver on those requests. These are called consumers and producers: the consumers are the departments making the requests, and the producers are the ones fulfilling those requests.
A
Another
diagram
here,
our
first
iteration,
is
to
implement
this
communication
charging
mechanism
for
the
five
department
departments
listed
here.
What's
been
completed,
labels
are
created,
trash
automation
has
already
been
completed.
We
have
identified
the
metrics
very
close
to
what
you
saw
in
infradev,
so
open,
open
age
and
count
of
the
issues.
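As a rough illustration of the open age and count measurement, the sketch below computes both for label-scoped issues via the GitLab issues API. The group ID, label, and token are placeholders, and pagination is omitted for brevity.

```python
# Illustrative sketch: open count and mean open age for issues with one label.
import requests
from datetime import datetime, timezone

GITLAB = "https://gitlab.com/api/v4"
GROUP_ID = 9970      # assumed: the gitlab-org group
LABEL = "infradev"   # or a consumer label, once engineering allocations lands
TOKEN = "glpat-..."  # placeholder token

resp = requests.get(
    f"{GITLAB}/groups/{GROUP_ID}/issues",
    params={"state": "opened", "labels": LABEL, "per_page": 100},
    headers={"PRIVATE-TOKEN": TOKEN},
)
resp.raise_for_status()
now = datetime.now(timezone.utc)
ages = [
    (now - datetime.fromisoformat(i["created_at"].replace("Z", "+00:00"))).days
    for i in resp.json()
]
if ages:
    print(f"{LABEL}: {len(ages)} open issues, mean open age {sum(ages) / len(ages):.0f} days")
```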
A
We're
close
to
getting
an
mvc
done
at
around
80.
We
need
some
more
polish.
Next
up
is
to
deliver
the
views
for
all
the
engineering
department,
vp.
So
the
view
from
the
producer
and
then
we'll
gather
feedback
from
there
on
to
engineering
productivity
okrs,
enabling
the
launch
of
gitlab
cn.
There's
one
kr
there
to
implement
the
the
code.
A: We're going to scale this back down to an MVC where we're measuring the open age of JiHu contributions, and that's a better, smaller iteration that we think we can deliver faster. And just as an overview, as I said before: the master pipeline success rate is in a good state, review app has recovered, and time to failure is at 15 minutes, still around our target, and we will hold the line here. I believe next up will be Tanya, to give an update on the Quality Engineering sub-department.
B: Hi there, I'm Tanya. I'm the Director of Quality Engineering here at GitLab, and I'm going to present some of the Quality Engineering sub-department updates for this group conversation. Let me share my screen. Great. The first slide here is about our OKRs. We're doing pretty well overall; the bottom OKR, where we're a little bit behind, is driving team member satisfaction through career growth conversations. Based on the 360 schedule, we're having those conversations now, and I expect in the next week or two we'll see that percentage pop up quite a bit and start to look better.
B: Next up, our KPI updates. I'm excited to say that our S1 OBA reached its current intended target; OBA stands for open bug age. So we're planning to reduce the target to stay ambitious: it was 150 days, and we're going to push it down to 100 days.
B: We only have eight open S1 bugs, which is very exciting, but it means that any bugs that are older have a larger than intended effect on the average. We'll be looking to work with the product groups to get those closed.
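To make that skew concrete, with entirely made-up ages: when only eight bugs are open, a single old outlier can more than double the mean.

```python
# Made-up numbers, illustration only: one old S1 bug dominates the average age.
ages = [12, 20, 25, 30, 41, 55, 60, 480]  # days open; 480 is the single outlier
print(sum(ages) / len(ages))              # ~90 days with the outlier
print(sum(ages[:-1]) / len(ages[:-1]))    # ~35 days without it
```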
B: Next up, S2 OBA: improvement here is a focus area for us for Q3 that will likely continue into Q4, due to the volume of bugs in this bucket. In the next iteration, there are two additional PIs that we're looking to build out.
B: The first is the average end-to-end test suite duration, and the second is the average age of quarantined end-to-end tests. We're hoping to have these done in Q3, so we should be able to report on them in our next group conversation. Next up, our Deploy with Confidence epic. This has been a huge focus for us this quarter, in line with the reliability focus that all of engineering has.
B: We have completed dogfooding the Test Cases feature and updating our test reporting to use it. We've also completed the first iteration of tracking pass rate metrics for the end-to-end test suite. We have several things in progress this quarter. The first is building a mixed deployment test suite. Next is making the review QA smoke job mandatory; that should help prevent regressions from being found on staging, and later on in the deploy process. We're also working to reduce the number of quarantined tests.
B: We're working to close out quality corrective actions, and we're building more robust feature flag practices into our end-to-end tests. Then, for next quarter, we're going to improve our test setup, test data, execution, and cleanup, all with a reliability and efficiency focus. We'll also work to proactively promote tests to reliable, automatically quarantine tests, and selectively execute tests. This should help us block deployments more ambitiously.
B
When
things
go
wrong,
it
should
help
us
unblock
as
well
more
quickly
when
we
can
automatically
quarantine
and
it
should
help
us
have
shorter
test
runs
if
we
can
selectively
execute
the
correct
tests
based
on
the
changes,
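Here is a minimal sketch of what selective execution can look like, assuming a pre-built mapping from source files to the end-to-end specs that cover them. The mapping file, paths, and fallback are assumptions, not GitLab's actual implementation.

```python
# Hedged sketch: run only the end-to-end specs mapped to the changed files.
import json
import subprocess
import sys

# Assumed format: {"app/models/user.rb": ["qa/specs/features/user_spec.rb", ...]}
with open("test_mapping.json") as f:
    mapping = json.load(f)

# Files changed on this branch relative to the default branch.
changed = subprocess.run(
    ["git", "diff", "--name-only", "origin/master...HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

specs = sorted({spec for path in changed for spec in mapping.get(path, [])})
if not specs:
    print("no mapped specs for this change; falling back to the smoke suite")
    sys.exit(0)
subprocess.run(["bundle", "exec", "rspec", *specs], check=True)
```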
As for our achievements so far: we released version 1.2 of the GitLab Environment Toolkit, also called GET, and we also released version 2.9 of the GitLab Performance Tool, GPT.
B: We have expanded the cloud-native hybrid instructions to cover all the reference architecture levels for all user counts. We have generally improved the documentation about our reference architecture work and performance testing: what we do and what cadence we have. We built the first iteration of mixed-deployment node tests, and they've been added to the staging pipelines. We've also improved our SET onboarding for the Secure stage, and we expect that, in the future, to expand to other stages as well.
B: On to our work in progress. The two main things we're working on are increasing test coverage and increasing test efficiency. There are many areas where we're working to increase the coverage: first and foremost is Gitaly, and we're also working on cloud licensing, user registration and billing, search, pipeline execution, Geo, and mobile browser testing as well. As far as efficiency goes, we're working to trigger the full suite of end-to-end tests on feature flag toggles. This should help us catch any kind of issue before it reaches production.
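One way such a trigger can be wired up is via the GitLab pipeline trigger API. This is a hedged sketch, assuming a trigger token and variable names the downstream QA pipeline would need to understand; none of these are confirmed details of the actual setup.

```python
# Sketch: kick off the full end-to-end suite when a feature flag is toggled.
import requests

GITLAB = "https://gitlab.com/api/v4"
QA_PROJECT_ID = 12345        # hypothetical QA project
TRIGGER_TOKEN = "glptt-..."  # placeholder pipeline trigger token

def on_feature_flag_toggle(flag_name: str) -> None:
    resp = requests.post(
        f"{GITLAB}/projects/{QA_PROJECT_ID}/trigger/pipeline",
        data={
            "token": TRIGGER_TOKEN,
            "ref": "master",
            "variables[QA_SUITE]": "full",         # assumed variable name
            "variables[TOGGLED_FLAG]": flag_name,  # assumed variable name
        },
    )
    resp.raise_for_status()
    print("triggered pipeline:", resp.json()["web_url"])

# Example call: on_feature_flag_toggle("new_navigation")
```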
B: We are also working on a more efficient approach to the test record and recall functionality in search, and we're refactoring the package tests to separate them by endpoint level.
B: Additionally, there's quite a bit more going on in GET, aligned with product and supporting project work. We have links here to the version 1.3 and 2.0 releases, but there's much more to come. In addition to that, we are also working to implement a Geo failover and recovery strategy within GET. And then, lastly, we're supporting moving CI and storage purchasing to gitlab.com, in alignment with Product Fulfillment.