From YouTube: Quality Group Conversation Highlights 2022-05-23
Description
Quality department Group Conversation Highlights for 2022-05-23
A
Hi everybody, I'm Tanya Pazitny, and I'm here with my fellow leaders from the Quality department to give you an intro to our group conversation for May 23rd, 2022. Let me share my screen.
A
Great. We would like to welcome the new hires we've had over the last few months. We have Nick joining us as Director of Contributor Success; David and two other upcoming folks for Engineering Productivity; and Edgar, Carlo, and two new folks for Quality Engineering. We also have several open positions right now across the department, so if you know of anybody who might be a good fit, please refer them or encourage them to apply.
A
For our Q1 OKRs, at a high level we have an ARR focus of delivering cost insights and reference architectures and increasing contributions; a product focus of improving product quality and increasing customer centricity; and a people focus of growing the team and increasing geographic diversity. Clicking down into that, for key business initiatives we're supporting growing monthly open source contributors from 116 to 150.
A
For the Project Horse rollout, we're adding two new features to support that, and we're establishing testing coverage for cloud licensing. For our product focus OKR, we are shipping Next Prioritization charts and MR type hygiene automation, we're expanding required code owner approval for security purposes, and we're shifting left through expanded MR testing and new testing types. Lastly, for our people focus OKR, we have a pretty ambitious hiring plan that we're hoping to meet, we're going to conduct mid-year check-ins with each team member, and we're going to do security training.
A
And lastly, we wanted to highlight that we're currently either leading or taking part in three important working groups: the Next Prioritization working group, the Contribution Efficiency working group, and the Demo and Test Data working group. Nick, I'll send it over to you for Contributor Success.
B
So that's what we're going to focus on. On the next slide, we can see that I joined, which is the first part of growing the team, and also that Amy has switched teams from the 11th of May until at least the end of this quarter. There's an open position on Greenhouse, and we're also collaborating with another person within GitLab on the recruiting and sourcing. The kanban and backlog boards are available to us.
B
We did quite some analysis and gathered learnings. One of the important learnings we can zoom in on just a little bit is that, to make contributions sustainable, we have to get to 10 contributions per contributor, because that's the break-even point in terms of making sure that someone is successful and then keeps on contributing back to GitLab. For other learnings, I suggest you look at this data slide. On to you, Kyle.
C
Thanks, Nick. I'll talk through Engineering Analytics on Mek's behalf. I'll do my best, but please direct any questions to Mek during the group conversation. The themes are similar to the OKRs above: product, ARR, and people. We're delivering product development workflow insights, mainly in support of Next Prioritization and SaaS efficiency.
C
Well, sorry, the ARR focus is SaaS efficiency: delivering cost insights, making sure that we have the right visibility into our gitlab.com expenses and tracing them back to each team, so we can identify areas to improve. And then there's growing the team: I believe there are a few openings in the Engineering Analytics team, including a leader for the team, so if you're interested, we are accepting referrals. Now let's move into the work type improvements.
C
The main idea here is that we're aligning on three categories or types of work (bugs, features, and maintenance) and making sure engineering teams have visibility into the makeup of their backlog and the type of work being completed, so they can compare the amount of work they're doing to what their backlog looks like and keep prioritization healthy going forward. This is an example of the dashboard that is available for teams.
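The backlog-versus-completed comparison described above can be sketched as follows. This is a minimal, hypothetical illustration; the issue records and type names are assumptions, not the actual dashboard's data model.

```python
from collections import Counter

def type_makeup(issues):
    """Return each work type's share of the given issues, as fractions.

    A hypothetical sketch: each issue is a dict with a "type" key
    (bug / feature / maintenance), mirroring the three categories above.
    """
    counts = Counter(issue["type"] for issue in issues)
    total = sum(counts.values())
    return {t: n / total for t, n in counts.items()}

# Compare what a team's backlog looks like against what it completed.
backlog = [{"type": "bug"}, {"type": "bug"},
           {"type": "feature"}, {"type": "maintenance"}]
completed = [{"type": "feature"}, {"type": "feature"}, {"type": "bug"}]

print(type_makeup(backlog))    # bugs dominate the backlog
print(type_makeup(completed))  # features dominate completed work
```

A gap between the two distributions is the signal the dashboard surfaces: a bug-heavy backlog paired with feature-heavy output suggests prioritization needs rebalancing.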
C
There's an issue as part of the Next Prioritization working group to roll this out to all teams, so if you're an engineering manager listening, you'll likely be hearing from Lily or someone else in the working group. Moving into Engineering Productivity, our OKRs are around growing the team and increasing the controls that we have in place for code owners, which was talked about above. We're still early in the quarter and we're making great progress.
C
As far as the measurable results, we're still looking to make progress on our measured goals. Moving into the performance indicator updates, we've identified a few duration regressions, which impacted time to first failure, pipeline duration, and cost in a negative direction; we worked to correct those earlier this week.
C
We caught an issue with the GitLab cloud native chart implementation through dogfooding; it did cause a large decrease in review app deployments until it was fixed. Master stability is a big area of focus right now: especially over the last week, master has been almost historically unstable. So as an Engineering Productivity team, we're looking to hold a retrospective and get feedback from other teams, to identify corrective actions we can take from some of the themes we're seeing with master being broken. And I'll quickly touch on some of the achievements.
C
Sorry, you're on the right side there, Tanya. One of the things we focused on last quarter is expanding code owner approvals for high-impact changes. This was done in response to a security issue, so that we can ensure the right DRIs are approving changes that affect their areas. Now we're rolling that out to more areas and allowing those teams to decide where they need visibility on changes, so that they're not surprised by something that could cause an incident.
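In GitLab, required code owner approvals like the ones described above are declared in a `CODEOWNERS` file, with sections mapping paths to the groups that must approve changes to them. The paths and group names below are illustrative examples, not GitLab's actual configuration.

```
# Illustrative CODEOWNERS entries; paths and groups are examples only.
[Database]
/db/ @example-org/database-team

[Authentication]
/lib/auth/ @example-org/auth-team
```

Combined with protected-branch settings that require code owner approval, this ensures the named DRIs must sign off before high-impact changes to their areas can merge.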
C
Also in support of the Next Prioritization working group, we're adding tooling to infer merge request types, so that we can drive that number, which was 30% in December, down to 5% at the end of this quarter. And we had good success on hiring: as we mentioned, we added two new team members to the team this quarter, and we're looking forward to continuing growth of the team. One notable thing for work in progress, I would say, is that we're dogfooding the Visual Review tool for GitLab review apps.
C
This is really to allow product designers to provide feedback more easily using review apps in the MRs they're reviewing, instead of having to spin up a Gitpod instance or use GDK to look at them. Review apps have become a lot more stable, so we're really encouraging designers to use them, along with the functionality we have in the product, to provide feedback on those designs. Over to you, Tanya, to wrap it up.
A
Yeah, thank you. For Quality Engineering OKRs, there are a lot of similarities to what you're hearing elsewhere. We're going to be focusing on S2 bugs; part of that is with the Next Prioritization working group, and part of that is also a refinement on the QEM side to make sure that the bugs are still relevant and correctly severitized.
A
We're also supporting top 12 initiatives such as cloud licensing and Project Horse, we're going to shift left through increased testing types, and we're trying to improve efficiency generally. For our performance indicator updates, S1 is in a good state. We are a little bit high in May due to just one older blocked bug; given the low counts of S1 bugs, one older bug can have a pretty large impact on the average. S2 is in a little bit worse shape, and it continues upwards.
A
We did see a dip in Q1 based on an OKR, so we're hoping that our Q2 OKR can continue to drive some improvement, and the Next Prioritization working group should also help drive a stabilization and, in the long run, a decrease. Our SET gearing ratio is going in the right direction, and it will continue to over the course of this year with our nine remaining headcount, so we're looking forward to seeing that improve as well.
A
Our average duration is steady at target, but our average age does continue upwards. This is a concern for us, so we have a Q2 OKR to quarantine all tests past SLO, which should drive a pretty massive improvement. For our achievements, we're proud to say that we supported a wide variety of cross-functional initiatives.
A
We've also created iterations of GET (the GitLab Environment Toolkit) and reference architectures, mainly to support Project Horse, but also to make it easier for new users to onboard and to understand the cost involved with running one of our reference architectures.
We've also, and this is cross-functional too, made improvements to Staging Ref and canary. And we created a triage bot to help alleviate CS workload; I'm really happy to see this collaboration by our team. It reduced triage time by 75%. And lastly, we're shifting left.
A
As far as work in progress, there are similar themes: we're continuing to support Project Horse decomposition, and we have more shift-left initiatives that we're pulling in, including running end-to-end tests on more MRs, implementing load performance testing on Staging Ref, and implementing backboard testing. We're also working to improve the end-to-end test suite, both its speed and its ability to selectively run tests, to make it more usable.