From YouTube: 2022-05-18 Maintainership Working Group
A
That's okay, yeah. I thought that was just the audio anyway.
A
All right, let's get started. Welcome to the Maintainership Working Group for May 18th. This agenda is a little bit light right now, but I bet it will get a little longer as discussions start. So: changes since last time. Kyle, you can go ahead and verbalize for Max. Oh, here's Max.
B
Okay, so ib had added capacity data into the roulette workload dashboard. With that you can see, by role (reviewer or maintainer) and by specialty (backend, frontend, database), what the available capacity for reviewers and maintainers is.
B
You can see that over time. We also got that data loaded into Sisense late last week, and I started a simple dashboard charting the availability rate, the reduced-capacity rate, and the rate of total capacity that's on PTO as well. I'll work with Max to expand that out. I think this tells us our capacity; we still don't have great data on the demand side, review requests by project and type.
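As a rough illustration of the kind of aggregation such a dashboard could chart, here is a minimal sketch. The status names and counts are made up for illustration, not Sisense's actual fields:

```python
from collections import Counter

def capacity_rates(statuses):
    # Share of people in each capacity status; status names are illustrative.
    counts = Counter(statuses)
    total = len(statuses)
    return {status: count / total for status, count in counts.items()}

# Hypothetical snapshot of 50 maintainers' self-reported capacity.
snapshot = (["available"] * 27 + ["reduced capacity"] * 8
            + ["unavailable"] * 10 + ["pto"] * 5)
rates = capacity_rates(snapshot)  # rates["available"] is 0.54
```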
B
We may be able to do something clever in Sisense with the Danger comment: parse it out to see how many MRs have different review requests. That would be in lieu of the work I'm doing with the EP team to try to log that somewhere else, but maybe we can get something done before that point in time.
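As a sketch of that "parse the Danger comment" idea: if the roulette table in the Danger comment looks roughly like the markdown below (the layout here is a guess, and the real comment format would need checking), the requested review categories could be pulled out with a regex. All names are hypothetical.

```python
import re

# Hypothetical reviewer-roulette table from a Danger comment; the real
# format may differ, so the regex would need adjusting against it.
DANGER_COMMENT = """\
| Category | Reviewer | Maintainer |
| -------- | -------- | ---------- |
| backend  | @alice   | @bob       |
| frontend | @carol   | @dave      |
| database | @erin    | @frank     |
"""

def review_request_categories(comment):
    # One roulette row is treated as one requested review category.
    return re.findall(
        r"^\|\s*(\w+)\s*\|\s*@\w+\s*\|\s*@\w+\s*\|\s*$",
        comment,
        flags=re.MULTILINE,
    )

categories = review_request_categories(DANGER_COMMENT)
```

Counting these per MR across a project would give a crude demand signal until the EP-team logging lands.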
D
So things are calming down on that front. Well, from my perspective, anyway. Yeah, that's really awesome. I had a quick look at the Sisense dashboard, and I'm happy to tinker with that if you're short on time, but I haven't got write access to it. I do have an editor account in Sisense, but not to that dashboard.
B
Okay, yeah, I'll take a look real quick, because I thought I turned editor access on. We'll figure that out outside the meeting. Thanks for letting me know.
B
So, just using basic numbers: let's say we have 50 maintainers. We're seeing between 25 and 30 opted-in maintainers, so almost half of that pool is not available for suggestions. Ideally we can use this to chart over time as we make some corrective actions: are we increasing maintainer participation, or is it staying the same, so that the load is shared more evenly?
D
That's the same, really. I think the assumption that we're challenging is that increasing the number of maintainers necessarily increases the total maintainer capacity across the company. That's not to say that we shouldn't be increasing the number of maintainers, but we do need to focus on understanding why our currently active maintainers are not taking on reviews.
A
And then, on that, Kyle: do we have data (let me go back to ib's dashboard) on what the average is? I know that Natalia has an issue about what "overwhelmed" means. Is there data on what maintainers are doing for reviews on average, so that maybe we can infer an ideal target, kind of thing?
A
Thank you both very much. Let's skip Steve: he has no updates on the communication plan, and there's not a lot to communicate right now, although the dashboard is incredibly useful. And then I'll verbalize for Robert, who's doing a rapid action right now.
A
No progress since last week on Robert's part, due to the rapid action. He'll sort out his action item from last time, which was gathering feedback from the various engineering levels on our current maintainer processes. Feel free to tag him on anything related to this exit criterion, because he'll hopefully be done with the most part of the rapid action today, and he'll also be away next week.
A
Is there anything that we need from Robert that we want to put in the agenda? And then I'm probably going to actually pull up his exit criteria.
A
Okay, so: help needed. If you're the DRI for one of the exit criteria, would you please open an MR to update the start date and your anticipated end date, as well as any progress completion (this is a percent)? If you still have more issues that you need to add to your epic, or if you know that you have issues to add to your epic but you don't quite know what those issues are, feel free to work with me in the Slack channel and we can help come to something on that.
A
I'm sorry, this is a lot of me talking this time. I'll also verbalize for Steve, who's not present. Steve says: do we know if certain teams have more community contributions than others? If so, that will essentially force those teams to be involved in more reviews than teams that have fewer community reviews. How can we account for this when considering the maintainer workload?
B
Yeah, the Contributor Success team should be able to help here. Right now it's just Nick, and Remy is on borrow on that team. There is a chart that can help give that insight by stage (I linked to it in A2), and it shows how many MRs were opened as of a given date. It's a daily chart going back, I think, at least 12 months, so you can see trends by stage and by group. That chart gets a bit noisy, but I'm sure we could export data in the format that's needed.
D
This is kind of anecdata rather than anything solid, but a lot of them appear to be version bumps of dependencies, so from what I've seen they can be quite small. That's not to say they don't take cognitive load from Verify, but they're not always big, chunky MRs that need lots of eyes on them.
B
There are also a lot of stale Runner MRs, like years-old Runner MRs, in that group of Verify MRs, that are adding new features that may not be desired. I think there's just a large backlog that Elliott Rushton, for example, is aware of, but it's hard to work through with the team.
A
All right, thank you again, Kyle, for that. I have moved my items so that it's not just me. Minaj, do you want to verbalize your point?
E
Yeah, so I was going to ask about a missing piece that we would require to get the roulette thing ready: there's one API endpoint that appears to not be supported in the API docs. If we have that, I think that would be the last missing piece we would need; and if it's not there, then maybe we can schedule an issue to get it done.
E
So Kyle says that it is in the UI, so at least GraphQL should be able to support it, but I'm not sure if roulette uses GraphQL yet, so, yeah.
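For reference, a sketch of what pulling reviewer data over GraphQL could look like. The project → mergeRequests → reviewers fields do exist in GitLab's public GraphQL schema, but whether they cover the missing endpoint discussed here is an assumption; the parsing below runs against a canned payload rather than a live call.

```python
import json

# GraphQL query intended for GitLab's /api/graphql; fields assumed from
# the public schema (project -> mergeRequests -> reviewers).
QUERY = """
query ($path: ID!) {
  project(fullPath: $path) {
    mergeRequests(state: opened, first: 100) {
      nodes {
        iid
        reviewers { nodes { username } }
      }
    }
  }
}
"""

def reviewers_by_mr(response_body):
    # Flatten a GraphQL response into {mr_iid: [reviewer usernames]}.
    data = json.loads(response_body)
    nodes = data["data"]["project"]["mergeRequests"]["nodes"]
    return {
        mr["iid"]: [r["username"] for r in mr["reviewers"]["nodes"]]
        for mr in nodes
    }

# Canned example payload, shaped like a real response.
sample = json.dumps({"data": {"project": {"mergeRequests": {"nodes": [
    {"iid": "101", "reviewers": {"nodes": [{"username": "alice"}]}},
    {"iid": "102", "reviewers": {"nodes": []}},
]}}}})
result = reviewers_by_mr(sample)
```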
B
Sorry, I wasn't making the connection to roulette. I thought it was just GitLab.
C
Point C: yes, that's my question.
C
Sorry, I missed the last meeting, and I see that there were a couple of other meetings before then, so this is just for my benefit, because I just joined: have we discussed anything about specifying code-review workload for engineers, like we do for deliverables? I'm not sure whether that's the same for other teams, but at least for our team we have a weight of 10 that we use as a guide for adding deliverables within the release. I'm just wondering whether we thought about adding something like that for reviews, so actually making it more...
C
You know, making it something that everyone is sort of aiming to get to. As opposed to currently, where I feel like it's treated as a side job, a "you don't have to do it" kind of thing, whereas with the deliverable count you sort of try to push it. So I'm wondering whether we had any discussion around that.
D
I've been wondering the same, and I'd be curious. I guess this is something Michelle might be able to find for us: if we were to take productivity stats of engineers who are and aren't maintainers, is there a general trend already that maintainers generally just get a bit less done in terms of feature development than non-maintainers? Because it wouldn't surprise me.
C
Yeah, I mean, that's the next point I made in there as well; that's a different thing, and I support it. But the main thing was that I think I saw a comment in one of the issues saying that some people like to review more and some people want to focus on the actual deliverables, and stuff like that. So if people take on more review tasks, I personally think we should lower their expectations on deliverables.
C
Obviously they spend more time reviewing. If you think the two are of equal weight in terms of what we call the significance of the contribution that engineers make, then I think we need to be able to move things around based on their preference or their desire, or something like that. Maybe some people drop maintainership because they feel too overwhelmed by doing reviews properly and doing the deliverables at the same time. So, just a thought: maybe we should explore that area.
A
So I'm understanding this question to be kind of a two-parter. Number one: have we thought about what the guidance is in terms of your reviews? And number two: have we thought about reducing or increasing your capacity to do reviews? For the first one, there's an issue around what the guidance is, and I think that might be Natalia's issue under the exit criterion to develop metrics.
A
We don't know what the guidance is unless we have the data. But the second point is around capacity, and I'm glad you brought that up, because I think Christopher had an action item I didn't follow up on from the past: to speak with David DeSanto about reducing capacity, number one, for this maintainership-initiative push sort of thing. But there should also be an issue (David, if you want to create one) for "do we need to consider that?" That's an open question.
A
One thing that you had mentioned was preference: if I'm an engineer and I prefer to do reviews more than I prefer to do, for example, deliverables, should we be flexible on that? To answer your second question, in terms of what triggered this working group: there were a lot of complaints from existing maintainers, and a lot of maintainers are now ex-maintainers because they couldn't keep up with the demand, and it's not balanced. So if I want to review less and work on deliverables more, then, for example, Minaj...
A
This now puts him in a situation where he has to review more because I'm reviewing less, and that's how we get into the situation we're in today. One of the meetings that we had a few weeks ago was around: our maintainer ratio looks really good in the handbook, but in practice you see people who are reviewing way more MRs than other people, or whose do-not-disturb status is kind of a question mark, and things like that.
C
Yeah, I guess that's around saying: obviously, if you are taking on the role of maintainer, then I would think you'll be spending more time reviewing. So it seems to make sense for maintainers to focus more on the review side by virtue of being a maintainer. But at the moment it's sort of up to the engineer to work it out, and we don't really cater for that.
C
As far as I know, anyway. That's maybe why many people are feeling pressured to do both. I also feel sometimes that my deliverables are not getting done while I focus on the reviews and stuff like that; it's always too much of a juggle. So I'm just trying to think whether there's anything we can do to make it easier for those who choose to be a maintainer.
A
So I think that there's... and actually, Max, I thought that you had one related to this.
A
I thought that you had one related to what the guidance or the ideal ratio would be, but I might be mistaken. I think there are a few issues here. David, if you can create one for the capacity aspect, the "should we be reducing capacity" situation, then the second one would really be... help me out.
A
I'll have to go back and probably watch the recording, because I just lost my thought on that, but I think there was a second issue here that we should maybe dig into. Oh, Max: you had mentioned the workload for maintainers, whether they're actually less productive. I think that would be a good issue to look into.
F
This is mostly because, as far as Distribution is concerned, MRs can come from two broader categories: one, from within the team; two, from outside the team. But how it is different is that "outside the team" can mean other teams from GitLab.
F
This is a bit different from Rails development, or the usual package development, or any other components, because most of the changes that the Rails backend has will automatically mean some change in omnibus-gitlab or Charts. So any change in the Rails backend can translate to more work for the reviewers and maintainers on these teams. So, essentially, we wanted to give the first reviewers, the people who are not yet maintainers but want to be maintainers...
F
We wanted to give them a sort of agency to pick the stuff that they wanted to focus on, instead of getting assigned random MRs from random projects in random portions of the code base. The three projects that Distribution takes care of are entirely different in the sense of technology stack and how stuff works. So we wanted to give that agency to reviewers, so that they could pick up stuff based on their preference and at their own pace.
F
They can move toward maintainership, traveling the path to maintainer at a pace that they feel okay with, and they can choose to focus on one area for a particular time: pick up MRs from that area, make sure they get experience in it, then move on to the next one. So they have that choice.
F
So I recently asked my team what their feedback was about this: do you think you are overwhelmed, or do we see more MRs breaching SLO because of a bystander effect, where no one is speaking up because everyone is thinking the other person will pick it up? But the general feedback was that the team still liked this idea of having the agency to pick what they get to review, rather than being sent random stuff to review, so they can do this at a comfortable pace.
F
So I don't think this makes sense for the Rails backend project, because it is a huge project with a huge number of changes coming in. But since we are dealing with the maintainership process: what are our plans for doing this for projects that don't use reviewer roulette? Distribution doesn't, I think; GitLab Pages doesn't; and, I'm not sure, but I heard somewhere that Gitaly might be thinking of dropping the roulette in favor of something else.
F
So, if we are moving forward with the standardization, are we thinking of making reviewer roulette a standard thing used across the company? Or should we be thinking about alternative setups that teams that fall outside our regular development process can take? That's how it relates to the topic we were discussing.
F
I think this gave the first-level reviewers more of a confidence to work on stuff and get to maintainership, thinking of it as "I can do it at my own pace" rather than "I will be overwhelmed: if I mark myself as a reviewer, then I will be getting review requests all of a sudden and I won't be able to do any work." Rather than that, they can pace themselves.
A
I
can
speak
to
this
one.
If
they're
is
there
any
other
feedback
or
questions
or
anything
that
we
would
like
to
ask,
so
I
can
speak
a
little
bit
to
this
one.
This
is
why
exit
criteria
one
is
really
important
and
that
one
still
doesn't
have
a
dri
exit
criteria.
One
is
number
one
for
in
sizes
when
we
calculate
the
maintainer
ratio,
we're
excluding
a
lot
of
projects,
and
should
we
be
and
number
two
does.
A
Maybe we should just take a step back and understand what demand Omnibus is going to have. What demand do you have today? It sounds like you're all doing a really good job handling that demand. What does that demand look like next year? Are you going to be able to handle it next year? Sounds like maybe yes, and at that point, is that a problem that we need to solve? GitLab itself is a little bit different.
A
We do have a problem today, so I know we'll have a problem a year from now, and we need to solve that problem. But rather than solving it one at a time, maybe we can figure out who has the problem, and whether we can implement something for all of them or not. Because otherwise, I think what I have learned throughout this process is: we do have a lot. You mentioned Charts and Omnibus; we've also got GitLab UI.
A
We've got technical doc repos. There are so many repositories that we're maintaining, plus the analyzer groups, and not all of them have problems or are seeing issues. And so it's become difficult, if that makes sense.
C
Just on that, I'm wondering: I think you mentioned a couple of times before that people were complaining, but I wasn't aware of how this working group started, which was sort of my next question. Was it from the main GitLab repo, like people complaining that they have too many reviews that they have to do, or is it...?
A
I would say the reviewer side. What has happened over the past three months, I would say (and we've seen this in our OKRs already), is that the database group has been incredibly overwhelmed, and they've been almost enforcing that all of the product groups get more maintainers; maybe you have heard of this "more database maintainers" push. At the same time, Workhorse and Shell have, I think, one or two maintainers total, and so when an incident happens they're not able to merge any of the work or these rapid actions.