From YouTube: CHAOSS Metrics Models Working Group 12-7-21
A
My brain is fuzzy too. Yeah, perfect: my throat and my brain. So the minutes are in the chat. If you could add yourself, that would be wonderful.
A
I will share my screen here. A couple things I just want to chat about today. So this is just so Huey knows, and Lucas, you know too: this is our last meeting until 2022. I'm not entirely sure if we start this one up the week we get back or the week after. You know what I'm talking about? Yeah, I'm looking, Chad, for whether it's the tenth or the later one.
D
There you go. Yep, it looks like we'll have this meeting next on January 18th. Okay.
A
We'll make sure that we get this on the calendar, so not a big deal. There was...
E
Is there another URL for that? I'm getting the SharePoint error.
A
Yeah, yeah. Sean, can you? You always seem to help.
D
Yeah, I've got it up. I thought I'd updated the meeting, but, you know, I think a lot of things... This is the one I have, which does look slightly different than the one Matt shared, and that's from the meeting appointment on the CHAOSS calendar.
A
That is working. Okay, okay, awesome, thank you. All right, so the first thing that I just wanted to kind of talk through a little bit is... oh, hi Emma.
A
So the first thing that I wanted to talk about is just the spreadsheet. This kind of goes to a larger issue that we're having in the CHAOSS project, which is ensuring that our documentation is accessible, consistent, and makes sense for folks.
A
You've all seen this. I've been working pretty hard to update the spreadsheet across all of the metrics working groups. So this is different than the Metrics Models Working Group, but I'm trying to really simplify the spreadsheet so that it carries enough information to be useful, but doesn't carry so much information that it just absolutely melts your brain.
A
I think we can follow a release cycle, or a release cadence, that's similar to the metrics. So as we have a new model that's being developed, we can track it in this spreadsheet, similarly have remarks about that potential model, and then also have links to the metrics model as we're working on it.
A
The process of work is that when something is in progress, so like row 12, or any of these, like 17, 18, and so forth (13 is an outlier on this one for a second), we just work on it in a Google Doc. So the Google Doc is our work-in-progress platform. We don't do works in progress on GitHub or somewhere else; we use a Google Doc, or we can use some other shared document tool.
A
That's completely fine, like SharePoint or something else. It's only when we have put the metric or the metrics model under review that we move it to GitHub.
C
I believe just a small part of them cannot access the Google Doc, but most of them have access to it. And I also discussed with them that if somebody cannot access it, I can transfer that content from the Google Doc to some other tools. Yeah, yeah, exactly.
E
Correct me, but I think the essence of what you're going for here is that the repo is for work that's ready for GitHub and Git-type flows, and you have more kind of shared editability through the...
A
So any of these... that namespace of github/chaoss/wg-dei is an important space that we don't work in. That's only where our things are done. Of course we can do a pull request if somebody has an issue, you know what I mean; we can change things in there, of course, but generally speaking, yeah.
A
The contributor funnel... oh sorry, oh no, I don't have it in here. I did not see that, so that should be in here somewhere.
A
Okay, cool. So the website: this thing was one thing to clean up; the website is gonna be a whole nother thing to clean up. Correct, that's next.
A
It's interesting: community growth and community longevity create this proliferation of documents, and they end up just ever so slightly splitting from each other, and over time they need to be brought together. Okay, all right, cool. So then, in terms of the metrics model tab for this group, because it's different, right? This is different than the metrics, because the metrics models are slightly different. Yeah.
A
I've been following a simple model: when it's released, it's version 1, and if we modify it, it'll become a version other than 1, all right? And right now, when anything's in progress, it's not given a version. The reason we wanted to version things, I think, is because, as we do updates, we just want to be able to track that this has been updated.
A
This is like a 1.1 of this metric. Sean had made a recommendation that we potentially have a working group home for a metrics model. It's becoming pretty common that, say, the Risk working group would be developing a metrics model, or the DEI working group; maybe we could talk about that, Elizabeth, but like the DEI working group would be developing a metrics model. We actually talked about one just a couple days ago, on Monday.
A
Yesterday, right. So we may have a home for these, and the question that I would have for people is: if we have a working group home... so, for example, DEI event badging. We've talked about this as a metrics model.
E
I think we want to talk about where the line is between the output of a working group, the metrics themselves, and a model. I started to work on a DEI model, and I found that basically I was just restating the original webpage and not really adding any value.
C
Gotcha, yeah. I share the sentiment about this, because what we want to do is pick up the different metrics that have logical connections across the different working groups.
D
Fair. Shaun, did you have a comment on this? Yeah, it's a really good point that it will be hard for individuals approaching the project to distinguish between the models and the metrics. I don't think that prevents working groups from developing metrics models, and/or proposing them and having this working group sort of help refine them.
A
So maybe "home" isn't the best phrase there, really. I don't know.
D
Now then, all right. So what's the name, what's it called? I mean, we're calling it "welcoming", but it's Elizabeth's metric that she proposed in one of our prior meetings, and I was... "welcomingness" or "welcoming"... whatever. It's the best phrase I could come up with. I don't know if that's what Elizabeth intended.
D
There we go. So these are the metrics at the top here. Raghava and I have been calling it the Welcoming metrics model, or the Elizabeth model. Elizabeth identified activity, community culture, licensing, stability, and code-related metrics, and Raghava and I built a number of things: either leveraged assets that are in Augur, or things Raghava built. And we have some holes, which I can talk about a little bit, for example inclusive leadership.
D
I think that's one that we're going to have to borrow from the DEI working group, and I don't know if we... we don't have a quick way to measure that, so we'll have to figure out how we incorporate it into a model. If a model is something that people run, it may just have to include instructions on how to (good point) deploy it.
D
Yeah, I think a lot of them will. Many of our metrics include trace data, but, like...
D
Right, right. And I think, as we talked about, these are the five headings of the metrics, and we've tried... This is a Jupyter notebook, so it's a little clumsy in terms of presentation. We'll tidy it up as we finish it and make it a thing, though we may store it as a Jupyter notebook.
A
Yes, please, that would be great, thank you. And just for Emma, I don't know if you're hearing us, but basically, maybe a few weeks ago or a month ago, we had spent some time in this meeting brainstorming on some different metrics models that people might be interested in, like how they could draw metrics together in ways that would be meaningful in different contexts. And what Sean and Raghava have been working on is... so we can...
A
We can specify the metrics models, and that's great. Cool, no problem. So we can specify those metrics models, like in those documents that I was showing earlier, but Sean and Raghava have actually been doing work on deploying the metrics models. So if people want to see this data, how do they go about doing that? That's just a little bit of background.
D
I think we need to work on this a little bit, because the issues that were opened longer ago are, basically, older, so it's not a perfect visualization just yet, so I'll move past it pretty quick.
D
Then we have issue response time, and in many cases there's an existing Augur endpoint that can deliver the data. In this case it's kind of a new way of representing the data in order to make this kind of visualization, so we'll roll it in as a new endpoint, so that this big query isn't in the notebook and you don't need database access to get to it. But this is issue response time. The example project is Augur, and this just shows that. Can it get a little bigger at all?
D
I can, oh yeah. It makes it a lot bigger too. And we'll probably organize this by month or quarter; it's organized by month, and we could probably go to quarter if we're gonna show the full length of the project.
D
That's kind of a design decision, and one of the reasons we'll make it available as a Jupyter notebook with Augur endpoints is so that people can play with it and just apply it to their own Augur instance. Ultimately, our goal in the coming year would be to have examples from GrimoireLab as well.
D
And so this is issue response time. Also under activity is issue time to first response, and so this is... correct me, is this a mean, or is this cumulative?
D
It's a mean. Okay, so this is the average time to first response. "How long it took to close" is what's on the chart, so... yep, yeah. So we also have a first-response representation, and this is kind of a duration-to-close as well.
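A minimal sketch of the mean time-to-first-response calculation being discussed, on made-up issue records; the field names are illustrative assumptions, not Augur's actual schema or endpoint output.

```python
from datetime import datetime
import statistics

# Hypothetical issue records: opened timestamp and first-response timestamp.
issues = [
    {"created_at": datetime(2021, 1, 1), "first_response_at": datetime(2021, 1, 2)},
    {"created_at": datetime(2021, 1, 5), "first_response_at": datetime(2021, 1, 5)},
    {"created_at": datetime(2021, 2, 1), "first_response_at": datetime(2021, 2, 8)},
]

def mean_time_to_first_response_days(records):
    """Average days between an issue being opened and its first response."""
    deltas = [
        (r["first_response_at"] - r["created_at"]).total_seconds() / 86400
        for r in records
        if r["first_response_at"] is not None
    ]
    return statistics.mean(deltas) if deltas else None

print(mean_time_to_first_response_days(issues))  # → 2.6666666666666665
```

Issues with no response yet are simply excluded, which is one design choice; another is to count them as open-ended and report a median instead.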
D
Is
not
yet,
but
there
will
be
I'll
make
this
available
soon
right
now,
there's
not
a
link
and
that's
just
because
there's
database
credentials
in
it.
I
have
to
convert
a
few
things
to
auger
end
points
first,
so
that'll
happen,
but
you
know
before
we
meet
next
for
sure
I
would
guess
in
the
next
week
we'll
get
that
done
cool.
Thank
you
for
community
culture,
there's
code
of
conduct
and
really
is
there
a
code
of
conduct
or
not
is
kind
of
the
indicator.
D
We
weren't
sure
how
to
visually
represent
this,
but
we
include
a
link
to
the
code
of
conduct
where
it
exists,
so
augur
will
gather
the
code
of
conduct
and
then
oops,
apparently
that
didn't
work.
That's
comforting,
like
you
get.
D
So GitHub has metadata around code-of-conduct files, and it's part of how you make your project searchable. So if you declare a code-of-conduct file, then it's collected as part of the metadata, which is possibly why this is wrong, although this link should work, right? Yeah, oh yeah, that's right.
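A sketch of reading the declared code of conduct out of a community-profile-style API response, of the kind GitHub's repository metadata provides; the payload below is an illustrative shape, not a verbatim GitHub response.

```python
def code_of_conduct_from_profile(profile: dict):
    """Return (name, url) of the declared code of conduct, or None if absent."""
    coc = (profile.get("files") or {}).get("code_of_conduct")
    if not coc:
        return None
    return coc.get("name"), coc.get("html_url")

# Illustrative response, trimmed to the relevant fields.
sample = {
    "health_percentage": 100,
    "files": {
        "code_of_conduct": {
            "name": "Contributor Covenant",
            "html_url": "https://example.org/CODE_OF_CONDUCT.md",
        }
    },
}

print(code_of_conduct_from_profile(sample))
```

Guarding every lookup matters here because, as noted in the meeting, many projects declare nothing, so the nested keys are routinely missing.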
D
Here, there we go. That's exactly what the query returns, and apparently the whole thing doesn't get transformed into the Jupyter notebook very cleanly; it leaves a piece out. So if a project has one declared in the metadata, it's gathered. Are there other places we should be looking, Emma?
H
I mean, you know more about the metadata. I just know that sometimes it's in the .github folder, and sometimes it's right in the root, and then other folks will just kind of put it in random places. I know sometimes I've had trouble finding it; that's why I was wondering.
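The hunt across locations Emma describes can be done with a simple filesystem scan of a cloned repo; the candidate list below covers the conventions she mentions (root and .github/) plus docs/, and is an assumption rather than an exhaustive rule.

```python
from pathlib import Path
import tempfile

# Common conventions for where a code of conduct lives; not exhaustive.
CANDIDATES = [
    "CODE_OF_CONDUCT.md", "CODE_OF_CONDUCT.rst", "CODE_OF_CONDUCT",
    ".github/CODE_OF_CONDUCT.md", "docs/CODE_OF_CONDUCT.md",
]

def find_code_of_conduct(repo_root):
    """Return the first code-of-conduct path that exists, else None."""
    root = Path(repo_root)
    for rel in CANDIDATES:
        if (root / rel).is_file():
            return root / rel
    return None

# Demo against a throwaway directory standing in for a cloned repo.
with tempfile.TemporaryDirectory() as repo:
    (Path(repo) / ".github").mkdir()
    (Path(repo) / ".github" / "CODE_OF_CONDUCT.md").write_text("Be kind.")
    print(find_code_of_conduct(repo))  # prints the discovered path
```

A metadata-first approach with this scan as fallback would catch both the declared and the "random places" cases.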
D
Yeah
yeah,
so
it's
if
a
project
has
declared
that
they
have
a
code
of
conduct
and
github
kind
of
motivates
the
projects
to
do
this.
They
they
tell
you
what
you
need
to
do
to
be
a
project
that
is
up
to
community
standards,
and
this
is
something
that
github
started.
Maybe
four
years
ago
now
at
github
universe,
I
think
in
2017.
D
Yeah
and
yeah,
so
that's
inclusive
leadership.
That's
the
one
I
mentioned
at
the
outset
that
we
have
to
decide
how
to
handle
that
license.
Coverage
is
one
that
is
an
auger
endpoint
and
I
just
for
convenience,
just
copied
the
image
on
the
main
page
that
hits
that
endpoint,
instead
of
showing
you
the
pure
json,
because
I
had
something
and
license
is
declared.
We
also
have
a
auger
endpoint
that
delivers
a
bunch
of
data,
but
we
need
to
process
it
in
the
metric
model.
D
So when the license scanner scans Augur, it comes up that we have every single open source license available in at least one file.
C
I have a question: which license scanner is used by Augur? Is it implemented by Augur itself?
E
You know, I think that the fact that we're having this conversation about how you quantitatively measure the metrics kind of goes to show the value of the models project, right? You're sitting there putting metrics into practice for the sake of making models, and it's kind of helping guide us for future projects.
D
Yeah, okay. And this is for badging, so: CII Best Practices badging. I should really call this "CII Best Practices badging status". Most projects, candidly, are not best-practices badged, and I won't run to the top to make that markdown cell play nice, but basically we show you that passing status is met by this particular repo. I was working on making it a pretty green or yellow color, but I didn't get that finished.
D
Test
coverage
is
very
challenging.
There's
a
lot
of
different,
it's
basically
language
specific,
so
we
don't
have
any
auger
implementation
of
that.
I
don't
know
of
any
metrics
toolkit
that
does
it's
it's
another
one
that
we'll
have
to
think
about.
D
Bus
factor
can
be
calculated,
but
I
think
we
want
to
give
people
parameters
which
is
what's
specified
in
the
in
the
repo
in
the
metric
definition
that
chaos
has.
D
I'm off doing the decoding, because I think I'm the blue one.
D
And whether they contribute: so this is first-time contributors per quarter, this is repeat contributors per quarter, this is all second-time contributors per quarter, and this one should be flyby. Apparently something snuck past my goalie: it's "flyby" in the narrative description below, but it's still "drive-by" in that title. I have to fix that.
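A sketch of the per-quarter bucketing being shown, splitting each quarter's contributors into first-time versus repeat; the contribution records are made up, and the bucketing rule is an assumption for illustration rather than CHAOSS's formal definition.

```python
from collections import defaultdict
from datetime import date

def quarter(d):
    """Label a date with its calendar quarter, e.g. 2021Q2."""
    return f"{d.year}Q{(d.month - 1) // 3 + 1}"

def contributors_by_quarter(events):
    """Split each quarter's contributors into first-time vs. repeat.

    `events` is a list of (author, date) pairs, one per contribution.
    """
    first_seen = {}
    for author, d in sorted(events, key=lambda e: e[1]):
        first_seen.setdefault(author, d)
    buckets = defaultdict(lambda: {"first_time": set(), "repeat": set()})
    for author, d in events:
        q = quarter(d)
        kind = "first_time" if quarter(first_seen[author]) == q else "repeat"
        buckets[q][kind].add(author)
    return {q: {k: len(v) for k, v in b.items()} for q, b in buckets.items()}

events = [
    ("alice", date(2021, 1, 10)), ("bob", date(2021, 2, 1)),
    ("alice", date(2021, 4, 3)), ("carol", date(2021, 5, 9)),
]
print(contributors_by_quarter(events))
```

A "flyby" bucket would need a further rule (e.g. a single contribution with no later activity), which is why the naming question above matters.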
E
This is just another one: if it's possible to get some kind of benchmark on those drive-by contributors, it would make the data easier to understand. Yeah.
D
By benchmark, you mean like comparing with other projects? Yeah, I think so. This is always one of the tricks: people do need to compare projects to interpret them, and I think the point that you raise, Lucas, is a really good one. You know, do metrics models need to have some of those comparisons sort of enabled by default? Should we produce an example that lets you see a couple of projects side by side?
D
And then this is just another view of new contributors; this would just be cumulative new contributors in a month.
D
Using the new contributors endpoint that we have, that's just data. And then we have change request acceptance rates, which is also a visualization API endpoint. For the acceptance rates I'm just going to zoom this down a little bit, just to try to make it fit.
D
We
have
all
the
ones
that
are
merged
and
not
merged,
so
you
can
see
in
2020,
agar
merged,
360
and
didn't
merge,
78
out
of
438
and
then
in
2021
we
have
267
merged
and
68
that
weren't
merged,
and
then
we
also
look
at
the
the
20
slowest
to
be
merged
that
are
accepted
and
merged
and
slowest
to
be
merged
and
rejected.
So
you
can
see
in
we
had
88
accepted,
slow
ones
in
2020
and
38
accepted,
slow
ones,
and
so
more
of
the
slow
ones
get
rejected
in
2021
than
2020.
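The acceptance-rate arithmetic being read off the chart, as a quick check; the figures are the ones quoted for the Augur project above.

```python
def acceptance_rate(merged, not_merged):
    """Share of closed change requests that were merged."""
    return merged / (merged + not_merged)

# Figures quoted in the meeting for Augur.
print(round(acceptance_rate(360, 78), 3))  # 2020 → 0.822
print(round(acceptance_rate(267, 68), 3))  # 2021 → 0.797
```

So despite fewer PRs overall, the 2021 acceptance rate is only a few points lower, consistent with the observation that more of the slow ones got rejected.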
D
My
end
point
is
wrong,
so
I
have
a
task
to
fix
that
end:
point
for
the
just
the
data
and
a
visualization
that
regava
put
together
so
yeah.
That's
it.
D
Right now it's really hard to show and tell with the Jupyter notebook, but I think it's the easiest way to build something. So hopefully you all found it somewhat enlightening or helpful. I found it...
D
I will give you access to play with it, and if you have some repo... Right now it depends on Augur data, but it could easily be otherwise, so if there's a different way that we want to do these, you know, I'm open to it. I'm not married to doing it with Augur data; it's just what I know.
H
I don't know which one... and, like, show it to that working group, because we have a couple Dapper folks there, and then people sort of talk through how these may or may not apply to the thing we're building, the models; I'm calling it a model, I don't know really exactly.
E
Sean, are you okay with me posting that link to the Slack for the metrics model group? Yes? Cool, okay.
A
So, Sean, the one question I had... I'm gonna share my screen. So this, highlighted in yellow, is the work from when we were working kind of individually, about a month ago.
A
And
part
of
the
way
that
we've
been
developing,
the
metrics
models
is
like
you
can
see
here.
This
is
the
dei
event
badging
and
we
have
the
metric
and
then
we
kind
of
explain
why
you
care
about
this
one
with
respect
to
event
badging
and
then
why
you
care
about
this
one
and
so
on
and
so
forth.
A
I mean, I can just copy and paste it now, and yeah.
D
Yeah, I mean, we will tell people where the source of truth is, but also probably get things into the notebook. Okay, at least for initial adoption.
A
I'm
wondering
you
know
how
like,
when
you
do
the
graphics,
the
images,
then
you
provide
the
the
caption
I'm
my
my
brain
is
like
yeah.
Words
are
slow
right
now,
but
the
caption
like
you
could
put
some
of
that
in
there
perhaps
yeah.
A
Okay,
yeah
for
sure,
okay,
just
the
why
you
care
kind
of
thing:
okay,
cool!
Thank
you,
sean
and
regava
is
so
what's
next
for,
for
you
guys.
D
On
this,
I
think,
if
you
know,
we've
gotten
some
feedback
here
and
we
know
what
isn't
done
yet,
and
I
think
this
adding
context
is
really
important.
So
I
think,
between
now
and
when
we
meet
again
we'll
have
something
that's
publicly
shared
that
people
can
look
at
right
now.
You
know
for
you
emma,
I
think.
D
Certainly
I
can.
I
can
give
you
a.
I
give
you
a
week
we'll
finish
this
up
before
the
holiday,
the
christian
holiday
that
shuts
us
all
down
so
I'll.
Send
you
a
link
in
the
next
week
to
10
days.
A
We have just a few minutes and three items, but I'm guessing Emma put this one here.
H
Okay, so right now I have to open something for myself to look at while I'm talking. So, as you know, inside of Microsoft, and everywhere, I guess, people are asking about metrics: how do I tell whether my community or my project is this, that, and the other thing, right? And so I got a bunch of folks together from...
H
The
teamswordapper
babylon.js.net
a
bunch
from.net
and
then
some
researchers
and
someone
who's
in
charge
of
github
sponsors
just
to
say
like
what.
What
do
we
as
a
group
can
commit
to
work
on,
and
so
I
had
everyone
kind
of
list,
the
things
they're
interested
in.
Why
and
then
I
synthesize
that
into
two
groupings
which
I'm
calling
maybe
it'll
actually
be
better.
Do
you
mind
if
I
just
quickly
share
my
screen,
because
then
you
can?
No,
you
don't
have.
H
Right, so kind of jumping to the end here: there are these two areas that are really clear and strong. People want to know the very basics, and I think, Sean, you know, these are the initial ones that kind of came out: discovery, responsiveness (which is some of the stuff you're actually talking about), something around contribution...
H
You
know
by
type
org
region.
These
are
just
things
that
I
personally
made
up.
So
I
don't
meet.
You
know,
maybe
there's
other
categories
that
exist.
Things
like
you're
already
talking
about
again
so
exciting,
like
the
code
of
conduct,
finding
it
and
usage.
So
usage
comes
up
a
lot.
I
don't
know
if
there's
anything
there
and
then
the
project
sustainability
is
the
other
category.
H
So
that's
maybe
a
tougher
one,
but
definitely
like
a
bucket
right,
like
the
github
sponsors
folks,
want
to
make
sure
that
you
know
that
we're
supporting
projects
that
are
you
know
that
that
we're
supporting
projects
and
helping
sustain
them,
but
that
there's
certain
things
we
look
out
for
you
know
that
they're
an
inclusive
project,
but
also
you
know
we
want
to
see
if
there's
burnout
risk
somewhere.
We
want
to
send
folks
money
there
yeah
so
anyways.
This
is
just
the
things
that
I
came
up
with.
H
So
what
and
then
I
also
some
we
actually
have
some
existing
metrics
around
use.
As
an
example,
I'm
just
gonna
move
my
hopefully
the
I'm
trying
to
be
mindful
of
your
time
here
hold
on.
H
So
I
I
have
my
own
spreadsheet,
which
I
know
matt
you'll
be
so
excited
to
hear.
So
then
I
just
start
all
I
did
was
babylon
has
some
some
metrics
they
already
use,
which
are
like
under
discovery,
usage
and
contribution,
and
what
are
those
like?
How
many
open
issues
there
are?
H
How
many
like
git
depend
turns
up
things,
so
I
just
documented
what
they
already
had.
So
hopefully
we
can
either
validate
what
you
have
contribute
these,
what
they're
doing,
but
my
next
step
is
to
kind
of
start
to
fill
in
the
blanks
of
of
these,
and
you
know
I
would
love
to
do
in
our
next.
When
I
bring
everyone
together
again
is
show
them
some
of
that
auger
work,
connect
it
with
either
open
source
or
the
101
or
the
sustainability
yeah,
and
then
because
there's
the
auger
work.
H
This
is
where
my
nubianism
is
coming
on
like,
but
then
there's
also
other
types
of
metrics
that
are
more
like
things.
We
like
evaluate
yeah.
H
Yeah, and I know that for the "for good" folks, so GitHub, and at Microsoft the Open Source for Good folks, that's one of the things they're trying to evaluate, both to help identify projects that are, you know, badly off, I guess, those that need help. But yeah, you need scale to do that, like going through each repo and going, oh, is there, you know, this...
B
It's
hard
and
most
of
those
I
know
like
in
the
case
of
the
psychological
safety
metric,
mostly
just
surveying
the
community
members
a
lot
of
those.
So
I
mean
there
is
a
way
to
get
a.
You
know
to
get
some
data
around
it,
but
it's
tricky
because
you're
relying
on
the
communities
to
survey
their
own
members
and
yeah.
H
And I almost wonder if there's an iterative approach to these: like, you would run something like Augur to identify all of the projects that had a code of conduct, and then, you know, automate as much as we could, and then get to the point where we started to ask questions. Yeah.
E
So
I
think
you're
breaking
ground
here
and
it's
really
valuable.
So,
for
example,
your
project
sustainability
list
is
kind
of
a
model
in
itself
and
okay,
and
maybe
it
would
be
helpful
just
to
kind
of
you
know,
work
in
that
with
the
group
as
a
whole.
So
talk
about
the
safety
stuff
and
how
do
we
measure
burnout
risk
and
so
on
and
the
whole
community
to
contribute
to
that
and
help.
H
Okay,
yeah
and
the
goal
just
to
be
clear,
like
I
want
to
contribute
as
much
as
we
can
back
if
there's
like
auger
functionality
that
you
need
engineering
time
on
I'd,
love
to
bring
that
up
in
the
group
too
sean.
You
know
there's
engineers
in
there
and
there
we
want
they
want
this.
So
you
know
if
there's
areas
of
work
like
I'd
like
to
bring
that
too.
So,
maybe
when
I'm
sending
you
my
projects
for
comparison,
you
can
send
me
the
like
things.
That
would
be
helpful.
D
You
know
sometimes
like
I'm
generally
pretty
responsive,
but
sometimes
I
keep
issues
open
until
the
work
is
done.
So
some
issues
do
stay
open
a
while.
A
I've already got them down... oh, you're not in the spreadsheet. But I'm with Lucas: these are both metrics models. And in fact, in the notes you have it called "Open Source Project Health 101".
H
We can always change it. Okay, well, okay, anyway, I'll check in again in the new year. Thanks for your time; I have to take someone to soccer here.