
From YouTube: CHAOSS Metrics Models Working Group 12-7-21
I will share my screen here. A couple of things I just want to chat about today. This is just so Huey knows, and Lucas, you know too: this is our last meeting until 2022. I'm not entirely sure if we start this one up the week we get back or the week after, you know what I'm talking about. Yeah, I'm checking whether it's the tenth or if it's the later one.
I think we can follow a release cycle, or a release cadence, that's similar to the metrics. So as we have a new model being developed, we can track it in this spreadsheet, similarly have remarks about that potential model, and then also have links to the metrics model as we're working on it.
The process of work is that when something is in progress, so like row 12 or any of these, like 17, 18 and so forth (13 is an outlier on this one for a second), we just work on it in a Google Doc. So the Google Doc is our work-in-progress platform. We don't do works in progress on GitHub or somewhere else; we can use a Google Doc or we can use some other shared document tool.
So any of these: that namespace, github/chaoss wg-dei, is kind of an important space that we don't work in. That's only where our things go once they're done. Of course we can do a pull request if somebody has an issue, you know what I mean; we can change things in there, of course, but generally speaking, yeah.
It's interesting: community growth and community longevity create this proliferation of documents, and they end up just ever so slightly splitting from each other over time, and they need to be brought together. Okay, all right, cool. So then, in terms of the metrics model tab for this group: it's different, right, this is different than the metrics, because the metrics models are slightly different. Yeah.
When it's released, I've been following a simple model: when it's released, it's version one, and if we modify it, it becomes a version other than one, all right. And right now, when anything is in progress, it's not given a version. The reason we wanted to version things, I think, is because as we do updates, we just want to be able to track that something has been updated.
This is like a 1.1 of this metric. Sean had made a recommendation that we potentially have a working group home for a metrics model. It's becoming pretty common that, say, the Risk working group would be developing a metrics model, or the DEI working group. Maybe we could talk about that, Elizabeth, but the DEI working group would be developing a metrics model; we actually talked about one just a couple of days ago, on Monday.
Fair. Shaun, did you have a comment on this? Yeah, it's a really good point that it will be hard for individuals approaching the project to distinguish between the models and the metrics. I don't think that prevents working groups from developing metrics models, or proposing them and having this working group help them refine it.
Now then, all right, so what's the name, what's it called? I mean, we're calling it Welcoming, but it's Elizabeth's metric that she proposed in one of our prior meetings, and it was "welcomingness", or welcoming, whatever; it's the best phrase I could come up with. I don't know if that's what Elizabeth intended.
There we go. So these are the metrics at the top here. Ragava and I have been calling it the Welcoming metrics model, or the Elizabeth model. Elizabeth identified activity, community culture, licensing, stability, and code-related metrics, and Ragava and I built a number of things, either leveraging assets that are in Augur or, in Ragava's case, building things related to the different areas. We have some holes, which I can talk about a little bit, for example inclusive leadership.
I think that's one we're going to have to borrow from the DEI working group, and we don't have a quick way to measure it, so we'll have to figure out how we incorporate it into a model. If a model is something that people run, it may just have to include instructions on how to (good point) deploy it.
Yes, please, that would be great, thank you. And just for Emma, I don't know if you're hearing us, but basically, maybe a few weeks ago or a month ago, we spent some time in this meeting brainstorming on different metrics models that people might be interested in, like how they could draw metrics together in ways that would be meaningful in different contexts. And what Sean and Ragava have been working on is... so we can...
We can specify the metrics models, and that's great. Cool, no problem. So we can specify those metrics models, like in those documents I was showing earlier, but Sean and Ragava have actually been doing work on deploying the metrics models. So if people want to see this data, how do they go about doing that? That's just a little bit of background.
Then we have issue response time. In many cases there's an existing Augur endpoint that can deliver the data; in this case it's kind of a new way of representing the data in order to make this kind of visualization, so we'll roll it in as a new endpoint, so that this big query isn't in the notebook and you don't need database access to get to it. But this is issue response time. The example project is Augur, and this just shows that; it may get a little bigger, that's all.
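
As a rough illustration of what an issue response time calculation like this involves, here is a minimal pandas sketch. It assumes you already have issue and comment timestamps in DataFrames; the column names are illustrative and are not Augur's actual schema or endpoint.

```python
import pandas as pd

# Hypothetical input: one row per issue, one row per comment.
# Column names are assumptions for illustration, not Augur's schema.
issues = pd.DataFrame({
    "issue_id": [1, 2, 3],
    "created_at": pd.to_datetime(["2021-01-04", "2021-02-10", "2021-03-01"]),
})
comments = pd.DataFrame({
    "issue_id": [1, 1, 3],
    "commented_at": pd.to_datetime(["2021-01-05", "2021-01-20", "2021-03-15"]),
})

# First response per issue = earliest comment on that issue.
first_response = (
    comments.groupby("issue_id")["commented_at"]
    .min()
    .rename("first_response_at")
    .reset_index()
)
merged = issues.merge(first_response, on="issue_id", how="left")
merged["response_days"] = (
    merged["first_response_at"] - merged["created_at"]
).dt.total_seconds() / 86400

print(merged[["issue_id", "response_days"]])  # NaN means never answered
```
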
That's kind of a design decision, and one of the reasons we'll make it available as a Jupyter notebook with Augur endpoints is so that people can play with it and just apply it to their own Augur instance. Ultimately, our goal in the coming year would be to have examples from GrimoireLab as well.
It's not there yet, but it will be; I'll make this available soon. Right now there's not a link, and that's just because there are database credentials in it. I have to convert a few things to Augur endpoints first, so that'll happen before we meet next for sure; I would guess in the next week we'll get that done. Cool, thank you. For community culture, there's code of conduct, and really, whether there is a code of conduct or not is kind of the indicator.
So GitHub has metadata around code of conduct files, and it's part of how you make your project searchable. If you declare a code of conduct file, then it's collected as part of the metadata, which is why, possibly, this is wrong. Although this link should work, right? It was developed at... yeah, oh yeah, that's the right one.
Here, there we go, there: that's exactly what the query returns, and apparently the whole thing doesn't get transformed into the Jupyter notebook very cleanly; it leaves a piece out. So if a project has one declared in the metadata, it's gathered. Are there other places we should be looking, Emma?
Yeah, yeah. So it's if a project has declared that they have a code of conduct, and GitHub kind of motivates projects to do this; they tell you what you need to do to be a project that is up to community standards. This is something that GitHub started maybe four years ago now, at GitHub Universe, I think in 2017.
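
For anyone who wants to pull that same signal directly, GitHub's community profile endpoint reports whether a repository declares a code of conduct. A small sketch follows; unauthenticated requests work for public repositories but are rate limited, so add a token for any real collection.

```python
import requests

def declares_code_of_conduct(owner: str, repo: str) -> bool:
    """Check GitHub's community profile metadata for a declared code of conduct."""
    url = f"https://api.github.com/repos/{owner}/{repo}/community/profile"
    # For real collection, add an Authorization header with a token to avoid rate limits.
    resp = requests.get(url, headers={"Accept": "application/vnd.github+json"})
    resp.raise_for_status()
    files = resp.json().get("files") or {}
    return files.get("code_of_conduct") is not None

print(declares_code_of_conduct("chaoss", "augur"))
```
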
Yeah, and yeah. So that's inclusive leadership, the one I mentioned at the outset that we have to decide how to handle. License coverage is one that is an Augur endpoint, and just for convenience I copied the image from the main page that hits that endpoint, instead of showing you the pure JSON, because I had something there. For license declared, we also have an Augur endpoint that delivers a bunch of data, but we need to process it in the metrics model.
Yeah, okay. And this is for badging, so CII Best Practices badging; I should really call this CII Best Practices badging status. Most projects, candidly, are not best-practices badged, and I won't run to the top to make that markdown cell play nice, but basically we show that passing status is met by this particular repo. I was working on making it a pretty green or yellow color, but I didn't get that finished.
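
The badge data referred to here lives in the CII (now OpenSSF) Best Practices BadgeApp, which also serves project records as JSON. The sketch below is an assumption about that API rather than a confirmed recipe: the project id is hypothetical and the badge_level field name should be checked against the BadgeApp API documentation.

```python
import requests

# BadgeApp instance for CII / OpenSSF Best Practices badges.
BADGEAPP = "https://bestpractices.coreinfrastructure.org"

def badge_status(project_id: int) -> str:
    """Fetch one BadgeApp project record and report its badge level.

    The '.json' project route and the 'badge_level' field are assumptions;
    verify them against the BadgeApp API documentation before relying on this.
    """
    resp = requests.get(f"{BADGEAPP}/projects/{project_id}.json")
    resp.raise_for_status()
    return resp.json().get("badge_level", "unknown")

# Hypothetical id; look a real project up on the BadgeApp site first.
print(badge_status(1234))
```
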
And whether they contribute: so this is first-time contributors per quarter. This is repeat, first-time contributors per quarter. This is all second-time contributors per quarter, and this one should be fly-by; apparently something snuck past my goalie. It's "fly-by" in the narrative description below, but it's still "drive-by" in that title, so I have to fix that.
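
A "first-time contributors per quarter" count like the one being described can be derived from any plain contribution log. Here is a minimal pandas sketch with assumed, illustrative column names rather than the actual Augur tables.

```python
import pandas as pd

# Hypothetical contribution log: one row per contribution.
contribs = pd.DataFrame({
    "contributor": ["ana", "bo", "ana", "cy", "bo", "ana"],
    "created_at": pd.to_datetime([
        "2021-01-15", "2021-02-02", "2021-04-10",
        "2021-05-01", "2021-07-20", "2021-08-03",
    ]),
})

# A contributor's earliest contribution defines the quarter they are "new" in.
first_seen = contribs.groupby("contributor")["created_at"].min()
first_time_per_quarter = (
    first_seen.dt.to_period("Q").value_counts().sort_index()
)
print(first_time_per_quarter)  # e.g. 2021Q1: 2, 2021Q2: 1
```
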
By benchmark, you mean like comparing it with other projects? Yeah, I think so. This is always one of the tricks: people do need to compare projects to interpret them, and I think the point that you raise, Lucas, is a really good one. Do metrics models need to have some of those comparisons enabled by default? Should we produce an example that lets you see a couple of projects side by side?
We have all the ones that are merged and not merged, so you can see that in 2020 Augur merged 360 and didn't merge 78, out of 438, and then in 2021 we have 267 merged and 68 that weren't merged. Then we also look at the 20 slowest to be merged, both those that are accepted and merged and the slowest that are rejected. So you can see we had 88 accepted slow ones in 2020 and 38 accepted slow ones, and so more of the slow ones got rejected in 2021 than in 2020.
On this, I think we've gotten some feedback here and we know what isn't done yet, and I think this adding of context is really important. So I think between now and when we meet again we'll have something that's publicly shared that people can look at. Right now, you know, for you, Emma, I think...
The teams were Dapr, Babylon.js, .NET, a bunch from .NET, and then some researchers and someone who's in charge of GitHub Sponsors, just to say what we as a group can commit to work on. So I had everyone list the things they're interested in and why, and then I synthesized that into two groupings, which I'm calling... maybe it'll actually be better, do you mind if I just quickly share my screen? Because then you can... No, you don't have to.
You know, by type, org, region; these are just things that I personally made up, so maybe there are other categories that exist. Things like what you're already talking about, again so exciting, like the code of conduct and finding it, and usage. Usage comes up a lot; I don't know if there's anything there. And then project sustainability is the other category.
So that's maybe a tougher one, but definitely a bucket, right? Like the GitHub Sponsors folks want to make sure that we're supporting projects and helping sustain them, but that there are certain things we look out for: that they're an inclusive project, but also, we want to see if there's burnout risk somewhere, because we want to send folks money there. Yeah, so anyway, this is just the set of things that I came up with.
How many, like, git dependents turn up, things like that; so I just documented what they already had. So hopefully we can either validate what you have or contribute these to what they're doing, but my next step is to start to fill in the blanks on these, and what I would love to do next, when I bring everyone together again, is show them some of that Augur work and connect it with either the open source 101 or the sustainability piece, yeah, because there's the Augur work.
Yeah, and I know that for the For Good folks, so GitHub and, at Microsoft, the GitHub or the Open Source For Good folks, that's one of the things they're trying to evaluate, both from, like, helping identify projects that are, you know, bad, I guess, and those that need help. But yeah, you need scale to do that; going through each repo and asking, oh, is there, you know, this, like...
It's hard, and most of those, I know, like in the case of the psychological safety metric, it's mostly just surveying the community members for a lot of those. So I mean, there is a way to get some data around it, but it's tricky because you're relying on the communities to survey their own members, and yeah.
And I almost wonder if there's an iterative approach to these, a little bit like: you would run something like Augur to identify all of those that had a code of conduct, and then automate as much as we could, and then get to the point where we started to ask questions. Yeah.
So I think you're breaking ground here and it's really valuable. For example, your project sustainability list is kind of a model in itself, and maybe it would be helpful to work on that with the group as a whole: talk about the safety stuff, how we measure burnout risk and so on, and get the whole community to contribute to that and help.
Okay, yeah, and the goal, just to be clear: I want to contribute as much as we can back. If there's Augur functionality that you need engineering time on, I'd love to bring that up in the group too, Sean. You know, there are engineers in there and they want this, so if there are areas of work, I'd like to bring that too. So maybe when I'm sending you my projects for comparison, you can send me the things that would be helpful.