From YouTube: CHAOSS Metrics Models Working Group 11/9/21 - 11/10/21
A
Welcome, everybody, to the November 9th/November 10th CHAOSS metrics models meeting. It's good to have everybody here. I wanted to give you a real quick update: Sean and I were at the Member Summit last week, and I think there was a lot of interest, at least as I picked it up, Sean, I don't know, but a lot of interest in metrics models. I think people are starting to see the utility beyond just individual metrics. I just want to reinforce that; I think it's really good work that we're doing here. I don't know if you kind of picked that up, Sean. I did.
B
I mean, I think a lot of the people who are putting metrics together at different open-source-driven companies are kind of doing metrics models in practice, and this is a way of codifying these, these ways that people are applying the metrics in reality, and, I think, providing another layer of usefulness at the CHAOSS project level. So yeah, I think people are pretty excited about this.
A
Yeah, cool, great. It was mostly just a comment from Sean and myself to reinforce that this is a good group of people. Good work, everybody.
A
So while I have Sean and Ragava on the line: one of the things that we're taking a look at is how to actually deploy the metrics models. We will get back to building out new metrics models, but one of the things we're taking a look at is how deployable these are in a tool like Augur. So, Sean and Ragava?
B
Okay, how's that going? I think we're getting started. Ragava is learning where the data is in Augur and looking at some examples.
C
Yeah, I just went through Elizabeth's model that Sean updated and looked at the API endpoints. I just fetched all the data, and I was analyzing that.
B
So we have a framework in the Augur project, or in the CHAOSS project: it's the augur-community-reports repository. The direction we're taking, and I don't know if you have a handy link to the particular metrics model that we're working on, but it's one that Elizabeth put together, is that we're using the endpoints that Augur already has deployed for these metrics to bring the discrete metrics that are part of the model into a single Jupyter notebook, and then we are going to dream up ways of making that metrics model digestible. So I think the creative part will be: what are the right visualizations?
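[Editor's note] As an illustration of the workflow just described, pulling each discrete metric from an already-deployed Augur REST endpoint into a notebook, here is a minimal sketch. The host name, repo-group/repo IDs, and endpoint name are illustrative assumptions, not the actual deployment discussed in the meeting.

```python
import json
import urllib.request

# Hypothetical Augur host -- a stand-in, not the real deployment.
AUGUR_HOST = "http://augur.example.org"

def metric_url(repo_group_id, repo_id, metric):
    """Build an Augur-style REST endpoint URL for one discrete metric."""
    return (f"{AUGUR_HOST}/api/unstable/repo-groups/{repo_group_id}"
            f"/repos/{repo_id}/{metric}")

def fetch_metric(url):
    """Fetch one metric endpoint and return its JSON records."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# In a notebook you would call fetch_metric() once per metric in the
# model (e.g. code-changes, issues-new) and collect the record lists
# side by side before visualizing them.
print(metric_url(10, 25430, "code-changes"))
```

The notebook then only ever sees JSON records, which is what makes the later "other tools could shape data the same way" idea plausible.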
B
What are the right ways to describe these things? And that, I think, is more a process of creation than a process of definition. We kind of have to think through: what is that metrics model?
B
What does it need to look like for it to be digestible? And probably one good approach would be to set as a target having a few candidate ideas, even if they're not driven by the data but just by design ideas, that we can present for next time. Ragava? Yes, definitely.
B
Yeah, and whether or not we have the actual data: I have it in my head that everything is already in Augur, but I think when Ragava and I go to do the work we may find we need to add something, or it's not shaped quite the way it needs to be and we need to reshape it. So.
B
Well, it starts, I think, by making a notebook available, and it'll start with Augur data. But in a perfect world, the concepts and the design will be abstractable and usable with other data sources, because you can get the data that Augur has in a myriad of ways. So the fact that we're using Augur endpoints that shape data in a certain way means that other endpoints from other tools could shape data in the same way and just use those notebooks, in theory.
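[Editor's note] One way to sketch that source-agnostic idea: the notebook depends only on the *shape* of the records, not on which tool produced them, so any callable returning records in the agreed shape can back the same analysis. The record shape here ("date" and "count" keys) is an assumption for illustration.

```python
def monthly_totals(fetch):
    """Aggregate counts per month from any source with the agreed shape."""
    totals = {}
    for rec in fetch():
        month = rec["date"][:7]          # "YYYY-MM"
        totals[month] = totals.get(month, 0) + rec["count"]
    return totals

# An Augur-backed fetcher and a flat-file fetcher would both satisfy
# this interface; the notebook cell above stays unchanged either way.
def example_fetch():                      # stand-in for a real endpoint call
    return [
        {"date": "2021-10-03", "count": 4},
        {"date": "2021-10-20", "count": 2},
        {"date": "2021-11-01", "count": 5},
    ]

print(monthly_totals(example_fetch))      # {'2021-10': 6, '2021-11': 5}
```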
B
I think making that sort of real and flexible, we're just going to aim for a proof of concept right now, but I think the long-term vision is that, in some form, these metrics models become concretely consumable on a shared basis, even if it's just against example data that's driven from Augur. I think it can.
D
It would be great if we could see some demo of the Augur deployment, maybe next week, sorry, maybe next meeting.
A
I did have a question. Oh, I was thinking too, this would make sense: you know how, in the metrics models that we have, one of our headers is Implementation?
B
What do I do with that? So, as a practical matter, the first iteration of this will be tightly coupled with an instance of Augur that has the data that the notebook is accessing. Okay, all right. I think that'll just be publicly available. Yeah, these API endpoints are already publicly available, and so we'll just be trying to use them in a way that we haven't before.
B
Okay, and we may determine we need to create a couple of new ones as we work through the process, which I can do for Ragava. Okay, and I look at it as: Ragava and I probably need to do two things. One is evaluate the data that we have against what we think the metrics model is, and then come up with the design concept that we want to present. Okay, and so I think it's a design activity more.
B
Now, and I think it's abstract for Ragava, and, okay, like, we know that it's not like the technical work is kind of done; this is really a user data design problem, a user experience kind of thing. Yeah, you know, Elizabeth put together a very thoughtful metrics model. I actually don't remember which one it is; it's in the notes. It's right there: okay, Inclusive Leadership.
B
Let's maybe see if you can show Ragava how to do that. Ragava and I will fall back on what we know works and get that done second, but if you can help Ragava get that put in place instead, then we can. I mean, certainly I like that idea of a deployment better conceptually. I just don't want to layer too many pieces on top of what we just signed up for; I'm going down the conceptual hole.
F
Then you just develop the Jupyter notebook, the Jupyter notebook code, whatever you have in Python, and I'll help deploy it for the testing. It'll be okay, that easy. Okay, yeah, all right.
A
All right, that's great, and I think just sort of seeing any steps forward would help me a lot, and I think it would help everybody.
A
The next thing on the agenda here is the community reports metrics model. You can scroll down a little bit, Sean. Just keep going.
A
There you go. Oh, okay, so this had come up last time: that we develop, and I'm glad you're here, Sean, that we develop a metrics model based on our community report. This was, if you recall, I think Elizabeth's comment, along the lines of: maybe somebody just wants kind of a high-level overview of their community work. Remember that comment at all?
A
So there were a few issues, though, that came of this. The way that we've been building metrics models to date is that we've been using existing CHAOSS metrics and using those to build the models. Not the... I think, isn't that the bottom-up approach, not the top-down approach, where we...
B
Right, and so the Evolution working group is actually going to push out calling them "code changes" commits, because that's what's happening. The Evolution working group is going to add "commits" to the existing name, okay, sort of as reverse compatibility with our obsession, for some reason, of not using the term "commit" initially, and the fact that what we're talking about is commits. And so it's a little confusing when you see "code changes": what is that? Well, it's a commit. Why don't you call it that? It's a long story.
A
So, "code changes", okay, got it. And then I guess one of the things for you, Sean, is: I can start building up this metrics model, but I was kind of wondering, because you had done so, for a lot of people that don't know, we had built a metrics model. I don't know how I scroll down, but you've just, yeah, you've designed this already.
B
These are the people in the neighborhood, these other things, and so some of these actually have... I mean, this makes it easier. The way the community reports metrics model was conceptualized was: it was essentially a series of reports on discrete metrics that together told a story. And so Augur built.
B
Some visualization endpoints that actually brought together, like, the fly-by and repeat contributor counts per month, for example. That's actually contributor data combined with the things that the contributors did, like commits and comments and all of the various kinds of activities that someone can perform in a repo. So it's actually an integration of several metrics, the fly-by and contributor counts per month: it would be a combination of contributors and contributions sliced by month.
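[Editor's note] The integration described here, contributors combined with their contributions and sliced by month, could be sketched as follows. The input shape (contributor ID plus "YYYY-MM" pairs) is an assumption for illustration, not Augur's actual schema.

```python
from collections import Counter, defaultdict

def flyby_and_repeat_per_month(contributions):
    """Count fly-by vs repeat contributors per month.

    A fly-by contributor has exactly one contribution overall; a repeat
    contributor has more than one. `contributions` is a list of
    (contributor_id, "YYYY-MM") pairs (an assumed shape)."""
    lifetime = Counter(cid for cid, _ in contributions)
    per_month = defaultdict(lambda: {"flyby": set(), "repeat": set()})
    for cid, month in contributions:
        kind = "flyby" if lifetime[cid] == 1 else "repeat"
        per_month[month][kind].add(cid)
    return {m: {k: len(v) for k, v in kinds.items()}
            for m, kinds in sorted(per_month.items())}

# ana contributed twice (repeat); bo and cy once each (fly-by).
data = [("ana", "2021-10"), ("bo", "2021-10"),
        ("ana", "2021-11"), ("cy", "2021-11")]
print(flyby_and_repeat_per_month(data))
```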
B
I would have to go back and look to remember quite exactly what it was, but it included new issues, it included new contributors, yeah, new PRs, new commits, and then comments on issues and comments on pull requests.
D
Do we have any metrics to know whether an issue is active, or supported, fully supported, by people in the community? I mean, if there are more than one or two comments around this issue, or there's only one response, to show whether this issue attracts the most attention from people.
B
Basically, right. So for the fly-by and repeat contributors, we were explicitly teasing out people who came in and made a single contribution and then didn't make a subsequent one, versus people who were repeat contributors. So there was a separation of those. And right now I'm sort of looking, I'm trying to look it up, but I don't...
A
I agree, no. Actually, that would be a good Evolution metric. Okay, so, Sean, this is helpful, thank you. Sure. Or was there anything else that you were going to add in here?
B
This is one example. We still have "drive-by", apparently, on some of these, but, so it wouldn't be all of them, but we have a visualization endpoint that plots all first-time contributors per quarter, all repeat contributors per quarter, drive-by first-time contributors, and second-time contributors, and there's a caption for each. And so in some cases, because of the prior efforts to create these standardized community reports, we have pulled together metrics that visualize things like this just automatically, so you don't have to build anything.
B
It's there; that's the, okay, this is what it was, so part of it was generated. In some cases, for example, we have a heat map visualization of mean duration of pull requests. Well, if there aren't very many pull requests on a repo, the heat map is sort of meaningless. It's a visualization intended for high-volume projects.
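[Editor's note] The low-volume caveat raised here could be handled with a simple guard: compute the mean PR duration per month, but decline to report a number when the sample is too small to mean anything. The threshold and input shape below are assumptions for illustration.

```python
MIN_PRS_PER_MONTH = 5   # assumed threshold; tune per community

def mean_duration_by_month(prs):
    """Mean pull-request duration (days) per month, or None where the
    month has too few PRs for the mean to be meaningful.

    `prs` is a list of ("YYYY-MM", duration_days) pairs (assumed shape)."""
    buckets = {}
    for month, days in prs:
        buckets.setdefault(month, []).append(days)
    return {m: (sum(d) / len(d) if len(d) >= MIN_PRS_PER_MONTH else None)
            for m, d in sorted(buckets.items())}

# Five PRs in October (enough), one in November (not enough).
prs = [("2021-10", d) for d in (1, 2, 3, 4, 5)] + [("2021-11", 9)]
print(mean_duration_by_month(prs))  # {'2021-10': 3.0, '2021-11': None}
```

A heat map renderer could then simply skip the None cells instead of drawing a misleading picture for a small community.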
B
I mean, yeah, for a high... so some of these reports are more or less useful depending on the volume of activity in a community, you know. So a smaller community may use different metrics.
A
I think the intention of this metrics model and this community report, if they're kind of the same thing, was that it's to get people that want to know more. If you recall, like, yeah.
B
Okay, so then put the PDF in there, because it's probably a useful reference to keep in context. Oh, thank you. Is that the old report? Yeah, that's the SaltStack report. We just looked at it.
A
I do too. All right. I had a question for folks here, as we're kind of getting into our metrics models that I think are getting pretty close to release.
A
So it's kind of this bottom thing. I think last time, if you click on any of these, Sean, can you kind of click on, sure, like this one, maybe, like all of them, yeah. That's the one that Lucas was working on, yep. And then can you click on, just open all this. There's a new version of that, for what it's worth. Oh, can you put it in?
A
Yeah, I was just going off the old minutes. Lucas, thanks.
E
Yeah, sure. So I reset the formatting of this without making any meaning-level changes, just in accordance with our conversation in the last meeting.
E
So everything here should reflect consensus, with one exception: if you scroll down a little bit, you'll see that I notated, I said I couldn't find any metrics for these things, and I don't know if it's even true that they don't exist. You have to go a little bit further. Go.
B
I agree. I think, for creating an issue, there is a metric for that in the Evolution working group. Tell me more about "substantive comment"; I think we have a metric in Common for counting comments. I'd have to look.
E
The idea on a substantive comment, I guess, was really any comment. I think when somebody makes their first comment on an issue, it can easily become part of, you know, becoming part of a...
B
That's right, although Sean has just pointed out that, I'm saying, the last two are clearly not; the first one might be. Yeah, I know that we've included these in the Augur community, er, these CHAOSS community reports, but I don't know if it's actually defined as a metric.
A
First thought in terms of how to capture these: let's just say, like, triaging issues, let's pick that one. Yeah. I'm going to put it in the chat, and I think most people are familiar with this, the spreadsheet.
E
Yep, oh, I'm sorry. Okay, this could be a good place to look up metrics, is that what you're thinking?
A
So if we were, as an example, to pick on triaging of issues, do you think that's a metric that... and if you look across the bottom, see where it says Common, Value, yeah, Evolution, yeah. Like, triaging is probably not in DEI, yeah. No, no.
A
Do it right in the middle or something. I think you have to add a row off the one that has the stuff in it already, yeah, like that one, because then it'll pull down, yeah, that thing. So this would be "Considering" at this juncture, right? And...
B
...know what happened, I helped. And then the other one that was added was improving documentation, as its own metric.
B
And so, actually, interestingly, there is a Types of Contributions metric that, I believe, if memory serves, is the kitchen sink of...
B
Yeah, there's, this is a metric. Let me find it really quick here.
B
It's intended, I mean, basically, on one level I think it's intended to cover all of the things that you could possibly do as a contribution to a project, and documentation authorship is one of those activities. And so that's why I call it kind of a kitchen sink metric: we've defined this discrete set of types of activities, and in theory each of them could be counted in a repository.
E
I agree. I mean, I think all of these would count as kind of a new-contributor funnel kind of thing, but at the same time, because this is so broad, it's hard to build on top of.
B
No, it's, yeah, it is. It's hard to say that I've implemented this metric, except to say I have a way of counting each of these discrete things. And maybe this inventory should, like, I think, actually, for example, bug triaging: we just said that should be its own metric, and I think what we're saying is improving documentation is another one that we should create as a discrete metric.
B
But I think this is a place where we could add the second one: improving documentation.
E
Just to digress briefly, I think that this, discovering valuable new metrics to create, kind of fits with the earlier conversation on data visualization and the Augur-related model, in terms of the value of the metrics models group: in a way, this group is about CHAOSS dogfooding its own metrics, and so it's kind of valuable in that sense.
B
So when it comes to improving documentation, we might cross-reference these documentation characteristics of usability, discoverability, accessibility, if we're trying to help new contributors understand what we mean by improving documentation, because it's...
E
I wish these seemed closer in spirit. I would say that the need in this context is to know about people who are doing work on documentation: how many people are doing any sort of work on documentation, right? You're trying to quantify contributor activity on any sort of documentation, regardless of type; usability, discoverability, and accessibility would all be orthogonal.
E
...of them, you get it. I would say that I leave it up to the consensus of the group. I created this, you know, to follow up on our conversation about figuring out how to do metrics models, by brainstorming in them and just trying and seeing what worked.
E
So I think it may be valuable just to talk about the format here, what we learned about the format, and what worked and what didn't. And if people want to call this cooked enough to use, that's good, but if not, I will not take it personally in any way.
F
And I also personally feel like this is good; like any model we develop, it'll keep on evolving over a period of time. So at this stage I really feel it is good. The only thing I have with this, which was kind of a debate we had in my group, is whether we are having a persona here.
A
So I'll make a few comments on this, kind of at the top. First of all, dating back maybe six weeks ago, when you said "let's just start building these and see what comes of it", that was spot on, because look where we are: we have about, I think, three metrics models that are really close to being done. Yeah. And we've learned a lot about how you set that up as community manager, you know, I mean that phrase.
A
I think the second thing that came out of this, that we just talked about, was metrics that are missing that we may want to... feeding our own dog food? Was that it? Eating our own dog food, no.
A
...us in that regard, so I think you were spot on with that. If you scroll down a little bit, Sean. I do.
F
I think the description here is more of an objective thing, or some explanation for a user who wants to implement it. So maybe this can be a part of "who should care", or maybe the objective of this model can be that description.
B
You know, how overall contributor growth is happening or not, yeah, and then how you are doing with new contributors, which is a separate slicing of some of the same data. And things that influence success at each stage, I suppose: for new contributors, that's easy, did you get them to... are they coming back and making a second contribution at some rate that's higher than it was before, and the overall rate of contributor growth?
B
I think this is where some of those missing metrics, like bug triaging and documentation improvements, become... I think these are, I don't know, tell me if I'm off here, Lucas, but those are sort of practice-based things that will help, you know, if you have specific problems that might be contributing to a loss of contributors.
E
I wonder: is there consensus to rename "Implementation" to "Objectives of this model"? What are people's thoughts on that?
A
A subset of "why you should care", or a header in between the metrics and "why you should care", and then we actually have a section called "deployment", which is where this thing... Or maybe, you know, because we have this header called "why you should care", maybe we can keep it a little bit more fun, like "metrics model living in the world" or "metrics model in practice", you know what I mean, like "where you can find this model in the wild" kind of thing. So.
B
I mean, I guess they are also objectives, but for me, thinking about this model, they actually direct me toward certain ways of presenting information that are manifest in the model, that aren't obvious just from the enumeration of the metrics included in the model.
A
All right, we need to wrap this up. So, oh my gosh, yeah.
A
That'd be great. We're kind of out of time; in the interim here I'll make some comments on this. Yeah, very...