From YouTube: CHAOSS Value Working Group 5/5/22
Description
Links to minutes from this meeting are on https://chaoss.community/participate.
B
Yes, so I've gone through and I've looked at all of them, but I've actually held off on creating the issues for them, because of the larger question I have about recommending metrics be turned into metrics models.
B
Yeah, that one was already decided to move into a model, but when we look at Academic Open Source Project Impact, for example, that one is a model.
B
From a working group perspective, we are in the middle of a metrics freeze, right? We've all agreed not to create any new metrics, so doing the modeling work in Value could be what we use our time for, I suppose.
D
It's on our list, and so if we were able to bring a working model to the group, I think that would be helpful.
D
Yeah, that is a possibility. Personally, I think we should be including implementations. If we're going to be releasing metrics models to the world, we should take our time and think about them as working instances that people actually engage with.
D
That said, there was an agreement in the metrics model working group that if there isn't an implementation, it could still be released, with an implementation to come. But personally, I wonder about the utility of that, and again, my concern would be that we're just kicking something down the road: we have a metrics model but no implementation, so the utility it provides for an audience is pretty low.
B
Yeah, I suppose I get that. In my mind, the creation of the metrics model itself is a bit of a discussion, and in that discussion we can land on: well, maybe this metrics model that we've just created doesn't have utility, and maybe there isn't a point in moving it toward implementation, and I think that would be okay. Or, in other cases, that metrics model may be suited for an implementation in a number of different ways, right?
B
It could end up as a badging initiative, or it could end up as part of Augur labs, or it could end up in whatever implementation the metrics model group is creating, which I'm still not completely sure what those are, because it kind of harkens back to when we first started, when we were creating the notebooks, which we abandoned.
D
We haven't abandoned the notebooks.
C
No, we're still using notebooks. In fact, the two metrics models that have been implemented are both implemented in Jupyter notebooks. I think we do have aspirations of producing metrics models using more, I guess I'd say, user-friendly tools, or tools with a lower barrier of entry.
C
Yeah, and I think maybe, Kevin, what you're thinking of is that at one point the Evolution working group tried to implement every metric in complicated Jupyter notebooks and handle all the deployment stuff.
B
Yes, that is what I'm thinking of.
C
We don't do that anymore, though, and the approach that we take in the metrics model working group is really far less dependent. The barrier to entry is significantly lower for running those than it was for the ones in the Evolution working group.
D
Yeah, to me, in the metrics model working group, we need implementations to be like the badging program, where somebody fills out a form and pushes a button, and then they get the things they need. And the notebook is definitely on the path toward that. But then, to your point, it still requires somebody to do some software installation, which is usually a pretty big barrier for a lot of people.
B
Well, I think I can make that list. When we started this, I'd just been hesitant to make any recommendations, because there are questions around this and I wasn't sure how everyone wanted to proceed. It's very legitimate to say, you know, hey, maybe we shouldn't try to convert all of these into models. So I agree with what Matt said there.
C
But I do think that we could update the metrics in place as metrics, and identify ones that can be converted to models at some future point, potentially without engaging in the full process of analyzing them as models. Because for the metrics models working group, we've had to create metrics in some cases in order to build the metrics model, because there wasn't an existing metric.
C
So when we do these conversions, my expectation is that metrics are going to fall out of them that are not presently in place. So start with: let's just review the metrics and make them better in place, and if we think something is ultimately better as a metrics model, make a note of that somewhere, perhaps in the spreadsheet as a "metrics model candidate" column or something like that.
D
A question, too: when we talk about a metric versus a metrics model, I think there's always going to be some gray area between them, I would suspect.
D
You know, maybe some metrics are clearly super isolated things, just the lowest-level atomic metric, and that's on one end of the spectrum. Then, on the other end of the spectrum, we have the DEI badging initiative, which is clearly a model; it's a deployed model. But in the transition between those two there has to be some area of ambiguity, and where do we draw the line on saying, sure, this could maybe be a metrics model, but I'm kind of comfortable with it?
D
Just being, you know, this kind of robust metric, perhaps. I don't know where that line is.
E
So we're going to take something like psychological safety, lump together all of those numbers from a survey, and put a score or some kind of numerical value on it at the end, just for simplicity's sake. I think that's kind of what you're talking about, right, Matt?
B
I think it depends a little bit on what the definition of psychological safety is, right? If it has a more nuanced meaning, then it probably is a model. But then there's a metric that's measuring three different things, and the things it's measuring are all kind of very similar.
C
I think the metrics that we've produced represent a valuable asset for a lot of open source software folks. So I think it's important to keep that foundation, upon which we're building the metric models, in place while we build the metric models and discover what they actually are through the process of building them. That foundation of metrics is the anchor, the stable point, the lighthouse, the granite, whatever, that the whole community is built on.
B
I think that's a fair statement. I would add, though, that I think the metrics working groups do need to have a little bit more focus on atomic metrics in the future, because that is the foundation on which everything else is being built, and if we're skipping that foundational layer and jumping right into metrics models, then I think we run into this situation where we continuously have to figure out what we're doing.
A
I think it's not like that. We have built the foundation; we have the metrics. But now, over time, we learn, and we realize that we need a model. So in that process we are looking back at the foundation, and in that process we also observe that we are missing something, so we create atomic metrics. We have the foundation, we are moving a step ahead, but we are also coming back and looking at the foundation we built.
C
I think it's the case that some working groups have really defined the core 80 or 90 percent of metrics that will ever be needed under their category, and then they're kind of done, and so they really needed to review and revise the ones that they created early on and focus on metrics model development. But that seems like a natural evolution that doesn't require revisiting our foundation.
C
I think if we looked deeply into a lot of them, we would see a lot of things that we would ideally create as metrics models, and a number of metrics would fall out of them. But in terms of keeping the community healthy, doing all of that at once would make a lot of stuff non-functional. So a healthier approach would be to update and review the metrics.
B
No, the disagreement that I was referring to was that I don't believe the working groups have spent enough time on those foundational atomic metrics.
B
I think they've jumped ahead and started working on models without really addressing those foundational metrics, and I think that's been a kind of common, recurring theme with the group. And now that we're actually doing metrics models, it's becoming really obvious, because when we go and look at these metrics, they are models.
D
This came up just the other day, because I agree that the inclusion of metrics models has muddied the water in terms of how we think about these things. For a long time, the conversation was around having atomic metrics and composite metrics. Do you remember that? Yeah, yep.
D
I don't think it was that we hated atomic metrics. I just think that was part of the conversation before metrics models showed up; it's kind of a timing issue. And now metrics models are here, and it certainly does create some overlap with focus areas and composite metrics. So maybe the question is: how do we most appropriately untangle these? Because we're here now. We respect how things showed up, and it's not against any working group; we're here.
D
Has Project Recommendability been released? Yes. And then, yeah, okay, then we mark them in here as, you know, like, whatever.
A
Yeah, what I'm proposing, then, is: maybe in the next meeting, Kevin, since you have been reviewing this, you could bring that list: these are proposed for a model, these need to be reviewed, and these need a minor or major revision. Then we can prioritize, take one, and move ahead like that.
D
So let me, yeah, so then I'm wondering, for example: what is this one, Project Popularity? Where's that down here? Okay, so it's already there, that this would be a candidate for a metrics model, and then...
D
Okay, I still believe there's utility in taking a look at this, to Sean's point, just from an objective perspective. Some of the review comments are like: this looks okay, but the formatting around lists is screwed up, or a figure is too hard to read, or something.
A
So right now, if you look back in the meeting minutes, we have started working on this particular metric as a model, and we have not removed it yet. Once the model is ready and everyone agrees, and we move it to the working group, then we can happily remove it. That would be nice.
E
My only question about this is: we're deciding we probably need to define each of these numbers, one through thirteen, with the exception of the ones that we've already done. How do we know that other working groups aren't also working on some of these? Like followers, that seems like maybe, or clones; maybe that might be in Common.
B
Clones has been defined in Common, yeah.
A
To answer your question, Elizabeth: many of us are in many of the different working groups, so we do know, somehow, that this is defined and this is not, and then we propose, okay, let's look at this one, or let's continue developing. That is not written or outlined anywhere, but just our presence helps with that.
B
And I suppose we can look at the table to see if it's being worked on, and we can look at the releases to see if it's been defined.
B
Although the table has the released ones as well, so we could just look at the table. Yep.
D
So then "ready" would include attention to any atomic metric, like it needs to be done before we remove it.
B
In the previous model meetings, we had talked about having fewer metrics in these models to begin with, so like four to six. Four to six metrics in a model is the most I would ever want to see. I don't know what other people think on that, but 13?
A
So how about we have a two-step release process: one, we develop the model and release it, and the second step is we release the working implementation of the model.
D
So once this is... let's pretend that we get this list down to six and we have links to all the things that we want, you know what I mean, all the metrics are defined. Then we get it into the metrics model template. It's not implemented, but we get it into the template, and then we tell the metrics model team, like, hey, here's one.
B
I think that's the process as well, but it would obviously be replaced with some of these metrics that we've had.
E
Regarding implementation, this is a little to the side, but I think yahoo had brought this up kind of as an offhand comment: it would be great if we used CHAOSS as the test case to implement these. I would like to do something like this on the CHAOSS project.
B
I think that's a really good idea, because we could do surveys for some of the more qualitative models, which would be hard for us to do otherwise.
B
Well, I just worry that we're moving past the metrics and the models too fast into the implementations. That's my worry; I'm not saying that's what's happening, but my worry is that we're doing that. So I kind of want to slow down for a second, take a peek, and make sure that we know what the metrics are.
D
Yeah, and I do think that has been a conversation in the metrics model working group, and I think you were at that one, Kevin, giving the big thumbs up, which was: let's do these one at a time and slow. Let's not have a release cycle like the metrics, where we sometimes release 17 or whatever it might be in a six-month period; a six-month period might entail two metrics models, maybe one, but it's good: it's end to end, it's articulated.
A
Maybe then, for the next meeting, are you, Kevin, going to bring the list of which ones need to be reviewed?
B
Matt and Elizabeth have created an issue format for going through these reviews, so I'll have that done by the next meeting. I'll have recommendations for changes based on keeping them as metrics, and I'll also have recommendations on whether or not I think they should be turned into a metrics model.
C
Project Popularity I would separate from all of the discussion that we've had here, because that was the original metric where, during its development, people, I won't say who, said: it's a composite metric, it's just blending a bunch of metrics together; this is not the same thing as the other metrics. Of all the metrics in Value, this one is the most obviously a model. It's clearly incorporating many different metrics that need to be defined. Yep, yeah, so we can choose to, like...
B
So it wasn't, it was interim released? I mean, which one? This one? I forgot which.
B
The conversion rate metric, yeah.
D
So the metrics models are clearly a value add, and things that I think organizations and communities and people would like to see implemented. We hear this time and time again. The atomic metrics themselves provide some utility, but probably not a lot.
B
Also, the work of building models in the working groups does take some of the pressure off the metrics model working group. If Value is creating a model for Project Popularity, that's a model that the metrics model working group doesn't have to do, right.
B
And if implementation is strongly encouraged but not required, then that implementation could happen at a later time or be done by another group entirely. Perhaps the implementation happens in Augur instead of the metrics model working group, or it happens in GrimoireLab instead of the metrics model working group.
D
Any suggestions on community structural changes? Let's try to... I just don't want to kick this down to another group; we need to take a look at what structures we have in place right now. And to your point, the points being raised about building metrics models within the working groups, even if they're not implementable: I totally agree that takes pressure off of that skinny end of the funnel.
C
By socializing, I mean maybe there's a podcast about it, maybe there's a blog post about these things; this is more than one blog post, more than one podcast. I think a lot of thought has occurred in the metrics model working group and each specific working group around this revision of existing metrics, and maybe if we just socialized it and made it more of a conversation, some of the things we're wrestling with in this meeting today might resolve.
B
Thank you for having the conversation. I apologize if I'm pushing too hard; I have that habit sometimes. When I'm not sure they're listening to me, I'm gonna push harder.
B
Generally speaking, I like what I've been hearing, and I like the discussions that are happening in the metrics model working group as well. And just to be clear, I don't disagree with the goals or what we're doing; I'm just really at the "can we slow down and make sure we understand parts of the process" point. That's where I am.