From YouTube: CHAOSS Value Working Group August 11, 2022
Description
Links to minutes from this meeting are on https://chaoss.community/participate.
B
Here, no, I'm still really trying to. I work at a startup, so my attention is very fragmented, but I have been taking all the resources you shared with me last time and really trying to figure out how best to integrate them into my day-to-day. One of the action items that I took last time was to go review issue 108 in the Value GitHub repo.
B
Value means something really different to a company like ours, where we are very focused on using open source as the core part of our product. In this model, one of the metrics that I think is really important to us is: where should we contribute our open source developer resources? This model in general is intended to surface whether something is overall suitable for use as a product.
B
But one of the things that is valuable to us is, as we look at open source components, we know that everything we do is going to be open source, and we're trying to figure out where we should add value. So if we look at a particular project and decide the code is fine but the documentation sucks, our best value is having an engineer focus on helping to improve the community documentation.
C
This sounds to me like it would be a metrics model where you're looking at a few different metrics that come together to compare, say, activity or lines of code or code quality with different other aspects of the project.
B
Yeah, we have open source code that we're using right now where in one project the core code is fine and we're improving documentation; in another the core code is fine but we're helping with their testing infrastructure; and in some, all of that is fine and it's the core project that needs help. So being able to surface that in a way where I can present it to executives, as in "this is where our effort is going," is helpful.
C
So in your mind, you would have a list of the different pieces of the open source puzzle: community, and documentation, and testing, and maybe licensing, or other things. Do we want, as a group, to make a list of those things? We could.
B
We certainly can. Because this is of particular interest to me, I read through all of the referenced documentation in the BRR thread, and one of the things that keeps coming up is that a lot of these metrics are very subjective. Even code quality is subjective; documentation quality is subjective. And I don't know how to make that more measurable. Honestly, this is something I'm just bringing up.
C
So I think, and Shawn, you can speak to this too, but traditionally the philosophy of CHAOSS has been: we're just going to help people figure out how to measure stuff, and we're not going to make any judgments on whether that's good or bad, whether your trends are good or bad or whatever, because each open source project is so very different.
C
That's obviously publicly available, just to have a bigger pool of data from which people can compare, and I think that's kind of what you're saying. Maybe not apples to apples, but something where you could say: oh, this project, compared to the rest of open source, has 50% less documentation in general. So maybe that's something to at least dig deeper on.
B
If all you see in a project is code and no documentation, I think everyone will universally agree that's probably bad. But then the quality of that documentation varies wildly, and I think that, because documentation isn't as attractive a thing to work on, people go, "Oh, if there's some documentation, then it's fine." I don't think a lot of people go and ask: does that documentation actually help people? And there's a lot of different documentation that's needed.
C
Yeah, and these are not easy things to measure. We do have a couple of metrics that look at things like documentation accessibility, discoverability, and inclusivity, and those are in our DEI working group, but they don't say a whole lot about how much is for the end user versus the developer, and things like that. It's more: can people find what they're looking for? Can people access the documents?
C
Are they hard to get if you're not familiar with the project? Do they not accept PRs? Things like that. So that is related, I think, to what you're talking about, but I think we're kind of talking about a couple of levels of metrics models. You would have a model that measures quality of documentation, and it would have all of those different metrics underneath that umbrella, and then the second piece would be code.
B
That's what this OSSpal metric is trying to accomplish, it seems. In this thread, originally there was a model called the BRR, the Business Readiness Rating, and even its original author acknowledged it wasn't really usable in an open source way, and so that's where they came up with this OSSpal metric. And you're absolutely right, Elizabeth: I've actually been trying to brainstorm how we can get data that's harvestable. One of the things I find valuable is when people are talking about a project, and a couple of the communities that I'm involved in, actually not even related to work, do kind of crowd-sourced quality, where people vote. They mostly do it on code, not on documentation, but it's similar to GitHub stars: people are liking it and using it. How can we crowdsource that kind of quality?
B
Are people finding this documentation usable? And the other part that I thought of, from a licensing perspective: SPDX made license discovery programmatic, and there's a little bit of that starting to happen elsewhere. I'm seeing it even in other things, like codes of conduct: there are standardized codes of conduct, so a machine could go look at a repo and find out, do you have a code of conduct, and what does it look like? Could we do that for documentation or other metrics?
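The file-level detection described here is easy to sketch. The following is a minimal illustration, not any official CHAOSS or SPDX tooling, and the filename list is an assumption chosen for the example:

```python
from pathlib import Path

# Common community-health filenames a scanner can look for by name alone.
# This list is illustrative; real projects use many variants.
HEALTH_FILES = {
    "license": ["LICENSE", "LICENSE.md", "COPYING"],
    "code_of_conduct": ["CODE_OF_CONDUCT.md"],
    "contributing": ["CONTRIBUTING.md"],
}

def scan_repo(repo_path):
    """Report which community-health files exist in a local repo checkout."""
    repo = Path(repo_path)
    return {
        key: any((repo / name).is_file() for name in candidates)
        for key, candidates in HEALTH_FILES.items()
    }
```

A harvester could run this across many checkouts to build exactly the kind of machine-collected signal being discussed, before any subjective quality judgment is attempted.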
A
So, for example, in DEI one of the things that we're doing with project badging is creating a dei.md file that will follow a structure that looks at four metrics we've defined in the DEI working group. That sounds great, and that's exactly what I envisioned for how that file would work: a first stage of identifying. So, for example, with the code of conduct, GitHub really just looks to see that the file exists; they don't necessarily examine the sections of it, though they may, I'm not inside GitHub.
A
Similarly, with the dei.md file, we think we can scan for the existence of that file and also pretty easily automate the scanning for structural elements that we define, so we can determine whether someone has just thrown in a dei.md file versus followed the format that we've laid out. And so I think the inclusion of some of these files does enable a certain amount of automation. But to your earlier point about subjectivity, I was writing some things down.
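The structural scan mentioned above could look something like this. The section names are hypothetical placeholders; the actual structure defined by the DEI working group may differ:

```python
import re

# Hypothetical required section headings; the real dei.md format
# defined by the DEI working group may use different names.
REQUIRED_SECTIONS = [
    "Project Access",
    "Communication Transparency",
    "Newcomer Experiences",
    "Inclusive Leadership",
]

def check_dei_md(text):
    """Return which required section headings appear in a dei.md file.

    This distinguishes a file that was merely thrown in from one
    that follows the laid-out format.
    """
    headings = re.findall(r"^#+\s*(.+?)\s*$", text, flags=re.MULTILINE)
    present = {h.strip() for h in headings}
    return {section: section in present for section in REQUIRED_SECTIONS}
```

The same two-stage pattern (existence check, then structure check) applies to any of the .md files discussed later in the call.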
A
I think some of the metrics that we have are looking at objective measurements of things like responsiveness to pull requests or issues, or work that brings in more newcomers. These are things that can be quantified, and though no statistic is entirely objective, I think they sit at a more objective point on the continuum than something like code quality or documentation quality, both of which I think are more subjective.
A
That said, there are things that projects do that don't involve that. For example, when it comes to code quality, there are measurements that are proxies for it. We've talked in other working groups about test coverage as a signal of what the code quality is, and the inference from software engineering process is that if you have more test coverage, then you are less likely to release software that has some kind of defect discovered in the future.
A
Anecdotally, I'll share that when I work with Automotive Linux people, for example, or people who work in safety-critical systems and have visibility into how Linux kernels are developed, one of the things that is, I don't want to say missing, but softly enforced in open source software is software engineering process: this idea of test coverage, or of ensuring that a safety-critical piece of open source software has a repeatable or replicable process. We know from software engineering that these are signals of quality, but there isn't one tight signal. It's not like responsiveness, where I think we can narrow it down to a few discrete metrics that indicate a level of responsiveness you can then contrast across multiple projects to see the relative responsiveness. When it comes to code quality, there are these proxy signals that likely suggest higher code quality, but I think it's a far more subjective indicator.
B
That's issue 108 that we're looking at, because beyond the overall perspective of this thing, as I mentioned, being able to surface places where things need additional help is really helpful for us, because we're a thinly resourced startup. And so when I go talk to engineering leadership and say, "Okay, we're going to work on this project, but you need to focus on documentation; that's the best for the community and that's the best for what we're going to get out of the project," that's helpful. Not just the overall rating.
A
Yeah. And with regard to these, do you think the BRR model is where that's valid, or not?
B
The BRR model... so one of the people, I forget who it was, reached out to the original author of the paper that came up with the concept of a BRR, and even that author said they've abandoned it; it's not something they do now. It's been replaced with this other metrics methodology called OSSpal.
A
I could read through the thread, but what is that?
B
The "pal" is intended to be a reference to one of the author's colleagues who passed away and who was part of this, and OSS is open source software. So it's a metrics collection for open source software specifically. One of the things that's very interesting, even in the BRR model (OSSpal is much further down; it's a really great thread, and I found it really enlightening), is the weightings at the top of that figure. Those are intended by design to be per user, per company, per organization, because some organizations may weigh documentation quality more heavily than they do code quality, and they may weigh test coverage more heavily than they do other things. So, yeah, how to make this more measurable.
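The per-organization weighting described here reduces to a simple weighted average over per-area scores. A minimal sketch, with made-up area names and numbers purely for illustration:

```python
def weighted_readiness(scores, weights):
    """Combine per-area scores (each 0..1) into one readiness number
    using organization-specific weights, normalized to sum to 1."""
    total_weight = sum(weights.values())
    return sum(scores[area] * weights[area] for area in weights) / total_weight

# Example: an organization that weighs documentation twice as heavily
# as code quality or test coverage (values are illustrative).
scores = {"code_quality": 0.9, "documentation": 0.4, "test_coverage": 0.7}
weights = {"code_quality": 1.0, "documentation": 2.0, "test_coverage": 1.0}
```

Two organizations looking at the same project, with the same underlying scores, can arrive at different readiness numbers just by changing `weights`, which is exactly the per-company behavior the BRR figure intends.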
A
Hey, Mako, are you following the discussion that we're having? Yeah, okay, good. Elizabeth, do you know, is the metrics template updated in the spreadsheet? And what about the metrics model template: is that in the spreadsheet, or do we need to put it in the spreadsheet?
A
Okay, well, if nobody disagrees, I think maybe trying to sketch this out might be a good use of our time.
C
Shawna, I dropped a link to the metrics model template.
C
Shoya and Ruth have been moving some things around as part of that community handbook restructuring, so all the templates are in the community repo.
A
Yeah, I looked for them there, and community resources isn't where I would have looked.
A
All right. I'll work with consensus here: if there are folks who'd like to not work on this, just let me know.
A
All right, so, yeah: business readiness. I don't have a better word for it, but somebody has a studio called Working Title Productions, I forget who it is, so we'll just call this the working title.
A
Do we want to take the organization perspective, or... Obviously you mentioned earlier that this readiness is really evaluated at the organizational level in many cases, but I think we're trying to look here for something that is not just confined to one organization's point of view.
C
Do we want to consider the opinions of the project itself? Do we care what the project thinks? In other words, if they think, "Oh, we need help with documentation, so here's a hundred issues labeled help wanted, and it's all related to documentation."
C
That has come up in the past with regard to other things, like sentiment and trust: trying to get the tone of the mailing list. And I think Sean has that capability in Augur, right, Sean?
B
Yeah, and it's also true that with a sufficiently large project, like Linux, it's very federated in how it's managed. You have all the maintainers of different subsystems; the maintainer of one subsystem may think that subsystem is in great shape, and a maintainer of a different one will go, "Ours is crap, we need a lot of help." So being able to poke into different areas of even a single project matters: there's not just one metric that represents the entire project, for sufficiently large projects.
A
Yeah, that's my experience as well. It's not to say there aren't projects that may put some of this stuff in there, but I think CONTRIBUTING is a less dynamic document than I would imagine this one to be. So if a project put forward that they need help in certain areas, I suspect that once they got that help, the help.md might evolve.
A
We're being recorded, but I would say I've taken a deep dive into this with some colleagues. I have six years of work experience before academia developing pacemaker software, so I've functioned in a safety-critical environment, and some of my colleagues have as well. We've developed a course for software engineering and open source, and the consensus we have is that most projects in open source are not following anything that we could identify as a software engineering process.
A
So I like this help.md. Are there other metrics that we don't have that we think we may need to develop to support this metrics model? Because I like the idea of this help.md file. I think it really is a strong signal, and I think it would be a really helpful thing to have a project implement.
C
I'm thinking it would be really helpful even just for CHAOSS. We need a file like that, because we have documentation around how you can get involved, but aside from specific issues, which are rare, we don't really indicate anywhere where we need the help the most.
A
Yeah, so I'm looking at the Evolution spreadsheet here, and we actually did say "code development process quality": not treating code quality as a static thing, but recognizing that code quality tends to be driven as a function of process quality, in the software engineering literature at least, which is not a bad reference point for something like quality.
A
But when it comes to actual code quality, I think there are also measurables, like test coverage, which has been discussed before, and I believe that's in the Risk working group.
B
Kind of out of left field, something else that I thought of with this help.md file: I've spent a lot of time in various open source communities just in my personal life, and it's pretty common that newbies come in and say, "I don't know how to get engaged in this project," and, "What is this?"
A
Which I'll throw in there as a link. And one of the things with test coverage that we've learned by doing it is that this metric is not difficult to define, because the software engineering literature does a pretty good job of defining it.
A
What is difficult about it is that implementation tends to be very language-specific. When you're trying to evaluate test coverage or code quality or anything like that, the tools tend to be very language-specific: there are tools for C, Python, C#, whatever your language is. As far as I've discovered to date, there isn't a generalized tool that will look at test coverage across languages.
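Because coverage tooling is language-specific, a cross-language harvester ends up dispatching per language. A minimal sketch of that dispatch; the tool names are real, commonly used coverage tools, but the mapping itself is an illustrative assumption, not an endorsement:

```python
# Illustrative mapping from language to one commonly used coverage tool.
# There is no single cross-language coverage tool, so a harvester
# has to dispatch based on the detected repository language.
COVERAGE_TOOLS = {
    "Python": "coverage.py",
    "C": "gcov",
    "C++": "gcov",
    "Java": "JaCoCo",
    "Rust": "cargo-tarpaulin",
    "Go": "go test -cover",
}

def pick_coverage_tool(language):
    """Return a coverage tool name for a language, or None if unmapped."""
    return COVERAGE_TOOLS.get(language)
```

The gap the speaker describes shows up in the `None` case: any language outside the mapping has no answer, and even mapped languages report coverage in tool-specific formats that still need normalizing.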
C
In your original thought about this metric, you said that you wanted to know how organizations could add value to open source projects that they care about. Would there ever be a case where it was more organizationally driven?
C
What I mean by that is: we kind of flip-flopped and made it project-driven, so now we're listening to the project to tell us where, as an organization, we should be contributing. But if a project...
C
You know, if your organization cares about risk and dependencies, but the project is a little more laid back in that area, how do we incorporate that desire?
B
I think that comes from the weighting of the metrics. In the OSSpal model, and even in the old BRR model, there are those weightings, and that's definitely an organizational thing. We may care a lot about docs, so the status of docs in a particular project would be elevated in that way.
C
Sean, correct me if I'm wrong, but if I recall, that original Business Readiness model also took a spin of: what if a project's looking for funding, or some kind of monetary donations, or things like that? Is that something we also want to consider? Say an open source project that wants to sell their project, or partner with a company, something like that, where funding is on the line: what would matter to them in that case?
A
I think what you're talking about is this idea that a project can solicit funds. If I'm an OSPO and I value test coverage, like we were talking about, I might look at projects that are very important to me organizationally, meaning I depend on or import these other projects significantly, and I want to contribute something to the upstream that is valuable to me.
B
Yeah, that brings up a different angle on this that I've run into as well. At Rapid Silicon we're an FPGA vendor, and we're dealing with a lot of open source tooling in the EDA world, and there are reasons for that that we don't need to get into right now, but a lot of those projects...
B
They don't take pull requests; they take guidance from the community, and one of the ways they take guidance is for companies to sponsor work in a particular area. So for a synthesis tool, the way they're going to get direction to go work on a particular feature is if a vendor is paying for it and setting their direction that way.
C
C
So
I
don't
know
that
it
could
be
in
this,
but
I
feel
like
there
is
some
kind
of
tie
there,
and
I
also
know
that
there
are
companies
who
are
interested
in
sponsoring
monetarily
open
source
projects,
but
they
want
to
make
sure
they're
putting
their
money
in
good
places,
and
you
know
things
that
are
valuable
to
them
and
things
that
are
valuable
to
the
community
and
that
they're
healthy
projects.
C
So I know there's that piece too, overall project health, because even if it's a project you rely on, maybe it's a super toxic project and you don't want to give money to them, or you'd be giving money in a way they don't want to accept, or something. So I think there is some kind of bridge between money and this model somewhere.
C
And sorry, one more thought about the funding: if I'm a small open source project and I think to myself, "I want to have money coming in from sponsorships, GitHub sponsorships, or donations, whatever," what do I need to do to get there? What information do I need to provide? Is it in the health.md? Is it in my project plan? Is it in some kind of health report that I can post?
A
Downtown, oh my god, wow. I'm sorry, I'm glad somebody's noticing that, because I'm not taking that in at all. All right, good discussion, everyone. I think it's actually kind of cool that we made some progress developing a metrics model here.
B
Yeah, I really appreciate you all picking up the ball and running with it with me here. So in the normal process that you follow, now that we've identified that we might want to have this help.md file, does that become a different, separate sub-working group to identify that, or how does that happen?
A
We tend to make those initiatives. Examples include the metrics model working group itself, and in DEI the badging program, first events, and now the work that we're doing with badging projects, which is actually where the dei.md file comes from. I think the help.md file is sort of a quality-focused pair to that, where we're saying the inclusion of these MD files, and structuring them, is important to CHAOSS. And so one thing I'll do...
C
I was also going to say, I think the DEI working group would be super interested in that. They do a lot of work with onboarding newcomers and making it easier for people to contribute, so they might be super interested in helping flesh out what that would look like and what should be included in that file.
A
I can see mental gymnastics trying to sort out this difference, but I think we know what we agree on. So if we focus on that first, then I think we're going to make more progress more quickly. Okay, I've learned: if I can dodge definitional debates inside CHAOSS, we work faster.