From YouTube: CHAOSS Common Working Group 4/14/22
Description
Links to minutes from this meeting are on https://chaoss.community/participate.
A
We also have a few open issues and PRs. I want to start with the pull request. Does anybody know what this is?
C
No, no, nothing. We shouldn't have a .idea folder, period. No, yeah. And I would say the Outreachy stuff. Okay.
E
Yeah, I thought this was an accidental pull request. When I was looking at it, I don't think they meant to do this.
A
But I mean, there are... there's... sorry.
B
Yeah, I would just say, maybe join the Outreachy channel in Slack, because that's where we have all that stuff.
B
Yep. Well, I think one of the things with Outreachy, and we've been seeing this a few times, is that at the registration site they have to submit, like, an issue that's been accepted, or a PR that's been merged. So I think a lot of the students are trying to get that done, because they have to show work that they've done in the community during this period. That's all.
A
So it stands for objectives and key results. It was something that was invented by one of the founders of Intel, Andy Grove, I think. It's basically just a way of stating your objectives as kind of outcomes, like what you want to achieve, and then having key results that tie into each objective. And then they sort of cascade, so that my objectives and key results are tied back into my manager's, and their manager's, and the manager's manager's, so that things are aligned.
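For readers who want that cascading structure concrete, here is a minimal sketch in Python. Every name and number in it is a hypothetical illustration, not a CHAOSS artifact or a definition from the meeting:

```python
# Minimal sketch of the cascading OKR structure described above.
# All names and numbers are hypothetical illustrations.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class KeyResult:
    description: str      # a measurable result tied to one objective
    target: float         # the value to reach
    current: float = 0.0  # the value observed so far

@dataclass
class Objective:
    outcome: str                          # the outcome to achieve
    key_results: List[KeyResult] = field(default_factory=list)
    parent: Optional["Objective"] = None  # the manager's objective this rolls up into

# My objective ties back into my manager's objective, and so on up the chain.
managers = Objective("Grow a healthy contributor community")
mine = Objective("Improve responsiveness to new contributors", parent=managers)
mine.key_results.append(
    KeyResult("Median days to first PR comment", target=2.0, current=5.0)
)
```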
A
Sure, how the metrics work as OKRs. So even though I know quite a bit about OKRs, I don't understand this issue either. So it's not just you. Okay.
A
Right, yeah, okay. So Matt commented and asked, so we'll see what we get from that. Is there anything we need to do on the release notes and the metrics candidate?
E
So I think the fact that bhack pulled out that this would be interesting to look at is a function of what the metric is.
A
Oh sorry, did we, within the focus area, add the metric in the table and provide a link to the metric and metric question? Did we do that?
A
No worries, I'm just teasing. Do you want to run through the action items from last week? I've got them here on the screen.
G
I think the first action item was done, like, it's ready for release, but since we were in the review period... But for the second, I haven't worked on it yet.
A
Okay, let's get that out of the next meeting section, for the things that we should talk about. Oops, did that twice. And then there was one more, I think.
A
So that one's done, done, done. Cool. Is there anything else, Kevin, that we need to talk about?
E
No, everything was done correctly here, so this is perfect, and the message at the end that it's ready for release is a very good signal for me to put it on the website.
E
Well, we should take a look. I think I may have removed it, but there's a chance I didn't. It's gone. Oh, it's gone, okay. Then I must have removed it when I added the data privacy statement. Oh, I will say there is one thing I've noticed: for some reason, none of these metrics candidate release issues include a link to the actual metric's markdown page.
E
So, okay, it's this one. Does this one? I don't know if it was edited or...
G
Yeah, the reason is because when somebody creates that issue, at that time the website link is not live, so we cannot have that link, exactly.
E
So, regarding the website or the markdown files, we do follow standard naming conventions for that, so you could actually generate these links prior to them existing. You just run the risk of someone commenting that the link is broken prior to it going live.
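Since the markdown files follow a standard naming convention, a release link can in principle be derived from the metric name before the page exists. A rough sketch of that idea follows; the URL pattern shown is an assumption for illustration, not the confirmed CHAOSS website scheme:

```python
import re

# Derive a web link from a metric name using a slug convention.
# The chaoss.community URL pattern below is an assumed example.
def metric_url(metric_name: str) -> str:
    slug = re.sub(r"[^a-z0-9]+", "-", metric_name.lower()).strip("-")
    return f"https://chaoss.community/metric-{slug}/"

print(metric_url("Bot Activity"))
# -> https://chaoss.community/metric-bot-activity/
# As noted above, the link may 404 until the page is actually published.
```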
E
No. However, this whole bit that we're looking at now, the metrics quality checklist, was a template that we created so that we could just copy and paste it in. But I guess the issue is the "this metric can be found here" line: when you delete out the example metric or the example link, you're no longer seeing that it was originally intended to point to the markdown file.
E
And you can jump back and forth between the markdown and the comments being made as well.
A
All right, so I'm going to call that bit done. And Stefano, did you want to talk more about this working group metrics and OKRs issue?
D
But the main issue that we have in some open source projects that are backed mainly by big enterprises is that sometimes, when we collect these metrics, we could try to set up some kind of guidelines related to how to transform these interesting metrics about open source software into targets, because, you know, mainly that's what it takes to improve some of these metrics.
D
If you are not going to transform these into OKRs or something similar, KPI performance metrics, it's hard to change something.
D
That's what we have as an enterprise gatekeeper on the code and in the repository. So for example, if you want to improve the pull request response time, okay, you need some way to transform this from a metric to a target, I mean. And so generally some very large companies in Silicon Valley could be organized around OKRs or other kinds of metrics that are generally reviewed.
D
Medium and large enterprise companies, I mean. So generally we have many people that are employees working directly on the project, and there are external community members, and so on. So yeah, the problem here is also that when we are going to transform metrics into targets, we have some kinds of social and technical problems, like the ones that have been commented on in the next comment, related to Campbell's law and also Goodhart's law. So mainly it was something related to that. I think that we can try to read this more in general.
D
Probably
we
can
talk
quotes
about
this
later,
but
it's
related
to
to
the
point.
That
is
something
like
when
we
are
going
to
transform
metrics
in
targets,
we
we
generally,
we
are
going
to
overfit
the
the
metrics
as
a
target,
and
so
generally
is
something
like
the
metrics
is
not
good
anymore,
because
it's
a
target
that
is,
that
is
going
to
be
overfitted
by
the
organization
or
people
working
on
the
target.
So,
for
example,
if
we
are
going
to
tell
okay,
we
have
the
put
request
response
time
metric
or
the
merge
window.
D
Yeah, it's something like that. And also it's a sort of worry about, when we are doing this kind of model, we need to always handle the more general effect of Goodhart's law or Campbell's law. So we need to really care about how to mitigate this kind of effect, which I think is quite natural when we are going to tell some kinds of companies, yeah, you can transform these metrics into your internal goals. We can have this kind of effect.
B
The way that we're kind of working right now is that in any one of our working groups, like Common or Evolution or Risk, we have a set of metrics, like atomic metrics, you know, time to merge a pull request or time to get a first comment, kind of a very small metric. The metrics models are really serving as ways to bring these specific metrics together. So a metrics model might be responsiveness, for example, and there are a variety of different ways we could look at responsiveness.
B
We could look at response to PRs. We could look at response to issues. We could look at response on a Slack channel. You know, there are a bunch of different ways. So the model would be responsiveness, and then we'd bring together half a dozen metrics that help us comprise what that model might look like.
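A minimal sketch of what B describes, treating a metrics model as a named bundle of atomic metrics. The metric keys and the reporting step are illustrative assumptions, not a CHAOSS definition:

```python
# A metrics model as a named bundle of atomic metrics (illustrative).
responsiveness = {
    "name": "Responsiveness",
    "metrics": [
        "time_to_merge_pr_days",
        "time_to_first_pr_comment_days",
        "time_to_first_issue_response_days",
        "time_to_first_chat_response_hours",
    ],
}

def model_report(model: dict, observations: dict) -> dict:
    # Pull the observed value for each atomic metric in the model;
    # interpretation stays with humans rather than one blended score.
    return {m: observations.get(m) for m in model["metrics"]}

print(model_report(responsiveness, {"time_to_merge_pr_days": 3.5}))
```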
D
What are the risks, mainly, of overfitting responsiveness? I mean, for example, just to make an example, it could be that you are just going to push the triagers to quickly close all the bugs and forward them to Stack Overflow or some other kind of support too quickly, I mean. So we are going to lower the quality of the triage of the tickets if we are going to transform responsiveness into a target.
D
Sometimes, when you have some quantitative metrics, generally it's hard to balance that kind of bias with qualitative methods. Because generally you have these kinds of metrics that can be collected automatically, like, for example, the response time on a ticket. But generally, if you are going to fit that specific metric, it's hard to balance it with other kinds of automatically collected metrics, and sometimes this really requires a qualitative metric, for example, just feedback from the issue author about how satisfied they are with the ticket closure.
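One way to read D's point in code: pair the automatically collected number (response time) with a qualitative signal (reporter satisfaction), so that gaming one shows up in the other. Field names and thresholds below are hypothetical:

```python
# Flag tickets that were closed very fast but left the reporter
# unsatisfied: speed without quality, the gaming pattern described.
# Thresholds and field names are hypothetical.
def flag_possible_gaming(tickets):
    return [
        t for t in tickets
        if t["hours_to_close"] < 1.0 and t["reporter_satisfaction"] <= 2
    ]

tickets = [
    {"id": 1, "hours_to_close": 0.4, "reporter_satisfaction": 1},   # fast, unhappy
    {"id": 2, "hours_to_close": 18.0, "reporter_satisfaction": 5},  # slower, happy
]
print(flag_possible_gaming(tickets))  # -> the first ticket only
```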
D
I
mean
so
yeah.
I
think
that
there
is
always
an
implicit
risk
also
if
we
are
talking
about
responsiveness
in
the
model
that
we
are
going
to
create
a
bias
when
we
transform
this
to
the
to
a
target
for
the
teams.
B
And let the world figure out what that might be. So I think this is interesting to think about. We could, I'm not saying do it, but, like, think about a way to express the bias that could exist with a particular metric, or how a metric could be gamed.
D
It
started
to
be
a
problem
when
some
organization
is
just
trying
to
take
your
metrics
and
push
that
as
a
target.
So
so
merely
some
kind
of
employee
could
try
to
optimize
that
metrics,
mainly
because
generally
in
large
organization,
if
you
are
not
going
to
impact
these
metrics
inside
some
kind
of
kpa
or
okay
s,
so
okay,
rs
and
kva
is
something
that
need
to
be
miserable.
D
So it's something related to a method: you cannot set an OKR or KPI that cannot be expressed as a measurable index. And so I think that all of these are candidates for open source performance, I mean. But the problem is that, yeah, you need to always understand, you need to transform them into targets for companies to have a real effect on the quality of the repository, or to improve some community index.
D
But you need to always be aware of what starts to happen when you transform a metric into a target, and what kind of bias you could create with these metrics. That is something that honestly happens with OKRs and KPIs in general. It's not unrelated to CHAOSS metrics, but it is a more general topic. But I think that if we want to have an impact with our metrics, we need to find a way to tell the companies how to transform these metrics into targets.
D
I think that this is less relevant, mainly, for more informal groups related to an open source project or repository. But generally, I think that it's quite important when a large project is backed by a big enterprise.
D
I mean, if not, it's really hard to have an impact on that, especially because, generally, you can have an open source team that is not strictly related to the engineering team. So you generally need to set this kind of metric as a goal for the engineering team that is working every day with the repository.
A
So I'm going to suggest that we follow up on these discussions in maybe one of the metrics models channels or working groups. Stefano, are you based in Europe?
D
That's good. Are you, are you based in Europe? Yeah. Okay.
D
But currently I work mainly as a contractor for Silicon Valley companies, and so I work remotely for them. And so I'm involved in many open source projects, like TensorFlow, and other kinds of projects like OpenCV and so on, mainly in the field of artificial intelligence and computer vision.
A
Cool. The reason I asked about your location is because I'm also in Europe, and the metrics models meetings are at a terrible time for us. That's the one CHAOSS...
A
Maybe kind of pop into the Slack channel for the metrics models and have some of that discussion there. But that's what I'm going to suggest.
A
Okay, cool. The next thing we have on the agenda is reviewing old metrics.
B
That's me. So, as we have our 70 or so metrics that are released right now, and this is from that community conversation during the community call, we're taking a look at reviewing older metrics, simply because a lot of them need updating. The language needs to be updated. They were released years ago, and some may not be good anymore, some may be completely good. So it's anything that's green on this list, under "released". And this doesn't preclude any metric.
B
We agree that this doesn't preclude any metric that might be close to a release; it's still cool to move those forward. This is just about reviewing the older ones. And so the proposed process right now, and I'm looking for feedback from anyone, is that if there's a metric that's going to be reviewed, or a metric that's under review, we open a new issue for that metric. So, you know, in the case of Common that could be, like, Organizational Diversity or Bot Activity.
B
So this isn't about reopening Bot Activity, but just posting an issue and posting a link to the original one, and then listing specific ways that the metric could be improved. And so some of us right now are going through different working groups to kind of make suggestions, and, like, I have a list of DEI things. So sometimes it's formatting issues, sometimes it's grammar, sometimes it's not following the proper template, those kinds of things. So we could make a comment on all that, and then, Don...
B
It's like we've added the label that I just mentioned earlier. We make sure that we put this in the translations repo, you know what I mean, if there are changes to it. And then, if you can kind of scroll down just a little bit, on the content quality, it would kind of be to highlight at a high level what the proposed changes are.
B
So if it's just formatting, make sure that you add the date of last review, major, minor, or editorial changes, right, and that it follows the most updated metrics template. So it's kind of the same thing, it's just a little bit more targeted to a review of a particular metric. And that's it. So then it would be part of the release cycle, you know what I mean, like, this would essentially be like a new metric, but it would just kind of be labeled differently. So I'm open to thoughts or feedback on this process.
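A rough sketch of that proposed review flow, using GitHub's standard create-issue REST endpoint: open a fresh issue for a released metric, link back to the original discussion, and list the proposed changes. The repo name, token handling, and label text are placeholders assumed for illustration:

```python
import requests

def open_review_issue(repo, token, metric, original_issue_url, changes):
    """Open a fresh review issue for a released metric, linking back
    to the original discussion and listing the proposed changes."""
    body = (
        f"Review of released metric: {metric}\n\n"
        f"Original discussion: {original_issue_url}\n\n"
        "Proposed changes:\n" + "\n".join(f"- {c}" for c in changes)
    )
    resp = requests.post(
        f"https://api.github.com/repos/{repo}/issues",
        headers={"Authorization": f"token {token}"},
        json={"title": f"Metric review: {metric}",
              "body": body,
              "labels": ["metric-review"]},  # assumed label name
    )
    resp.raise_for_status()
    return resp.json()["html_url"]
```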
B
All of the conversation about edits to the metric will also be at the bottom of a long list of conversations, so we're just trying to raise all of those things to the top, and that's also why we suggested making a link to that old issue. So they're still connected, but this approach just kind of moves things up a little bit.
A
So, follow-up question: how do we want to organize ourselves to do all of this? Because we've got a whole bunch of metrics to go through.
B
So right now, between Sean, Elizabeth, myself, Kevin, and Venia, we've all kind of agreed to take a look at the different working groups. I believe I'm assigned to Common. So Sean would actually start going through that list of the green rows in Common. It wouldn't be, it doesn't have to be, you know, like, a perfect list from Sean, but it's at least a starting point to kind of help orient folks. And it's okay too, for any of us, Kevin, Sean, myself, Venia, and Elizabeth, that if a metric looks good, that's it. You can just kind of mark it in the spreadsheet, and, you know, like, if it was just released, then it should be okay.
G
So is there an indicator on the spreadsheet for whether the metric is under review? Like, is it green until we ever release, yellow under community review, any labeling like that?
B
So you can see in the remarks, see row 15, Code of Conduct at Event, like, that's, this is like a rework. So basically this one's taking a lot of work, so I essentially just recreated a new Google Doc for it. So we kind of go back to the beginning, we go all the way back, create a Google Doc, we work in the Google Doc. It's the same process that we did for a new metric; it's just that this process starts with a bunch of data in it already.