From YouTube: CHAOSS Common Working Group Meeting August 17, 2023
Description
Meeting minutes are here: https://docs.google.com/document/d/1xsii5tfmmDwWpuhrFcBJMeYeT3RipJdiCdHrbi0NalQ/edit#heading=h.6q1kh4eoj9gd
A
Hey everybody, it's Thursday, August 17th, and this is the Common working group for CHAOSS. I hope everyone's doing good. Quick reminder: this is under the CHAOSS code of conduct, as all of our meetings are, so just keep that in mind. And of course we don't care if you have your cameras on or off. In this meeting we talk about things that are common across CHAOSS, mostly around operations in CHAOSS.
A
We also do some metric development here, as well as other topics. So I'm going to share my screen and we will get right to it. If you have not answered the question: whoever put that in here, that was a really, really good question. It is pretty hard, isn't it, Jen? It's too hard, I don't know. We'll just go through these items, and if anybody has anything they want to add, feel free to do so. Otherwise, we'll just knock them out one at a time.
B
Yeah, just a note. Oh, I'm sorry, was someone else trying to talk? Nope, okay. Just a note for the Common working group meeting: I'm trying to move the agenda so that it includes the updates from the liaisons, but I still want to have the ability to add open agenda items. That's why I've added them towards the top. I'm thinking about this as a template for our meetings moving forward.
A
C
I put this in all of the meetings; it's just a reminder to complete the survey. It is super short, it's not hard, but we really want as many people as possible who have used CHAOSS tools, even in the past, to fill it out and give us your feedback. Let us know what challenges in particular you've had with using the tools and metrics, because that would be super helpful for us moving forward.
C
Yes, thank you, that's a good point. My plan is that when I get back from vacation on September 12th, I will look at the results and see if we have enough to analyze. If we still need some more responses, then I'll leave it open another week or two. This is hard timing, right? Everybody's on holiday, people are busy getting their kids back to school, and there's just a lot of stuff going on for people.
A
I was wondering, too, if this is something you would want to just leave open in general as a feedback mechanism. We could even link it on the website somewhere.
C
I don't know; I haven't really thought about that. I think it would be really good to collect some ongoing feedback from people, but we'd probably do it as something separate. It might have some of the same questions and a similar feel. That's a really good idea; let me think more about that.
A
Okay, so we will hop past these open ones, but if people do have things they want to add, knock yourself out; that's why we're here. We'll jump down to the context group liaisons and see if there are any updates, starting with scientific. Well, first, let me just stop here at OSPO. I'm going to need a reminder of who these liaisons are, because I don't remember.
E
D
A
Okay, fair enough. Do you want to talk about this contributor experience one, Sean?
D
Yeah. So I think this is, of course, not unimportant to much of the CHAOSS community.
D
The particular way it's constructed in the scientific open source universe is that there are a couple of characteristics of that world that are different. One is that there's a general reluctance among scientists who build small-scale open source software to join together under some kind of foundation; obviously there are things like NumPy and the like under the NumFOCUS organization. So there's kind of a split identity. But unlike the corporatized open source space, I would say, in scientific open source...
D
...there's a real understanding of a need to create a positive contributor experience in order to retain the small numbers of contributors who are in most cases not paid, or are on what's called soft money, so they don't have any guarantee of continuing to be paid for the work. And so this contributor experience metric model, which candidly seems similar to some of the other metric models that we've developed...
D
...this is kind of how the stories around it have started to develop inside the open source scientific software community. I don't know, in the context of Common as it's currently constructed, how much detail we want to go into. My understanding is that one of the things we're going to do in these groups is flesh out metrics models that are conceived by the working groups that we are liaising with. Am I remembering that role correctly?
B
One and two. I think there's kind of a second step where we ourselves liaise with the metrics models working group to help this model get...
B
Yes, and in that sense I think we could edit this and add our input to it. I think we also want to be almost a reviewing agency, to help them get it released, right, to provide feedback on the release. Yeah.
D
So would this go... does this belong in Common, because it's a metrics model? Or maybe it's useful to talk about what else the scientific open source working group is focused on right now. That's why I brought it in and shared it. If you want to take a few minutes to review it, I say let's do that and give feedback; I think that would be helpful. If it should be pivoted to the metrics models working group, I'm cool with that as well.
E
A
E
D
That's just my... so we need to... okay. With that in mind, I think maybe the update is that there needs to be an action item to develop a second contributions metric, and I'm happy to take that action item, because Augur already produces that metric, so it should be pretty easy to describe it.
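For reference, a second contributions metric of the kind described here can be sketched from a simple contribution log. This is only an illustrative sketch, not Augur's implementation; the event format and function name are hypothetical.

```python
from collections import defaultdict

def second_contribution_rate(events):
    """Fraction of contributors who came back for at least a second
    contribution. `events` is an iterable of (contributor, date) pairs;
    the date values only need to be comparable.
    """
    dates_by_contributor = defaultdict(list)
    for contributor, date in events:
        dates_by_contributor[contributor].append(date)
    total = len(dates_by_contributor)
    # A contributor "returned" if they have two or more recorded events.
    returned = sum(1 for d in dates_by_contributor.values() if len(d) >= 2)
    return returned / total if total else 0.0
```

For example, with three contributors of whom two came back, the rate would be 2/3.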
B
I do think it's appropriate to look at the model itself as well. So the next step, when we're done with that, I think that's when we move it to the metrics models working group, right? So maybe we have someone from this group present it to the metrics models working group after we've had a pass at it.
D
Yeah, before the next meeting I'll just work to develop the second contributions metric in the next week.
B
Fair enough. I believe second contributions is on the agenda for this meeting, yeah.
D
Right, well then, are we done with the scientific open source update?
D
If we pass through it, sure, any comments are helpful, especially with regard to questions of coherence and clarity. Any kind of feedback that would help the model itself become more easily consumed or used would be helpful, I think.
A
For folks who are new to this meeting, or to any of our working group meetings: sometimes we'll leave the recording going, but it'll just be all of us typing in this doc here together, collaborating on this stuff. So if you want to add changes, we usually use suggesting mode, and then we go through all of the suggested changes at the end and see which ones we want to keep. That's just so anybody who's new, who hasn't gone through this exercise before, knows what we do.
E
A
D
B
Yeah, my comment was similar to that: there are a lot of metrics in here. So maybe it is a collection of other models that we're looking at.
E
Model squared. I want to note, and I noticed this as I'm writing a blog post to break down the supermodel, that some of these metrics are likely already in buckets and categories. So it's worth considering whether or not it makes sense to bucket these in the CHAOSS categories that they come in. I'll put this in a comment, but I'll also say it: you could just take the CHAOSS categories to break it into sub-models, or you could categorize it based on what you think is appropriate.
A
D
...a question a model would need to answer, and I just made a comment. In fact, "can the new contributor function with the documentation?" is the curious point about these buckets. I don't know if that collection of metrics could be presented as a single bucket, because it really is just about: I'm new; is there enough documentation that I can figure things out and get started?
D
It emerged in the discussion. Honestly, I know I did not start it; I've been in many discussions about it, so I...
G
D
...in the course of the science community meeting.
C
So here's why I asked. The reason I ask is because Kubernetes has had a contributor experience special interest group for a really, really long time, and...
D
I can't say for certain, but I would say I am almost certain no, because these people are coming from an entirely different world than the Kubernetes maintainers and contributor community. So I doubt that they have that experience. And if you'd love to know more, I assume somewhere in there there's a link to the meeting materials.
B
So for most of these metrics that are in here, we're making the assumption that, for example, if documentation is good or bad, that informs contributor experience. Is that correct?
D
Yeah, I mean, my own experience in open source is that if I can get a project running, the documentation is sufficient for me to make a contribution. If the documentation isn't enough to get it up and running, then you're not going to get new contributors, and perhaps you're not going to get people to use it either.
B
D
Yeah. Remind me: in the last discussion of this, I suggested that some of these metrics come before the retention, or the good contributor experience, and others are indicators of it. So antecedents, that's the word I was looking for. Things like good mentorship and good label inclusivity are antecedents, and I would say pull request and issue responsiveness are really critical, actually. But second contributions, that really is an outcome.
B
Is it possible that this metric could look at the relationship between those outcomes and documentation, for example? Would it be the relationship, or are all of these metrics just being examined independently?
D
Maybe. You know, that inspires a thought. Given that there are so many metrics here, perhaps there's a metric model that talks about creating a positive new contributor experience, the onboarding kinds of things, and we may have one already; I'd have to go look. And then measuring it would perhaps be a separate metric model, because there are outcome measures I can think of other than second contributions.
B
D
Yeah, just a brief update: in the academic or university open source world, there are a lot of different conceptualizations of what an open source program office at a university even is and what it does. I would say it breaks down along maybe two main lines.
D
One is that the open source program office is viewed as a way of corralling the intellectual property and assets that are developed by the university and promoting them. The second is thinking more about how open source software is consumed in the course of the conduct of science, leading to funding of other projects. And there's always a question at the university, for whatever investment is or is not being made...
D
...is there a return on it? So I think there's a need in this community, and on this working group, as I would characterize it, to help tell the story of the value of open source in science. The scientists themselves know it; it's part of their day-to-day work and existence, and they can't live without it, and they sometimes need to create it. But they have limited knowledge about how to maintain it as a sustained asset or build community around open source. And so there's...
D
G
I don't really have comments, but I did just want to say that Eager, I hope I'm saying that right, and I had volunteered as the university liaisons.
C
D
G
It did not even occur to me to come in here and do anything, and it also felt like that group was still kind of feeling its way along, so I wasn't sure where we were with that. I'm only saying that just so you know. I'm still new to the project, so I'm a bit on the timid side; I don't know how long he's been participating. But hopefully we'll get ourselves into the...
D
...world. And thanks for noting that, Jen, because I just happen to be very familiar with the community, and I couldn't remember what I'd agreed to liaise with. I was pretty sure about science; I couldn't remember about university. So I filled in some items on the agenda where I had been part of those conversations. So thank you. Well, I didn't know before this meeting that this was the thing to do.
C
I do think, maybe, though, we need to figure out how to make the liaison updates a little bit shorter, because I feel like we've spent a lot of time just talking about what's happened in those groups, as opposed to focusing on how the Common group can help those other groups. A lot of us attend both of those meetings, or we can read the summaries, so I think we need to think about ways to do that.
E
C
A
B
Do you want to do maybe a five-minute limit?
A
Okay, so thank you, Sean, for that quick update, and we will go ahead and move on. I don't know if we have anybody here who attends those meetings.
A
Actually, they are off for the summer now that I think about it; they'll come back in September. Can we do the restructuring update?
B
Do you want to... they're not here, so I would say no on that one.
A
Oh, there we go, okay: self-merge rate. I think Matt was going to clean it up and bring it back to the group, but we have Ray here, who is the one who brought this up. So do we want to look at this?
F
Yeah. Sorry, I haven't been in this meeting for a few months, but I've been following the progress on the Google Doc. I think one remaining question is: should we have two separate metrics, one for the PRs that are merged by the original contributor, and one for the PRs that are merged by the original author without reviews? I'm more interested in the PRs that are self-merged without reviews. I mean, for projects that are small...
F
It's
understandable,
like
a
lot
of
pis,
could
be
merged
by
the
author,
but
as
long
as
there
are
reviews
that
are
done
by
other
people
that
I'm
I'm,
mostly
okay
with
that
but
I'm
okay,
either
way
just
having
one
metrics
just
focusing
on
ones
without
reviews
or
just
having
I
think
somebody
made
a
comment
about
those
are
distinct
and
interesting
enough,
so
that
I
mean
it
says.
Anonymous
I,
don't
know
who
let
that
comment,
but
I'm
okay
either
way.
F
Well, so, the PRs that are merged by the original author, with reviews or without any reviews: the comment I was making was that even if it's merged by the same author, as long as there is a decent amount of review done by other people, I think I'm okay with that, especially for smaller projects and smaller companies. But if it's a self-merge, you know, five minutes after submitting the PR, that's a little bit more problematic.
C
F
D
A
C
Yeah, I think these are two slightly different things. I think we should focus on whichever one you originally intended, Ray, the one that you find most interesting, and then put the other one as a filter on that one, because I think there...
E
C
We could go either way, and I think one can be a filter on the other, depending on what's most interesting. And I do agree, Ray: I think the case where they merge it without anyone else looking at it is the most problematic one. But you could also have a filter that shows the other case, where they self-merged it but there were reviews.
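The two variants discussed here, self-merged PRs overall versus self-merges with no reviews at all, can be computed as one count plus a filter. A minimal sketch, assuming each PR is a dict with hypothetical author, merged_by, and review_count fields; this is not a CHAOSS reference implementation.

```python
def self_merge_counts(prs):
    """Count merged PRs that were merged by their own author, and the
    subset of those that received no reviews at all.

    Each PR is a dict with 'author', 'merged_by' (None if unmerged),
    and 'review_count' keys.
    """
    merged = [pr for pr in prs if pr["merged_by"] is not None]
    # Self-merged: the merger is the same person as the author.
    self_merged = [pr for pr in merged if pr["merged_by"] == pr["author"]]
    # The stricter filter: self-merged with zero reviews from anyone.
    unreviewed = [pr for pr in self_merged if pr["review_count"] == 0]
    return len(self_merged), len(unreviewed)
```

A self-merge rate could then divide either count by the number of merged PRs, with the unreviewed subset acting as the stricter filter discussed above.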
F
Cool, that certainly works for me. So I guess the answer to your question, Elizabeth: it would be an "and", but we can add "or" as a sort of filter and option.
F
A
So this one looks really, really good now. I think Matt was still going to go through and accept all the changes, and we'll do maybe one final look next week, or I don't even know if we need that.
D
C
And I think Matt took the action item to clean it up only because Ray wasn't available. I think at this point Ray should take the action item to clean it up, since it's his metric, and he should get credit for it by doing the PR. I sort of feel like we've talked this one to death; I think we should PR it, and if there are any more questions, we can think about it then.
F
E
B
E
B
For the people on this call: make sure you add your name to the known contributors at the bottom of this document, if you feel like you have contributed to the document in any way. Thank you.
B
C
Sorry, is this a pull request?
C
Yeah, I think I fundamentally disagreed with this pull request. I guess maybe I should take another look at it, but I do think that it should be closed without merging, because basically it was a completely different take on the metric that wasn't actually how we'd implemented it, and I don't think he ever added more information to it.
B
It was almost a different metric, right? Yeah.
C
Yeah, yeah. He was talking about something completely different from what we had proposed and what we had implemented in Compass, which is fine if he wants to submit it as a separate metric. But I don't think it works as a pull request on this one, because it fundamentally redefined the metric. Okay.
D
Okay. I think we close it with a comment that this would redefine a metric that is being used and that people find value in, and that if you want to suggest this work as a separate new metric, then we would recommend you do that, or some way to keep it positive, if we see value in the proposed new metric. Certainly, just reading through it now, it doesn't look like what that metric is anyway. So it shouldn't be merged; it should be closed without merging. I agree with that.
B
Do we feel that maybe there was some confusion in the existing metric that led this individual to propose this? Do we need to edit for clarity?
C
Yeah, I don't know. A lot of people have been confused by this metric, I'll be honest. I've already tried to edit it so that it was more clear to people, but I think it's a metric that's a little bit difficult for people to understand, and I'm not sure that we can fix that in the description. That said, if anybody has ideas for how we can fix it in the description, I'm all for that.
E
D
Yeah, put simply, it's: is your backlog growing? I don't have any trouble understanding the first paragraph of this metric. Yeah.
C
A lot of people like to look at change requests and just look at the new stuff, the new stuff that came in over the past month, and how many of those they have closed, which does not address the backlog. It's kind of a "how are we keeping up with the stuff coming in right this minute", and I fundamentally see that as something completely different. Yeah.
C
And mine, the change request closure ratio, is all about the backlog.
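The distinction drawn here can be made concrete: the new-arrivals view divides closures by what was opened this period, while a backlog-aware view divides by everything that was open, prior backlog included. A hedged sketch with made-up counts; neither formula is quoted from an official metric definition.

```python
def closure_views(backlog_open, opened_in_period, closed_in_period):
    """Contrast two ways of reading change-request closure for a period.

    new_only: closures relative to what arrived this period
    ("are we keeping up with the stuff coming in right this minute").
    backlog_aware: closures relative to everything that was open,
    prior backlog plus new arrivals.
    """
    new_only = closed_in_period / opened_in_period if opened_in_period else 0.0
    total_open = backlog_open + opened_in_period
    backlog_aware = closed_in_period / total_open if total_open else 0.0
    return new_only, backlog_aware
```

With a backlog of 80 open requests, 20 new arrivals, and 10 closures, the new-only view reads 0.5 while the backlog-aware view reads 0.1, which is exactly why the two feel like different metrics.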
D
B
Now I'm wondering where that issue is located. Was it located... oh.
C
No, this is actually something different. I think this came out of the Bitergia folks doing some blog posts around some of their metrics. For some of their metrics they've put a whole bunch of detail into a blog post, in particular this one, the backlog management index, but it's not actually a defined CHAOSS metric. So I think we asked them if they could come back and propose it as a metric.
C
D
B
Is that fair, or shall we... sure, I'll move on.