
From YouTube: CHAOSS.Evolution.June.3.2020
A: It looks to me like code review efficiency is partially developed, so I'm going to mark that in progress. Maintainer response to merge request duration: we don't have a document for that. Issue contributors as a filter, common metric: I don't... it's certainly in progress, I'll put that. And then...
A: Forks is being... forks has been discussed as a metric in another working group that I'm in, I think Risk, so maybe I'm going to say triangulate with Risk. In fact, I have the tab right here. Yes, Risk is also working on it for release, so I'll call it in progress, triangulate with Risk. And then review comment diversity; let's see if that's at all developed.
A: Possibly making these sub-filters, but the people who were in that discussion are not on this call, so I think we cannot proceed with much more of that discussion. I think we can talk about these metrics. These are the five metrics that we're going to work on for release, and I'll ask if anyone wants to volunteer to take a first crack at drafting them; we will have one more meeting before the review period opens. Does anyone want to take a shot at developing any of these five metrics more?
A: Why don't you work on the straight-up metric here, and with the Risk group work on translating it into a trending kind of metric, which I think is the point? The main concern for the Risk working group is that the number of forks is an indicator of risk, which is... it's somewhat the same, but I think the trajectory of it is more important for Risk.
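As a rough illustration of the distinction being drawn here, a straight-up count versus the trending view the Risk group cares about, here is a minimal sketch; the fork-event timestamps and data shape are hypothetical, and only the count-versus-trajectory contrast comes from the discussion.

```python
from datetime import datetime
from collections import Counter

# Hypothetical input: one timestamp per fork event, e.g. from a forge API.
fork_dates = [
    datetime(2020, 1, 15), datetime(2020, 2, 3),
    datetime(2020, 2, 20), datetime(2020, 3, 8),
]

# Straight-up metric: the total number of forks.
total_forks = len(fork_dates)

# Trending metric: forks bucketed per month, so the trajectory is visible.
forks_per_month = Counter(d.strftime("%Y-%m") for d in fork_dates)

print(total_forks)                      # 4
print(sorted(forks_per_month.items())) # [('2020-01', 1), ('2020-02', 2), ('2020-03', 1)]
```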
A: All right, I assigned it to him, and I think that is kind of where we're at. I don't know if we want to... I guess we have two choices at this juncture. We can... and because a lot of the people on the call are... I mean, I guess, but I don't know. Do people want to? Sometimes on these calls we actually develop one of the metrics, like... should we?
A: Um, basically, the pull requests are an indication of growing contributor rates, so the number of pull requests merged and closed is an indicator of growth, and the proportion of pull requests merged versus closed is also, I think, well documented at this point as having a strong relationship to the growth of the contributor community. So the higher the percentage of pull requests from outside your core group that are accepted, the greater the growth.
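As a rough sketch of that last quantity, the share of accepted pull requests coming from outside the core group: the core-contributor set, author field, and data shape below are hypothetical.

```python
# Hypothetical sketch: share of merged pull requests whose authors are
# outside the core contributor group; a higher share suggests growth.
core_contributors = {"alice", "bob"}

merged_prs = [
    {"author": "alice"},
    {"author": "carol"},  # outside the core group
    {"author": "dave"},   # outside the core group
]

outside = sum(1 for pr in merged_prs if pr["author"] not in core_contributors)
print(outside / len(merged_prs))  # 0.666...
```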
A: So yeah, for those of you who are new, essentially what we do is we work on the document; kind of edit wherever you want to add things. It's not a centralized process. Usually, perhaps, we'll spend about 12 minutes on this and then kind of talk about it before we get to the end of the meeting. Mm-hmm. Is that acceptable to all?
G: Under implementation... aggregators and parameters under our implementation, yeah. So aggregators and parameters go right underneath the implementation heading; those are both just bolded as subheadings. Filters is its own, its own bullet in the metric, yeah. And so in its parameters... and what were the parameters, and what aggregators? Yeah, so actually I kind of want to revisit the question that Kevin had about where it belongs, because in the spreadsheet it says reviews merged / closed, contribution acceptance, which... I don't know if we have contribution acceptance elsewhere by a different name. So.
G: You should change the question, because I pulled it out of... okay. Because I think contribution acceptance, to me, is how many pull requests were merged versus how many pull requests were not merged, i.e., closed or still open, right? So, instead of just being closed. And I think that's how I would define contribution acceptance, like a contribution acceptance rate.
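A minimal sketch of the definition just given, merged pull requests versus those not merged (closed without merging, or still open); the data shape is hypothetical.

```python
# Contribution acceptance rate as defined above: merged pull requests
# versus pull requests not merged (closed without merging, or still open).
pull_requests = [
    {"id": 1, "state": "merged"},
    {"id": 2, "state": "closed"},  # closed without merging
    {"id": 3, "state": "open"},    # still open
    {"id": 4, "state": "merged"},
]

merged = sum(1 for pr in pull_requests if pr["state"] == "merged")
acceptance_rate = merged / len(pull_requests)
print(acceptance_rate)  # 0.5
```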
G: In that case I think it would belong here. But I will say, if we do want to keep this question, like, to answer this question in another metric: the question that is currently here would better fit into development activity, because it is just measuring the how many, not the ratio. My real question is: are we trying to measure contribution acceptance here, or reviews merged/closed, just that number?
E: I'm still failing to see the... I know there's kind of an implicit connection to growth here, but it's really not that strong. For this one, I wouldn't... I don't know that we're... it's fine living in community growth, but I wouldn't mention community growth in the metric itself. Let's see, I think this is really... this is really about... So.
A: The ratio, ratio, right. So the way that it's written in the very short description in the spreadsheet, the ratio of reviews accepted/declined has been used primarily by people wanting to understand community growth. Augur has done a number of custom reports for communities, and when this question is asked, it is directly related to community growth. That's not the question that I put in the document draft, but it's the question from the spreadsheet. So yeah, I understand what you're saying, Kevin; my only experience...
G: ...acceptance, and focus explicitly, in the body of the... like, in the actual description of the metric, on the ratio of merged commits... merged pull requests... merged reviews, sorry, to all non-merged reviews. If we frame it in terms of contribution acceptance, I think it belongs in community growth, because people will associate contribution acceptance, I think, more with community growth. But if...
A: The way that it's been operationalized, the three times people have asked for these in custom reports, has been the change in that ratio over time. Okay. And usually it's in combination with the durations, so the time to first response and the time to merge/close are usually presented alongside, but that's a separate matter. And in the three cases we've done it, it is this metric: contribution acceptance rate, the ratio of reviews merged/closed.
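A rough sketch of that operationalization, the change in the merged/closed ratio over time; the review records and month buckets below are hypothetical, and only the per-period ratio idea comes from the discussion.

```python
from collections import defaultdict

# Hypothetical reviews (pull requests), each with the month it was
# resolved and whether it was merged. The operationalization above is
# the change in the merged-to-closed ratio over time.
reviews = [
    {"month": "2020-01", "merged": True},
    {"month": "2020-01", "merged": False},
    {"month": "2020-02", "merged": True},
    {"month": "2020-02", "merged": True},
    {"month": "2020-02", "merged": False},
]

counts = defaultdict(lambda: [0, 0])  # month -> [merged, not merged]
for r in reviews:
    counts[r["month"]][0 if r["merged"] else 1] += 1

for month in sorted(counts):
    merged, unmerged = counts[month]
    print(month, merged / (merged + unmerged))
# 2020-01 0.5
# 2020-02 0.6666666666666666
```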
E: I don't care either; I was just referring to growth in the metric itself. I'd be very careful, because I think this is a metric that can tell you a few things, and it's not necessarily just about... there's a specific case, or a way of looking at it, where you can see where it connects to community growth explicitly, but otherwise...
A: Like, I think there's one case where there's a community culture where everything just gets a fast response. I think we can agree review acceptance ratio is the name. We could work on the metric, and then at the next meeting, which will be the last meeting before the release period, we can make a final decision on whether or not review acceptance ratio belongs.
E: Yes, I think you're thinking of different things here. You're thinking of a review where someone goes and reviews a pull request. However, in the language of Evolution, a review is synonymous with a pull request. So what we actually are talking about right now is pull requests, okay, and not the act of reviewing a pull request. Okay, good.
A: I mean, I guess I'm thinking concretely, functionally. In Augur, a time period is also a parameter that gets passed, and it's your expression of what aggregator you want to use. And so I guess in Augur, I know it can be both, and I don't know if we want to define... I guess I am questioning it because I am grounded in one concrete implementation of our metrics.
G: Well, in this metric it's a parameter, but what I think would make the most sense to me is to have one parameter be the period of time, which is the start... but actually, I don't know if period is the right one. But a start and finish date, those are parameters, but also a granularity, like weekly, monthly, annually. You can combine those as parameters, and you have a count aggregator to say, between these two dates.
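A minimal sketch of that shape, start and finish dates plus a granularity as parameters, with a count as the aggregator; the function name and data layout are hypothetical, not an actual Augur endpoint.

```python
from datetime import date

# Hypothetical sketch: start/finish dates and a granularity are the
# parameters; the aggregator is a count applied within each time bucket.
def review_count(events, start, finish, granularity="monthly"):
    """Count events between start and finish, bucketed by granularity."""
    buckets = {}
    for d in events:
        if not (start <= d <= finish):
            continue
        if granularity == "weekly":
            key = tuple(d.isocalendar()[:2])  # (ISO year, ISO week)
        elif granularity == "annually":
            key = (d.year,)
        else:  # monthly
            key = (d.year, d.month)
        buckets[key] = buckets.get(key, 0) + 1
    return dict(sorted(buckets.items()))

events = [date(2020, 1, 5), date(2020, 1, 20), date(2020, 2, 2)]
print(review_count(events, date(2020, 1, 1), date(2020, 2, 28)))
# {(2020, 1): 2, (2020, 2): 1}
```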
A: And end date. And I think where this has come from, in the Evolution working group in particular, the history is, and I'm the historian since I've been here from the start: filters, I think, was initially intended as a section that covered all of this that is now under parameters and aggregators. And so, given that we've decided that filters is optional, perhaps the start and end date should go away, and then we don't need a filter section at all; it's optional, and I think we've covered everything that would be a filter under parameters and aggregators. I think we kept filters as an optional thing mostly for backward compatibility with metrics we'd already published. If I think about what this actually is... none of that is completely accurate, so we should... okay. So there's no need for the filter section whatsoever on this one, and I'm glad we made that decision.
A: If someone's interested in working on a different metric than the ones we identified: I went through this spreadsheet and said, okay, these are the ones where it looks like work has been done, or that we should work on. But if there are others, the whole community is welcome to work on them, and we can incorporate them into our release as well. We're not serving a control function by putting them in the minutes; we are only serving a sort of coordination function.
A: Yes, so what happens is that it's not discussion like we're doing right now. It's actually during the public comment period: for each metric that we are ready to release, we create an issue in GitHub and tag it as metrics release discussion, and there is a standard block of text that goes into that issue. So as people review the metrics that are release candidates, they make comments against the issue that's related to that metric, and the community addresses those questions, concerns, and issues.
A: So I changed the state of the five, possibly four if forks ends up being done in Risk, that we're working on for this release from considering to in progress. At the beginning of the meeting, you know, I think we have the forcing function of a release schedule now, and I examined with the group that was here at the start sort of which ones had at least some work done on them, or which ones we think are important to release now.
A: Yeah, yeah, I mean, I don't know. So I think it is bringing together a couple of existing metrics, possibly. So what do we call that, generally? Are those composite metrics, or... I think composite is a good word. I've been looking at the word synthetic. We have not... like, when we built Augur, Augur has what would be described as synthetic metrics inside of it, but we have not done a ton of work defining them in the community. But it looks like we're sort of evolving in that direction.