From YouTube: CHAOSS.Risk.July.14.2020
A: Sorry, Kate. Okay, so, yeah: this is the Risk meeting on July 14, 2020, and our first item of discussion is some feedback that we got. Unfortunately, Kate and I were not able to listen in following a presentation that we did on a community report focused on contributors to the Zephyr community, and it sounds like an additional hour was spent last week discussing sustainability in the context of that report. Is that right, Kate?
B: Yeah, they were talking about pull requests. Like me, they do have improved pull request behavior, and right now the community is working towards their next LTS, to basically prune down their backlogs and improve certain behaviors in certain areas, and so this report, I think, seeded some of the discussions. I will reach out to Kumar.
B: Then, since neither you and I nor Ivatsky were there, I'll see if he has any summary for us to share with the group, because he was leading the discussions of the pull requests last Wednesday.
A: Yeah, and then, just so everyone knows, the review deadline for the metrics release has been extended to July 31st, so there is still time to comment on existing metrics, and I'm going to suggest we try to put a couple of metrics under review, since most people wait till the end to review them anyway.
A: The ones that I think we have ready, if folks are ready to move on: Forks is going to be released under Common, if I understand the discussion from that working group last week correctly. Matt, do I? Yes, okay. And then the two that we have the most well developed are Pull Request Discussion and Stakeholder Influence, and we also have some work on Code Complexity.
A: I don't know what you're seeing... on your console? Oh, okay. So that would be the wrong thing to have shared; it's always a guess which of these little boxes is the right one. So we can see, for example, on one repo, that this is the number of repositories that have tags in pivotal.
A: I'm rebuilding the tags for Zephyr, which is why I don't have those right now, but we can see... what I'm trying to do is put together: here's the total count of this tag, and then here, the total number of repositories; this query is the total number of repositories that use this tag at one point or another. And I'm trying to get the logic down to merge those in a verifiable way, so that I know I've got the query right.
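The merge being described, combining each tag's total application count with the number of distinct repositories that ever used it, can be sketched in plain Python. This is only an illustration: the `(repo, tag)` event shape and the names here are assumptions, standing in for the actual query against the Augur database.

```python
from collections import Counter, defaultdict

def summarize_tags(events):
    """Merge two per-tag aggregates into one verifiable summary:
    the total number of times a tag was applied, and the number of
    distinct repositories that used it at one point or another."""
    totals = Counter(tag for _, tag in events)  # total count of this tag
    repos = defaultdict(set)                    # distinct repos per tag
    for repo, tag in events:
        repos[tag].add(repo)
    # One row per tag, so the two counts can be checked side by side.
    return {tag: {"total": totals[tag], "repo_count": len(repos[tag])}
            for tag in totals}

# Hypothetical sample: "bug" applied 3 times across 2 repositories.
events = [("zephyr", "bug"), ("zephyr", "bug"), ("net-tools", "bug"),
          ("zephyr", "enhancement")]
summary = summarize_tags(events)
```

Keeping both aggregates in one row per tag makes it easy to spot when the merge logic is wrong (for example, a total lower than the repo count).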
B: Do we have a feel for tags over time? Because with a lot of these things, the titles get added and then removed as they change, as they use them as an indicator for state.
B: Once we sort of establish the tags, there's a question of: what's the proportion of tags at a slice of time, but then also, what's being used in the flow?
B: Right, because what's sort of interesting: one of the things I'm studying right now on the Linux kernel is the use of Reviewed-by, Attributed-by, and so forth, whereas in GitHub, putting these tags on is the way some of that stuff is signaled. Yeah, and the question is: people will start off doing it; do they just sort of stop, or do they reinforce best practices in their own community?
B: And so, if we can look at it from that perspective: are the tags being used effectively? Are they signaling what they need? And if there are so many tags, is it worthwhile for them to basically focus on reinforcing a subset of the tags and using them effectively as a community? Will that help improve the dynamics?

B: Right, yeah. So get a focus on consistent usage, to build the practice better.
A: All right, okay, yep. That's kind of the update on tags; that work plods along. My Google Summer of Code students have kept me rather busy for the last month, as you may imagine. Have you got anything else you want to discuss regarding tags, Kate?
B: I'm just wondering, from the aspect of risk: are there certain tags that we want to flag as being useful in the risk space, or not? Because you could argue that some of the tag stuff might be Common as well. That's the only thing I'm wondering.
A: So we do have... I guess I did have some questions about the community reports, as we go through and do these community reports with Zephyr and with other communities.
A: We did a report for ..., and we also have got one for a new community, and those are going to be released today or tomorrow in a public way, so that others can use them against their Augur databases.
A: And so I suppose my question, when we start talking about the community reports and the areas of interest, is this: there's a collection of like seven to ten metrics or analyses that we've done in each of these reports, that answer different sets of questions. So I'm wondering if a community report is one thing, because these are reports that we can generate ad hoc, at will (or we're minutes away from being able to do that). We have them in Jupyter notebooks that can be transformed into API endpoints and sort of automated reporting, once we know what the repos are.
B: Yeah, I'd say that, ideally, we want the community reports to have certain key elements. The work that's been done until now has been targeted, based on questions that the communities are interested in, relevant for them, but I think we're missing the overview information about the community as a whole. Yeah, like the stuff that you were doing about commits, the commits over time. Say we're focusing on pull requests: okay, that's one dimension, but there was some of the work that was being done for the CI project.
B: Yeah, it was published as a white paper at the LF, I think, so I can actually get it; I'll bounce that to you. I think there was a subset. Oh, and on that note, I think they've gotten one more sem.
B: Anyhow, there's one other proprietary tool that's going to open their database up to that group, so there's probably going to be a refresh on what the key projects are, but that's something we can talk about in another forum.
A: Okay. Now, this report may be expanded.
B: Let's just look at what... if you're looking for a general community report, we probably don't want all the deep diving on pull requests that we're doing here. So, yeah.
B: Discussions about the community, pretty much, okay. I think it's a question of: if we're seeing a trend line and the community wants to engage to help try to correct it, what things make the most sense for them to focus on? What behaviors make the most sense for them to change, that are considered healthy? I don't think we've got a good sense of that yet.
C: So, just a couple of thoughts overall on the community report. One of the things we're working on with Annie and Brian, because they're helping with the community report, is that we would generate this one community report for whoever is making the request, and one of the questions we do ask is: would you like an updated version of this report at some time period later? Six months, a year, whatever we think could be sustainable within the CHAOSS project, to produce these reports.
B: I think seeing the temporal component prior to that first report, what the trend line is, is useful, and then, after that, I'd say probably quarterly is enough of a read; I don't think, say, monthly. We get that out of GitHub, okay, or wherever, fairly easily, but I think six months, a year, quarterly, something in that line. Seeing whether you have made an impact in the trend lines is what you're looking for, because that's certainly what I've been looking for with leper, which was: what have we been doing to actually start to improve? Like right now with Zephyr, they agreed to go to two independent reviews for each commit, and I'm curious to see whether or not that's going to reflect in the commit lines, right. So it's useful, though, if we can see after three months that, hey, it's causing issues: do we want to dial it back?
B: And I can think of a couple of communities that are probably worth working with, to see if they can do some course corrections; in particular, I think some of the new toolchains and some of the FSF stuff. Okay, please help.
C: Okay, that's helpful. And then the community reports: I think you had made the comment about PRs, right. With the community reports, we're trying to find that balance between kind of giving folks something that is useful, of course, and then also maybe wanting them to ask deeper questions. So the report obviously can't be like what Sean is doing with Zephyr; we just don't have that bandwidth, I'm guessing.
C: So what if we did something along these lines: the primary report is still a single page, for the communities that aren't in the position that Kate's in, that can actually handle that amount of data.
A: Yeah, I mean, essentially the amount of time they wait is a function only of the number of repositories they want to see that for. So Augur or GrimoireLab, any of the tools, can go back and collect the entire history, and it's just a question of how long it takes to collect that data. That's mostly a function of two things, the number of repositories... well, three things: the number of repositories, the number of commits, and whether or not there is more than one substantial commit in the history of the repository.
A: So counting that... like, we always have to count that once; there's always that moment where they move it, right. But when there are multiple of those, that slows down the counting process. So that's the only mitigating factor: in most collections that we've done, we can count everything in a week or less, even with 3,000 repositories. It's where you get into some of these large infrastructural projects, where someone has changed a ton several times, that the process just takes longer.
B: Yeah, the other thing, to Matt's point, is that once that history is mined and the trend is there, to figure out if an action has been effective you're then having to wait six months, yes, forever, before you get into this data set. Exactly, yeah; that's effective, right? Was that right, Matt, or am I mistaking it?
C: Yeah, and honestly, I'm being a little stubborn on the one-pageness of this report, just because... I want this to be, and I could be wrong, but the thought is: this is a PDF that's shareable, easy to understand by folks that are deep technologists in the community, but also perhaps managers in a company that have participants, and if you want to know more, we can always give you more.
C: I have a sketched-out page in terms of layout, but it's open as to what the metrics can be.
A: So, for example... I think the trick with trying to keep it to one page is really showing anything over time, and I think it's only by showing things over time that it becomes more useful than what you get on GitHub. For example, we could do one year of Zephyr, I suppose, for some of these statistics, but time requires space: if you're going to include a temporal dimension, it consumes space. And so these graphs could be smaller, right; this isn't a PowerPoint format.
C: And so it's just kind of finding that balance of useful and short and concise.
A: And I think we're meeting with Annie today or tomorrow... or Thursday, I think, our next meeting; it's an every-two-week meeting and we didn't meet.
A: The two other things I have are just: I can show you the community report on contributors, which is new since the last time, if you're interested, or we can discuss the couple of metrics that are moving toward release.
C: Either is fine. On the metrics, I'd recommend that you wait until after July 31st to have the metrics that you're talking about be part of the continuous cycle, because Kevin has kind of closed the website window, but the review period... okay, okay, yeah. So everything gets a month of review at this point, and so this is the period of not contributing more metrics.
A: I don't know if that makes sense to you, but yeah, yeah; the process has been evolving. I was hoping to throw a couple of metrics into the pool, since people do most of the reviewing at the end, but that's fine. We can wait until after the release and do these as mid-release metrics.
C: Yeah, okay; connecting the continuous metrics release with the regular release was a little bit trickier than I think we thought it was going to be, process-wise.
A: Sure enough. Our next meeting will be very close to the end of that review period, so I think we can get ready to put them out. Do we want to talk about them, or do you want to see the contributor report for the new community? Which is of preference to the group who remains?
A: Sure, I think it does. Okay, so this is the Zephyr contributor analysis, loading.
A: Loading... okay. So this is the report that Kate mentioned received a good deal of active discussion, apparently in a meeting that followed the hour that we spent with the Technical Steering Committee for Zephyr. So I think that suggests that the kinds of metrics that we're working on might be useful for a broader collection of people.
A: What we have in the report is the ability to categorize drive-bys on a community basis. In this case we took a drive-by to be anyone who makes one and only one contribution; that's easy. Repeat contributors can be defined as greater than or equal to n contributions; for this specific report, the n was two.
A: So if we look at this period, January 1st, 2018 until the end of May 2020, you can see that in the Zephyr project, 70% of people are repeat contributors. The caption explains that repeat or drive-by is determined not by the window of analysis but by the entire history of the repository. So if you made a contribution prior to January 1st, 2018, you would be considered a repeat contributor, even though that's not the window that we're doing the analysis in. Does that make sense?
A: So if I'm interested in a particular analysis window, whether or not I'm a drive-by or a repeat contributor in this version of the report goes all the way back to the beginning. So if I made one contribution in this window of analysis, I would still be considered a repeat contributor if I made a contribution in 2016.
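The classification rule described above, one and only one contribution makes a drive-by, n or more (n=2 in this report) makes a repeat, counted over the repository's entire history rather than just the analysis window, might be sketched like this. The function name and data shapes are illustrative assumptions, not the report's actual implementation:

```python
from datetime import date

def classify(contribution_dates, window_start, window_end, n=2):
    """Label a contributor 'drive-by' or 'repeat' for a report window.

    The analysis window only decides whether the contributor appears in
    the report at all; the drive-by/repeat count uses the repository's
    ENTIRE history, including contributions before the window opened.
    """
    in_window = [d for d in contribution_dates
                 if window_start <= d <= window_end]
    if not in_window:
        return None  # not active in the window, so not reported
    total = len(contribution_dates)  # full history, pre-window included
    return "repeat" if total >= n else "drive-by"

# One contribution inside the 2018-2020 window, plus a 2016 contribution
# in the history: counted as a repeat contributor.
history = [date(2016, 3, 1), date(2019, 6, 15)]
label = classify(history, date(2018, 1, 1), date(2020, 5, 31))
```

Exposing `n` as a parameter mirrors the point made later in the meeting: what counts as a "repeat contributor" is something each project can tune.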
A: Yes, and that's explained in the caption. In my experience... I'm looking for the slideshow.
A: This is right; this is for the main Zephyr repository, in this report's case. You could do the same analysis on a collection of repositories, and it's kind of up to us to decide if doing so on a collection means one report (which, for some communities, it might), or if it means a report set for each repository, or it could even mean a collection comparison. The Zephyr pull request analysis we did compared six repositories.
E: Is that something (and maybe this isn't the right place for this question) that we would want to give people control over if they request a report from us? Like, is this a community report, or are we going to say it's just one repository, and if you want more, that's another whole issue?
A: I think we want to at least offer a standard report, in the one-to-n pages, whatever we decide the standard is, for each repository. Either we wouldn't want to get into the comparison game in the standard report at all, or we would want to specify some max n of comparisons, like six; and running the comparison for six is just as easy as running an analysis for one. Okay.
A: So it kind of depends on how complex we want the inquiry, or the question-and-answer form, to be, and I think having some samples to share is helpful for people to decide some of these things. That's the piece of feedback that I don't know how to put into Annie's form, but I'd like to talk about it on Thursday.
A: Accordingly, yeah. I mean, my experience is that, without a temporal dimension, people look at what we have as a report and don't find a lot of value in it. So, however concise it is, I do think there needs to be a temporal dimension, so that people can see trends.
A: And then this is the mean response time for closed pull requests, with the observation that long-running pull requests are often rejected. The numbers at the top are the total number of pull request contributors per month; the repeats are in orange and the drive-bys are in blue.
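As a toy illustration of the statistic being shown here, the mean response time for closed pull requests could be computed like this. The list-of-datetime-pairs input is an assumed shape, standing in for the report's actual query results:

```python
from datetime import datetime

def mean_close_time_days(prs):
    """Mean response time for closed pull requests, in days.

    `prs` is a list of (opened_at, closed_at) datetime pairs, one per
    closed PR. Long-lived spans will pull this mean upward, which is the
    pattern the report flags: long-running PRs are often rejected.
    """
    spans = [(closed - opened).total_seconds() / 86400
             for opened, closed in prs]
    return sum(spans) / len(spans)

prs = [(datetime(2020, 3, 1), datetime(2020, 3, 3)),   # closed in 2 days
       (datetime(2020, 3, 1), datetime(2020, 3, 9))]   # closed in 8 days
result = mean_close_time_days(prs)  # mean of 2 and 8 days -> 5.0
```

In practice a median or a distribution plot is often more informative than a mean for this kind of skewed data, but the mean is what the report slide shows.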
C: Oh, I see: when I put it in presentation mode, it creates a different window. I see, yeah; we didn't see presentation mode, we were still in just the regular view.
A: Yeah, of course, because it created a different window; I apologize for that. I was trying to make it easier to read, but let's assume it's sufficiently easy to read and I'll go back to what I was showing you.
E: So, sorry to interrupt you: I'm not sure how this relates back to the risk piece of it.
A: So, risk. One of the elements of risk is: is my community likely to continue to draw new contributors in? From a Zephyr perspective, they are very interested right now in increasing the number of people who contribute to the project, because the number of companies depending on it, and its use, is growing at a greater rate than the number of contributors. So I think understanding the nature of the contributors for that project, which is in high-growth mode, is a high priority.
A: For other projects, there may be different metrics that they pay attention to. In fact (and that's a different project), Twitter was extremely concerned about ensuring the global diversity of their contributor base, and so those were the questions that we focused on with them. There's probably a family of things that projects are interested in, and increasing the diversity of pull request contributors is one that I think is pretty common in high-growth communities.
E: Would there ever be a case where that increases the risk? Because you have all these new people that don't understand, maybe, the context of the project; maybe they're introducing bugs just because they're all new people that don't have the history, or whatever. Is that ever a case?
E: Okay, interesting. So if there was a high ratio of mostly drive-bys, and not that many repeats, that could be seen as problematic, just because of the amount of energy and effort it takes to get those one-time things through.
A: Yeah; Atlero has done a lot of work with drive-bys, and there are many cases where they make small contributions that fix one small bug, and it's extremely helpful. Oftentimes they're people who are just fixing one small thing that they see broken, and it's affecting their critical path, and so they fix it.
A: This is the... actually, this heading is wrong.
A: I don't know why this heading is still there; that's an error. But this is drive-by and repeat contributors per month. So for each month on Zephyr, you can see the total number of people who make a pull request contribution, or really contributions of any kind (in this case, you'll see in the next slide, there are six different types of contributions that Zephyr wanted to count), and the total number of people making their first contributions.
A: The drive-bys and the repeat contributions: you can see that, just like the pie chart, this kind of shows you a monthly breakdown of how many repeats and how many drive-bys. What I was explaining, which you couldn't see before, is that there is a tendency for drive-bys to move higher and repeats to descend towards zero towards the end of any analysis period, which is simply a function of the way that contributions get processed at the end; for example, a pull request doesn't really count until it's resolved.
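The monthly breakdown being described, how many drive-bys versus repeats were active in each month, could be aggregated along these lines. This is a minimal sketch under assumed inputs: a mapping from contributor to their contribution months, with the drive-by/repeat label computed globally over each contributor's whole history, mirroring the report's definition:

```python
from collections import Counter, defaultdict

def monthly_breakdown(contributions, n=2):
    """Count drive-by vs. repeat contributors active in each month.

    `contributions` maps contributor -> list of (year, month) tuples,
    one per contribution. A contributor's label is global (whole
    history, >= n contributions makes a repeat), then each contributor
    is counted once in every month they were active.
    """
    label = {c: ("repeat" if len(months) >= n else "drive-by")
             for c, months in contributions.items()}
    per_month = defaultdict(Counter)
    for c, months in contributions.items():
        for m in set(months):  # count each contributor once per month
            per_month[m][label[c]] += 1
    return dict(per_month)

# Hypothetical sample: alice contributes twice (repeat), bob once.
contributions = {
    "alice": [(2020, 1), (2020, 2)],
    "bob":   [(2020, 2)],
}
breakdown = monthly_breakdown(contributions)
# February 2020 has one repeat (alice) and one drive-by (bob).
```

Note the edge effect mentioned above falls out of this shape naturally: a contributor whose first contribution lands in the last month of the window has had no time to make a second one, so late months skew toward drive-bys.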
C: Oh, but why is it skewing that way? Is there a release that's happening?
A: That's the... so Augur has recently included release data in the database, and so we're going to start running these analyses with release lines through here. But yeah, generally, when you see a peak in contribution, like in March of 2020 (we happen to know that was a release, as we know September of 2019 was a release), you see more activity at a release, and then less activity immediately following a release.
A: That is definitely something we discussed with the TSC; that's a good point, I had forgotten that point, but yes, that is definitely part of what's happening there. But yeah, if your first contribution is recent, then, as Elizabeth said, the likelihood of having made a second one is lower. If I go back to first-time contributors in 2017, the likelihood they've made a repeat contribution is different, and that's why parameterizing what we call a repeat contributor is something the project can do.
A: These are the six types of contributions that Zephyr was interested in. Commenting on issues and pull requests are two of the events where one might reasonably say, "I don't consider that a contribution; I just want to look at opened issues, closed issues and pull requests, or commits," for example. And commits are definitely a function of pull requests, except in some very isolated cases of core contributors directly committing code.
C: So one of the things I'm thinking about (and I know we're out of time) is, for the community report, obviously like what you're showing in slides two, three and four, and I'm sure five, and...
C: So what you're showing are obviously visuals that are taking the CHAOSS metrics, or CHAOSS-defined metrics, and combining them in ways that are more meaningful than just looking at the metrics in isolation.
A: Right; this is really, I guess, time, and issues and pull requests are separate metrics, and comments are separate metrics. Right, we're integrating... we're essentially taking... so there's a couple of ways to look at it. One is that contribution is a filter that has filters for all of these things in it, but we also have separate metrics for pull requests and issues and commits around each of these things as well.
C: I was kind of thinking about them as discrete metrics, but in fact maybe the better way to think about it is: here's a couple, like the view that you have in front of us right here. Here's a view that is a collection of these measures; it's a view that brings together a lot of the thinking that has come from the CHAOSS project. Exactly. And so then, if I stick with the one-pager, the one-pager would only have, say, four or five of these views, like what you're showing here, under the hood.
C: Yeah. And we don't want to show them as discrete metrics, because alone they're not super useful, but collectively they really start providing insight. So then the report is like that, for kind of these composite views, and then I like the idea of a glossary at the end, or something that says: here are all the discrete ones that are behind the scenes.
A: All right. So I don't know what's on the agenda for the 11 o'clock meeting; I will probably be in a car for it. Okay, I'll have my laptop available. I won't share screens, but these links are public, and if there's any of this that you want (I'm not saying you have to), but if there's any of this that you do want to share, or anything that you want to discuss during the meeting, feel free; I'll have it at my disposal.
A: This one is the... well, even... it's this kiosk; it's the community...