From YouTube: CHAOSS.Community.August.18.2020
A
So this is the August 18th edition of the weekly community call for CHAOSS. Welcome, everyone; I'm glad you're here. It's good to see everybody. The agenda is linked in the chat, but I will link it again just in case anybody missed it, and if you haven't done so, please add your name under attendees.
A
Really good feeling. I thought the dogs had peed on the ground, so I was actually glad that it was not that. Yeah, it's a little easier to clean up if it's just water, unless it's smelly, you know. Anyway, all right, sorry, I digress. Can you edit this part out? That'd be awesome, if you can.
A
Fantastic. Anyway, I hope everybody's doing well, but let's jump in and get started. I think it's a good idea to put our students up earlier in the agendas, because I feel like they were always getting the short end of the stick, which is not fair, and I don't like that. So we'll start out with our Google Summer of Code students, if there's anybody on that wants to give us a quick update.
A
I was going to say, I don't think there is. All right, fair enough. Just so everyone knows, we do link to recent updates for two of our students in every CHAOSS weekly newsletter.
A
So if you read that, and I know you are all reading it religiously, 100% of the time, you'll see two more updates every week. So that's on there if you are curious. Let's move along to Google Season of Docs. We're excited to have Jaskrat and Shoya as our two Season of Docs students, so thank you for participating; we're super excited. Jaskrat is going to be working on our community handbook, and Shoya is going to be working on the D&I badging documentation.
A
So we're really excited to have you both officially a part of CHAOSS. Anything to add to that, anyone?
D
Well, the very first thing: I'm very happy to connect with you all today, and thanks for giving me this opportunity. It's great to be a part of this CHAOSS community in Google Season of Docs. I really look forward to working with the project on the handbook for the CHAOSS community, and I'm sure I'm going to come up with some really interesting aspects of the handbook for the community. I just wanted to know how you plan to begin with this particular work, since we're going to have the one month for the community bonding period.

D
Well, I think this is going to be till the sixth of December, I guess.
B
Gotcha. Well, I think you are at a big advantage because you had done so much work exploring the community handbook, and you had been very open about it for so long. I mean, you were very transparent and shared your thoughts. And so I think it's a good idea that you mentioned that maybe we should meet as mentors, but in terms of community.
B
Yeah, I can send out an email to just find a time that we could all meet that would be convenient for you and other people as well, and we can just kind of talk about what that first month could look like. I'm guessing you can probably start on some of the community handbook work sooner rather than later.
H
Yeah, and Asa and I have a Thursday at 9 a.m. CDT already built out. We could also meet during that time, I think, if you want, if it's available for everybody.
D
Well, Thursday 9 a.m., is that IST time or central US time? What's your time zone?
H
All right, we are strong into our second pilot testing right now. We've actually gone through the whole process with one event already, so let me give a little bit of background. We are doing events only for the second pilot testing, and probably for launch, so that we can focus on quality rather than quantity, and we are working on updating and getting people's feedback on how the process works and getting ready for launch at that point.
H
So we're really excited. We are a little short on reviewers, so we'd like to ask that anybody who wants to be involved and has the bandwidth for that email me at the email I've included there. We've already got a lot of really good feedback.
H
We did a meetup event, kind of a mock event, based off of the Beenz internet currency from the late 90s, so that was kind of fun. If anybody wants to submit any events, just go ahead and send me an email. We're really excited to get ready for launch. We're going to be considering virtual events after the pilot testing, before we go into launch, with maybe a different set of metrics.
H
Well, definitely some different metrics, but we have a lot more to prepare for, and we're getting ready for that. Launch about mid-September is what we're looking at now, and that's all I have for badging at the moment.
B
Okay, because I think that, again, keeping it super simple is really important for people, whether you're a reviewer or a submitter.
B
And I think I had seen, as a reviewer, just a few places where there was a little misalignment between what I was supposed to be reviewing and what seemed like was being asked for.
B
You know what I mean? Yeah. And then, I'm looking at Elizabeth: if we're going to do the mid-September launch, say a month from now, where we're looking at kind of doing a soft start on this, where events could actually submit a request for a badge, we are going to need to recruit reviewers, and I mean we probably need to go outside of the CHAOSS project.
A
Okay, Matt S., I have only done the submission, I haven't done the reviewing, but is there documentation for how to review, obviously, and do we have an application process for the reviewers to go through?
H
We've got kind of a guide on how to; you just add your name to the list, kind of like the original process for submitting a project or event would be. You just add your name to the list, and then once that pull request is merged, you'll be in the bin for being assigned to different events and things like that.
A
Yeah, because we'll need to recruit reviewers before we launch to the public to submit their events. So at some point we can maybe set up a timeline of when we can start to recruit reviewers, and I'll do whatever I can to help as far as outreach and contacting people.
H
Okay, they're not that high priority. I'll send you another email once I have that finished. Okay.
H
We have defined qualifications, but at this point it's been more of a hand-picked basis. I think qualifications are something that's going to come with the pull request process. That might be a good thing to define during our next meeting about this as well.
B
Yeah, so Matt, could you maybe think about the timeline that Elizabeth had mentioned, and could you also just draft an outreach email that we could send, one that would point to the become-a-reviewer process that you were talking about?
B
Okay, so the process is that somebody obviously submits an event, and then reviewers are assigned to the event. Is there like a meta-reviewer? So let's say Elizabeth and I do the review and we're totally in line with one another; that's pretty easy. But let's say that Elizabeth was more diligent than I was and I missed some things, so we're out of alignment with one another. Is there...?
H
So we have a couple of commands that are run: one to create the badge, and one to close the issue and move it to the board, and that is done by the moderator. We are also under the idea that the moderator handles discrepancies as well.
H
The CHAOSS people are under more of the role of maintainer, but there can be some crossover with moderation as well. The moderator is definitely a hand-picked role right now, but it's a role where this person understands D&I pretty well and they want to be helpful in the situations that require leadership.
B
...a long time, and then, as the reviewers have questions, the moderator can either answer the question directly or know how to get in touch with, say, you, or they know how to solve the problem. And if we had moderators from external, that might be a role that they just wouldn't be able to fill, yeah.
H
At the moment, we're looking at having two until we get an indication that we need more; if we have five events open at once, we're going to get more than two moderators.
B
Okay, well, I'm happy to help in that regard, okay, for sure.
B
And, I might have some dog barking going on in the background, do we have... We probably want to have a web page; it could be pretty simple, and it might list who the moderators are.
H
Our plan for this launch is to have a page that gives an overview, kind of like a lot of other pages, like the Augur page or the GrimoireLab section page. It gives an overview, it gives ideas on what's going on, like who's in what position as far as moderators go, but then it will also say "click here to apply for an event badge."
E
All right, I have one. So, yes, I was kind of thinking, for the applicant, seeing the reviewer's comments, but not the reviewer's checks. So let's say the reviewers are to check that this does not actually satisfy this; I think the applicant should not see the checks but can see the comments. I don't know if we have that scenario. Okay, say Matt actually did not check mine; marking his name could probably cause a conflict or something, I don't know, if you get my point.
H
Oh, well, our fix for that is that the checklist can't be checked or unchecked by people who aren't contributors on the project. So if I were an applicant from outside of the organization, and I come in and submit for a badge, I can't edit those review checklists as they go; it's only the reviewers that can do that part.
I
Ruth, are you also asking if this is a blind review process? Like, do the...
H
Okay, sorry for misunderstanding there. That's kind of the one aspect of the project being open and transparent: you will be able to trace it back to who said it and what they said. But I think that's okay, at least for the time being. Do we have any other thoughts on that?
B
So, I guess, a couple of points. Thanks for bringing this up, Ruth. In terms of being able to see who the reviewers are, we've always pushed for being open and transparent, and the idea is not to blame somebody for not checking something, but to work with the reviewer and the applicant to either find a solution to something that wasn't checked, I guess in this case, or to have an opportunity to easily speak to the reviewers.
B
Yeah, I think that's a very good idea. I like the idea of it: it's basically a rubric, right? The checklist is a way for the reviewer to evaluate the application and be transparent to the applicant as to how the review is going to occur. Totally, I think it's a great idea; I have no problem with that.
B
Again, it's meant to be developmental, so the more we can start by knowing each other and working towards the same goals, the better. Because if what the applicant thinks they're doing and what the reviewer thinks they're reviewing are off a little bit, we have to bring those together ultimately anyway in the process, and so the sooner we can get those aligned, the better.
A
I would also maybe just add that it might be a good opportunity for moderators to step in and just, you know, smooth that over if possible. So that might be an additional task that we add on to that, because I can foresee that becoming an issue with someone who feels very strongly about their event and takes it very personally, instead of in the spirit with which this review is supposed to be given. So yeah, I think that's a really good point.
B
It makes me think: should we have a code of conduct for the review process that states something along the lines of, this is a developmental process; this is meant to improve the ways that we all think about diversity and inclusion in our events, and then subsequently our projects; it's expected that you behave in a professional manner. Something along those lines. I don't know quite what that would look like, but...
A
If that is brought up, or if someone challenges and wants to have an appeal, or have more reviewers or something, I don't know, it just might be nice to work that out prior to that becoming an issue.
H
Yeah, something that I want to note there is that we're working out the semantics of that process, but we're building that process right now. It begins with the result being created for the badge and the badge being created, and then the moderator asks the user, the submitter, the applicant.
H
If they want to continue with this one, or they'd like to talk about it a little more. That's kind of the beginning of that: we want to prompt them for whether they have feedback, and then, if they're okay with the badge and they agree with it, they consent to it. Then they have the badge at that level for this year.
H
But it's kind of a long-standing process right now; we want to make that either more automatic or shorter.
B
Matt, do you have a sense of the metrics that are used as part of the process, the number of them that are subjective measures? So, for example, the code of conduct being available online, that's kind of a yes/no kind of thing. Even things like the event providing diversity access tickets seem clearer to me.
H
There are a couple of line items in the review checklist that I can think of, like one that you had actually brought up in the review process: we don't ask for their process, but we had asked them to explain.
H
We'd ask the reviewer to say if they had a good process for picking their speakers, for speaker diversity. But we're working out a lot of those in the new branch, and as we see the problems, we're picking them out. But are you saying we want less subjectivity in the review process?
B
Okay, yes, it was based on this conversation; it seems like that would be easier. I mean, if the entire thing was a judgment call by the reviewer, that would be pretty tricky.
B
You know, if we start saying things like "your selection process is good or bad."
B
That would be pretty tricky to determine. Okay, we'll take this into account too. Yeah, and I think my suggestion would be to definitely err on the side of just objective measures, and over time it would probably be easier to open them up a little bit if we feel like it's just not providing enough insight.
B
That sounds good. I don't know what other people think about that.
H
I did notice, Tola, I see you here, and I think this is probably the last day of your Outreachy internship. I just want to say I appreciate you for being around. Do you have anything you wanted to say about that too?
J
Yeah, I'm actually...

B
Yeah, so that's me. I'm just going to show something really quickly here. This is what the community reports would start looking like.
B
This white spot over here would be the logo for the community. So this is the community report of the CHAOSS community, and I didn't feel like putting the CHAOSS logo there.
B
So this is from Georg. Georg had shared this earlier, and I'm just taking the text that you had provided, Georg, for both of these visuals. I'll have to play with spacing a little bit; I may end up shortening some of these text blocks, if that's okay. Yeah.
F
I did have that impression. We need to play with the white space and the formatting.
B
We can probably squeeze this text in to be the size of the figure and kind of bridge them together a little bit better. But with these horizontals we have basically two choices: one is this kind of layout; the other is a visual with a wide description below, where we just basically have four rectangles going down.
B
And then, if you recall, we also wanted to create, and I'll start working on this, a glossary of terms, or at least a glossary of the metrics that are appearing in here.
B
So, obviously, commits would be one of them, closed issues would be another. Just kind of pointing people to what the metrics are that exist here.
B
I think the thought was, because Sean's are going to be these heavily composite metrics, so just helping unpack that a little bit. But if we can take care of it in the text, that would be great, because then we could stick more to the one-pager, which I am absolutely in love with. As all of you know, I don't like more pages. So, I mean, if you think that what I had shown was sufficient.
F
On the logos, GrimoireLab needs their GrimoireLab name; if possible, add it underneath or next to it.

F
Then do you have enough?
B
Totally fair, yep, good idea. I'll play around with that too. Maybe next time I can come up with a variety of different versions.
I
Also, I'm sorry, it's not completely clear to me: I know that the project logo is going to be in the top left-hand corner, but even with that project logo there, it's not completely clear to me where these metrics are coming from. Could we include maybe a list of URLs for the repos that informed these visualizations?
B
You know what we could do: there's a request that comes across, so they're going to be filling out that form, right, and it creates a Salesforce record that actually contains all of the repos. We could actually just print out that record, because that has the repos on it. So this is basically saying, here's the record that you gave us, here's the report.
B
Okay, and then it's not something we have to do other than just print off that email and virtually staple it to the back of this thing. Okay, and then, I guess, Georg, I had one question for you on this: I know that you were using Cauldron to produce this.
A
Okay, we have eight minutes left. The to-do survey, it looks like we want to actually work on that. Is that right, whoever put that on here?
B
Just for the people who are familiar with the to-do survey, one of the things to think about is what we actually get out of doing something like this. It takes work and we're going to collect data; is it just to report back the data? You don't have to answer this now. Is it just to provide the results? Is it to actually change the way that we work? I don't know.
A
Okay, so the next item, I'm pretty sure Sean put on here, but I don't think he's actually on the call.
I
...of that discussion. So the spreadsheet, as we all know, is one of the few places where all of the working groups collaborate together.
I
One of the issues that has come up is that some metrics are duplicates that are being handled in other working groups.
I
So in the Risk working group meeting this last week, we were talking about how we handle those metrics that end up being identified as a duplicate that's being worked on in Common. Do we want to keep them on our list?
I
Is there the possibility that this metric is going to appear again on the list at a later date if we start deleting them from the list? So we started talking about, basically, tags or categorizations that we could possibly add to some of these metrics. One that we talked about was possibly a "duplicate" tag where, if the metric is a duplicate of a metric that's occurring in another repo, we can just mark it as a duplicate, but we want to keep it in that spreadsheet so that it doesn't come up again at a later date; we understand that this is a duplicate that's being worked on elsewhere. The other tag that we talked about a little bit was possibly an "under revision" tag.
B
So, if anybody would like to comment on how much prettier the spreadsheet looks, you are more than welcome to do that, because I spent time making it cleaner. Look at that, isn't that nice? Gorgeous, yeah. Well, I put the green ones at the top, right; they were all scattered throughout, and it was driving me crazy. So, do you see these in rows three and four?
B
I felt like it was getting complicated, and I felt like we were always just one edge case away from needing a new drop-down thing. One of the nice things about the spreadsheet is that we can bring it up in honestly any working group, and I think, from my experience, everybody understands it really quickly. And so my only hesitation for adding more drop-down stuff, not that it's incorrect or a bad thing, is that it creates more.
F
I agree that it makes it more complicated, and looking at the nature of those gray items, they are metrics that, in my opinion, shouldn't even be listed in the list anymore, because they're not things we're actively working on; they're not things that we are planning to advance or release. They've moved on; they live somewhere else now, and I don't think this spreadsheet is where we need to keep track of the history of our work.
G
Or even, many times there is a metric which is important to have in the focus area, but that is being used or developed in another working group. But if we don't include it in the list, it will keep on popping up: okay, this is the metric we are missing and we need to have a focus on it. So, instead of just finding out whether another working group is working on it, we just have a cross-reference that, yes, this is a metric.
G
It doesn't need to pop up again and again in the conversation; it has been worked on in that area and we can cross-reference it. This is how it came up: lines of code is an Evolution metric, but for Risk it is very important to have lines of code. If we don't list it here, it will again come up in a future conversation that, okay, we need to look at lines of code and there's no metric for it.
G
The consideration is, we are going to work on it, but that metric is already worked on in another group. We are just cross-referencing it here that the work is done; it is there, and we can just refer it over there.
A
I think that, as the list grows, it's going to be harder to remember which metrics are in other groups. So, to that point, Bernard, I personally agree that it should have a pointer, just as a reminder, and maybe even put it under its own section at the bottom, just like "other metrics" or "archived" or something.
A
Right, right, just because it comes up a lot, and as we get more contributors that don't have the historical reference or context, then, yeah, you're right, these things keep coming up. But it's a good reminder that, oh yeah, that's right, that was over in Risk.
B
All right, that makes sense, so the cross-referencing. I think there is also an issue, and I know we're over time, but there's also an issue of historical conversations, which I think is different than cross-referencing. Is that right, Kevin?
I
We talked about that; that's part of the conversation of, do we want to cross-reference it and add it as a duplicate metric and point it to the other working group, or do we want to delete it from our list? And if we delete it from our list, then we lose that conversation. So that's another argument in favor of just tagging it as a duplicate, or cross-referencing it to another working group, just to maintain that this is a communication inclusivity metric.
B
Okay, so, I guess I'm becoming more comfortable with it, as long as it's a metric that, as Georg pointed out, is kind of moving through the chain towards release, as opposed to just a dumping ground of metric ideas, like in prior conversations, because we had a repo that was called GitHub.
A
Alternatively, we could put in a process where we say, okay, anytime we have a new idea for a metric, first we have to search and see if it's already been done and if it's already happening somewhere else, and that's just part of the process that we do. Maybe we're doing that already, I don't know, but maybe we make that more of an ingrained thing, that we just search first.
I
Yeah, like a human task, all right. And to Matt's previous point, also, this spreadsheet shouldn't be a collection of just ideas that we have for metrics. These should be metrics that we've given some thought to before they're added to the repo, right? It shouldn't just be a bucket of ideas.
A
Okay, well, we're over time, so we'll go ahead and end. I guess we'll sort that out in the upcoming weeks. Thanks, everyone, for participating and for coming to say hi, and we will see you next week. Bye, everybody.