From YouTube: CHAOSS.Common.July.23.2020
A: Hi everybody, and welcome to the common call. If you're watching this video, you have no idea what time it actually is and how much time we've been spending prior to starting this, but anyway, super cool. I will share the minutes here. You should have access to those. If you could add yourself to this shortened meeting, that would be great. Thank you for the happy birthday wishes.
A: I know my spot on this call, so, cool. So, for today, I think we had a couple of outstanding action items. Daniel and Dawn aren't on today, so obviously we'll be tabling those action items.
A: Work on them in parallel, because, like, I think that's just a screenshot that you could probably grab pretty quickly, yeah.
D: Yeah, I've been, yeah, I spent most of yesterday and last night until about midnight tweaking the Augur community reports, and I have a meeting with my team about that after this, okay. So, just to give you an idea, let's see: language distribution within a project, implementation of time to first response, yeah.
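
As a rough illustration of what the time to first response metric computes, here is a minimal sketch; it is not the Augur implementation, and the function and variable names are hypothetical.

```python
from datetime import datetime, timezone

def time_to_first_response(issue_created_at, comment_timestamps):
    """Time from issue creation to the earliest later comment, or None if no response yet."""
    responses = [t for t in comment_timestamps if t > issue_created_at]
    if not responses:
        return None
    return min(responses) - issue_created_at

# Hypothetical example data
created = datetime(2020, 7, 1, 9, 0, tzinfo=timezone.utc)
comments = [
    datetime(2020, 7, 2, 14, 30, tzinfo=timezone.utc),
    datetime(2020, 7, 3, 8, 0, tzinfo=timezone.utc),
]
print(time_to_first_response(created, comments))  # 1 day, 5:30:00
```

A fuller implementation would likely also exclude the issue author's own comments and bot accounts before taking the minimum.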
A: Okay, I think probably the biggest thing for today is to take a look at whether there are any comments on the metrics that are under review, and if so, kind of look through those. I don't know if there are; I didn't look ahead.
A: Okay, so we have, you can see I'm looking at the common issues for the release candidates. We have three release candidates at the moment, so we can, one of them, time to, no, time to first response. Where is time to first response, where am I putting that? It's, I'm obviously.
D: That, that makes perfect sense, about a month, yeah, okay. Incompletes.
A: Okay, so, in terms of answering things for the release coming up.
A: Yes, so as far as this one goes, anybody, comments? Moving on, it got a heart and a "looks good to me" from Matt. So I like, like.
E: June, time to close was part of the continuous release. Oh.
A: Don't start with me. I apologize for causing you to make me apologize. Yes, let's continue with this forever. So I'm guessing this one is not gonna have any real comments at this point, so, looking good.
D: And what happens in GitHub and GitLab, that, although functionally they're serving the same purpose, conceptually what we would be measuring are just completely different things, and we need to think about, conceptually, what are the measures that are useful for Gerrit and other things that maybe are like it. But you can't compare what Gerrit does; like, if you created a comment metric with Gerrit data, there's really no useful comparison with that same metric using GitHub or GitLab data.
D: There's just not, and we'll have this discussion in Evolution, but I think it's going to have to be a different metric, and we're going to change reviews to pull and merge requests, and do that. That's my opinion; we'll see what people talk about and how it's discussed at the Evolution meeting. I'll stop talking now.
A: And so, to that, to that also, Kevin, it looks like this is called reviewing code. Do you see that on my screen?
D: You know, as you move through that onion of types of contribution, your role in the project evolves, but that changes the type of contributor you are, right? Yeah, so, like, you always start as a drive-by in a sense, because you always make that first contribution; you're a repeat contributor after some level of continuous contribution over time.
C: Types of, examples, types of contribution, not the contributors. Also new contributors come in, but types of contribution also evolve; like, a project is now more focused on the development side, and when it matures, more focused on the governance or documentation side. So the types of contributions also evolve over the period of time.
D: Yes, yes, the quantities of those types of contributions evolve over time. I guess I was arguing the nature of them doesn't evolve over time, whereas the nature of the individual contributor, I think, does evolve over time. You're right, the project's characteristics evolve in the same way that the person does.
A: Hold on, so we have Types of Contributions, which is under review. We have Contributors, which is a released metric, and Contributors is about the names of people. It's like authorship; it even says so somewhere, authorship, somewhere in there. Oh, "who are the contributors," that's who it's about: the person's name, basically, collect author names, which is not the type of contributor.
F: Quick, sorry, can I ask a quick question about this? I'm curious how you would automate or pull that data from somewhere, of, like, what category someone falls into. Would it be based on the type of contribution that this person has made, or, like, how? How would we programmatically define that?
A: From what I could understand, there was like a self-reporting tool that can capture that data. So I would watch it, yeah, and I think that's what Matt is trying to show here in this visual, that we could pull that data as recognized participants in a project.
F: Some definition, but yeah.
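
To make the "how would we programmatically define that" question concrete, here is one minimal, hypothetical way to bucket a contributor from their contribution history; the category names, thresholds, and function are illustrative assumptions, not a CHAOSS or Augur definition.

```python
from datetime import datetime, timedelta

# Hypothetical cutoffs; CHAOSS does not define these values.
REPEAT_THRESHOLD = 3               # contributions needed to count as a repeat contributor
ACTIVE_WINDOW = timedelta(days=180)

def classify_contributor(contribution_dates, now=None):
    """Bucket a contributor as "new", "drive-by", "repeat", or "inactive"
    based only on when they contributed."""
    now = now or datetime.utcnow()
    if not contribution_dates:
        return "new"
    recent = [d for d in contribution_dates if now - d <= ACTIVE_WINDOW]
    if not recent:
        return "inactive"
    if len(contribution_dates) < REPEAT_THRESHOLD:
        return "drive-by"
    return "repeat"

# Example with made-up dates
dates = [datetime(2020, 5, 1), datetime(2020, 6, 15), datetime(2020, 7, 10)]
print(classify_contributor(dates, now=datetime(2020, 7, 23)))  # "repeat"
```

Whatever the data source (commits, issues, reviews, or a self-reporting tool like the one mentioned above), the bucketing step would look broadly like this; the hard part is agreeing on the categories and thresholds.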
E: Those really feel like tasks, though; those aren't really types of contributors. I think a contributor is going to contribute through multiple different tasks.
B: One of you, I know that over at GNOME there's a new project to look at scalable onboarding, and they are, as part of their work, defining the roles that are in the GNOME project. So maybe that's something where we can borrow from.
A: Okay, okay, so then the last thing is the release notes. Kevin, was there anything here that needed to be taken care of?
E: And maybe add, maybe add that it's for the next regular release, which will be in, for the next January. Continuous release, it's for the continuous release, so, yeah, it will be on the continuous release, but it's going to be for the next regular release of January, right? That's what.
B: Period, on the continuous metric release cycle, the way I understand it is we put it on the website when we start the review period, and then it stays on the website until the next release, and then.
D: I guess, I guess the trick is something else. If, sorry, if there's a continuously released metric that people are really against, that, you know, receives a lot of opposition, this hasn't happened, but if there's a continuously released metric, I guess this 30-day period formally or informally gives us a policy-based reason to rip it down, as opposed to making it sit there for five months while we wait for our next release.
E: You can, you can respond to those comments and make edits during that review comment period.
A: But if people want more information, great. Or do we want to create ways to filter through those? You see what I'm saying? So we can either just leave it like a published dictionary and be like, whatever, you just work your way through it, and it's not alphabetical either, so good luck; or we create filters.
E: I was kind of, I kind of think we need both, right? We need to have the ability to see all of the release metrics, but we also need to have the ability to filter and, okay, visualize the metrics in a very consumable way.
D: That leads people to look up a reference section for an alphabetized list of atomic metrics, which will be alphabetized by the metric or, basically, maybe categorized by something. But the atomic metrics will be critical, because they're the shared definition that was missing before CHAOSS, but the stories will become how people decide to start understanding what we're doing.
A: So I will, so, that's the seed. All I'll say is, I will lean towards simplicity in how we filter and show things. I find that surprising, but okay.
A: Weird tick that I have, so, anyway.
E: Point, even if we're striving for simplicity, as this list gets longer, the complexity is going to increase, so.