
From YouTube: CHAOSS Evolution, September 12, 2019
Description
CHAOSS Evolution, September 12, 2019
A: I think there are a couple of things you can do in terms of identifying evolution metrics. One is identifying the metrics that are already in place in Augur and/or in GrimoireLab, and it might actually make sense to try to map between the two, so whatever those evolution metrics might be that are in both tools.
A: Providing insight on evolution, yeah. Number two would be, you know, where Augur and GrimoireLab overlap, and then perhaps things that GrimoireLab is doing that Augur is not, and then from there identifying which of those metrics are actually evolution-related metrics. You know, so you might throw out, I'll just pick something like age of pull requests; we think it through and we're like, actually, that's maybe more appropriate in the Value working group. So maybe that one then gets passed along, right? I'd brought it up as an evolution thing, but maybe it belongs there instead.
B: We need to start, okay. Actually, in our API docs we have a category called experimental, and I am creating a list from that API right now; give me about three minutes to finish it. So all of a sudden you'll see a giant piece from Sean, because it's just easier to do it that way. I didn't want to paste 400 lines of API docs that I was just going to edit out and then have to go back through. I love it!
B: Okay, so you have a table. I don't know how my stuff is going to paste into this table, but let's find out. It pasted into one row. I could put it in Excel. Hmm, I just want to keep it simple, that's all. Yeah, I haven't figured out how to paste without pasting one row at a time; there are only 14 here, so maybe I'll just paste one row at a time, and actually there are really only seven, because we have these organized.
F: Just for clarity's sake on my front, I guess: are the metric releases backwards compatible, in the sense that once something is in a released suite it's a metric and it's not going to go away? Or is it like we reserve the full right to change the composition of the metrics, right, and renew them every time we release? You know, is this list mostly only going to grow, or could we say, um, we don't think this is evolution anymore, and then throw it out?
F: So, I suppose another question would be: how are we dealing with the history? Are we just going to overwrite it and say this is now the new version, or are we going to, for every metric, say, okay, here was the version one definition, here was the version two definition, here's the version three definition?
B: I think there are some cases where, for example, I've opened an issue in this working group where we want to add a little bit of clarification. Mm-hmm. I think we actually closed and fixed that one, too, to clarify what was meant. So sometimes the definitions end up with a little bit of ambiguity, and when we've gone to implement them they've been interpreted in different ways, and so I think those are the kinds of changes that we welcome.
A: Which means sometimes we have to, yeah. I don't think we want to change fundamentally what the metric is about. I guess my point was that sometimes, as an example, for one of the Risk metrics there's been a request to provide a better picture of what a software bill of materials might look like, and so in version 2 we can update that picture. Okay, that's all right. Those are the kinds of changes I'm thinking of, yeah.
F: Do we think we need to go back and, again, should we just focus for now on... I mean, for now I think we should just focus on the new ones that we're doing, because we have a lot of work to do there. But apart from that, what are we focused on, you know? Is it more that we want to figure out more metrics to add, or to make the ones that we already have really, really detailed?
B: I think part of it is that for Augur and GrimoireLab both, it's helpful to say, okay, you know, we provide these metrics, so ostensibly the people we work with have said they want these metrics, and so creating formal CHAOSS definitions around them is helpful. And it's helpful for people trying to consume CHAOSS, because...
B: Yeah, yeah, I was too busy making sure that I found the definitions that we had for the ones we've implemented, and of course there is no canon; these are just how we happened to define them, right? Like, whatever we call these metrics, we can decide. Some of these are filters on other metrics that already exist, for example.
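A minimal sketch of that "filter on an existing metric" idea, assuming a flat list of contribution events; the field names and the reading of "new contributors" as "first contribution falls in the period" are illustrative assumptions, not an agreed CHAOSS definition.

```python
from datetime import date

contributions = [
    {"author": "alice", "date": date(2019, 3, 1)},
    {"author": "alice", "date": date(2019, 9, 2)},
    {"author": "bob",   "date": date(2019, 9, 5)},
]

def contributors(events, start, end):
    """Base metric: distinct authors active in [start, end)."""
    return {e["author"] for e in events if start <= e["date"] < end}

def new_contributors(events, start, end):
    """Filtered metric: authors whose *first* contribution falls in [start, end)."""
    first_seen = {}
    for e in sorted(events, key=lambda e: e["date"]):
        first_seen.setdefault(e["author"], e["date"])
    return {a for a, d in first_seen.items() if start <= d < end}

period = (date(2019, 9, 1), date(2019, 10, 1))
print(contributors(contributions, *period))      # {'alice', 'bob'}
print(new_contributors(contributions, *period))  # {'bob'}
```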
A: Folks, yeah, it would be worthwhile, I think, coming up with a consistent way to talk about these. So in my mind, when I see an X by any of those metrics in that table, along with it comes the question: does that mean that Augur and GrimoireLab are actually deploying them the same way? Technically they're different, mmm, the engine under the hood, yeah. But the way that metric is understood should be the same in both. Yes, yeah.
D: Shall I... no, go ahead. Yeah, I just saw that we have Daniel joining as well. No, no, Daniel, yeah, I'd invited him and Marika. So to clarify, what we're doing right now is in the minutes: we started a table in the doc where we want to add all of the metrics that we already have implemented, whether they are defined in the working group or not, and so where you can help the most is to add at the bottom of the list all of the metrics that, you know, GrimoireLab already provides.
B: I mean, so, for example, there are some metrics that likely span repo groups in our API. We classify the fork counts, the lines of code by computing language, the license counts; we classify those under risk, and I think some of those are proposed under Risk. However, it would be completely reasonable to, you know... I think we know at the end of this process we're going to end up with a matrix where some of the metrics are useful across working groups.
B: It's just a question of which working groups are actively working on defining them right now, and I think probably that's an activity for Matt and me and others. Thank you, you're now the co-chair of the CHAOSS... well, oh, he was the only candidate, so I didn't ask him for the results. I mean, Putin doesn't lose either, I don't think. No, no, that's a bad comparison; I'm not trying to draw that contrast in any mean or controversial way.
A: For this second, even for version 2, we won't capture everything, because there's just going to be a limit. Every metric that is below those already released on that table is going to require work, yeah, and so the reality is, for version 2, we may expect about the same number. What was it, 2, 4, 6, 8, 10 in the first release? Mm-hmm.
L: You know, these kinds of things, the very name of the data source, might mean something, but then at the same time, if we go through each of them, for instance Bugzilla, this is a CSV file where you can see all of the fields that we have for each of the data sources, and each of these is an index in Elasticsearch. So potentially each of these fields might be visualized in an evolution chart, but we want to pick really high quality metrics for this.
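A minimal sketch of turning one of those Elasticsearch indices into an evolution series, assuming a GrimoireLab-style enriched git index; the URL, index name, and field names ("grimoire_creation_date", "author_uuid") are assumptions that vary per deployment, and "calendar_interval" requires Elasticsearch 7.x or later.

```python
import requests

ES_URL = "http://localhost:9200"  # assumption: local, unauthenticated Elasticsearch
INDEX = "git"                     # assumption: enriched commits index

query = {
    "size": 0,
    "aggs": {
        "per_month": {
            "date_histogram": {"field": "grimoire_creation_date",
                               "calendar_interval": "month"},
            "aggs": {"authors": {"cardinality": {"field": "author_uuid"}}},
        }
    },
}

resp = requests.post(f"{ES_URL}/{INDEX}/_search", json=query).json()
for b in resp["aggregations"]["per_month"]["buckets"]:
    # month, commit count, distinct authors: raw material for a growth/decline chart
    print(b["key_as_string"], b["doc_count"], b["authors"]["value"])
```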
A: From those, there may be five to ten fields that would be evolutionary in nature; they would kind of help reveal or provide transparency on the growth, maturity, and decline of a project. Yep. Yeah, I think that's exactly what this exercise is about, and so at this point we really just need your best discretion.
L: These are kind of the big areas we can analyze, and this includes data sources such as Git repositories or Gerrit repositories, or basically the development and code review process, and we have some others for ticketing systems, where we include Bugzilla, Jira, Redmine, all of these, and we have some for CI/CD, and then we have some others for, I don't know, social networks, perhaps Meetup, and things like Twitter. This is tough.
L: So if we are, let's say, a small project and we start from scratch, perhaps we have our GitHub account and that's all. But then, if we want to start growing, from an evolutionary point of view the infrastructure we need is like, okay, probably we need some communication channels. So what are we going to use? What do you think about Discourse, mailing lists, etc.? Something like this has happened to us: we started to grow in the number of mailing lists, so we added mailing lists, right, and then we decided to go for whatever came next.
A: Yeah, I mean, I think honestly, as we've often said in the past, the work in the CHAOSS project is meant to improve transparency for people; we're not going to solve everything. So if we can articulate even just three metrics or five metrics, it's a positive step in the right direction. Hmm. So, whatever those might be, perhaps the ones that are currently deployed in Augur; I see that Augur has a list and it looks like there is some overlap, if I look at this. Yeah, I'm sure there is.
L: I mean, it's not going to work for all of the data sources, but perhaps we can add, like, the data sources supported. Yes, just a comment: for instance, number of contributors. I don't know your specific definition of contributor in this working group, but if we assume that a contributor might be someone doing whatever in any data source, then we can say we can measure the number of contributors, yeah, across all of these data sources. And then we have a list of them.
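A minimal sketch of that definition, counting a contributor as anyone who shows up as an actor in any data source; the per-source event shape and the "author" field are illustrative assumptions, and it presumes identities are already merged across sources.

```python
def contributor_count(sources):
    """Distinct contributors across all data sources."""
    people = set()
    for events in sources.values():
        for event in events:
            people.add(event["author"])
    return len(people)

sources = {
    "git": [{"author": "alice"}, {"author": "bob"}],
    "github_issues": [{"author": "alice"}, {"author": "carol"}],
    "mailing_list": [{"author": "dave"}],
}
print(contributor_count(sources))  # -> 4
```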
F: I think that's pretty much it. If we see them do anything on GitHub in a repository, we add them as a contributor, even if it's just commenting on something or opening an issue. So I think our definition is actually pretty much on par with somebody doing something in a data source, because our data sources primarily are, again, Git and GitHub, and we just conclude that everybody who is doing something, you're now a contributor; they're adding some value to the project.
A: So, in terms of defining the metric that would be number of contributors, this row, that's the discussion that would have to occur: what constitutes a contributor. And so the hope would be that the way the folks at Augur think about a contributor and the way the folks at GrimoireLab think about a contributor is actually the same.
A: Then the definition is deployed consistently across the two tools. We don't have to solve what a contributor is right now, but that would be a candidate. This is really what happened on all of the prior metrics: so, for example, on what a review is, Sean and Jesús would just spend a ton of time coming to agreement.
B: As to what constitutes a review. Well, that one became somewhat confusing, because "review" is really a word for pull request in the GitHub world, partly; it's like a standard, yeah. And then there are software reviews that are implemented differently on different platforms, that are different things that we'll have to label differently.
A: We've been trying to for a while; we have a lot of moving parts. So there are a lot of things that GrimoireLab is doing and there are a lot of things that Augur is doing, so all this table is trying to do is just bring people together at some point. So from this point we could say: listen, we're going to actually take a look at this one that I'm highlighting, pull request acceptance rate; that's a metric that we agree we want to have.
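A minimal sketch of one possible reading of pull request acceptance rate, merged pull requests over all closed pull requests in a window; whether unmerged-but-closed requests belong in the denominator is exactly the kind of definition question the working group would still have to settle.

```python
def pr_acceptance_rate(pull_requests):
    """Merged PRs divided by all closed PRs; None if nothing has closed yet."""
    closed = [pr for pr in pull_requests if pr["state"] == "closed"]
    if not closed:
        return None
    merged = sum(1 for pr in closed if pr["merged"])
    return merged / len(closed)

prs = [
    {"state": "closed", "merged": True},
    {"state": "closed", "merged": False},
    {"state": "open",   "merged": False},
]
print(pr_acceptance_rate(prs))  # 0.5
```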
A: We, as a group here, agree that we want to release it in version two. The work that we'd be doing would actually occur not in this table but in a different document, so it would actually be scaled out there, if that's your concern. So this table, I don't think... I agree, this table won't scale.
A: 100%, that's the idea, and then the real work lies in actually writing up the markdown file that actually defines that metric. Yep. So, for example, new contributors, or no, I'm sorry, number of contributors, seems easy out of the box. Out of the box, maybe, but quite likely the way that it's interpreted by the folks that are working on Augur and the way that it's interpreted by the folks working on GrimoireLab is different. It's probably nuanced in some aspect, I mean.
B: They're not materially, like fundamentally, different things, but there might be different stuff we count. Like, you know, when we first defined commits I think there was a lot of discussion about what constitutes a commit. Correct, so I doubt there's anything that's as hairy as that; we've climbed the highest mountain already, I think.
I: I have a question: can we also measure, like, the number of contributors per release cycle, in such a way that if we look at contributors across different releases we can see their activities? Because at some point a contributor might be very engaged in one release, and at some point the activities might drop off, things like that. It's also interesting to see how the activity of a contributor is spread across several releases.
I: Like I was just saying, if we could capture, since we were talking about contributors, you know, because sometimes we might just say, okay, this person is contributing to this project, and that is it. But inside that same project, if that project is doing releases, let's say every three months or six months, we might say, okay, in this particular release this person had this kind of contribution. We just keep those, sorry, we keep contributions across different releases, so at some point it might be increasing.
F: Then there could be the number of new repository contributors, as in they haven't made a commit in a release before the one we're looking at, and then the number of returning repository contributors per release, as in, I've made a commit within this release and also one prior. So that could give us an idea of how many new people did we get this time, did we see a drop in the number of people who came back, and then just how many do we have overall. Just off the top of my head.
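A minimal sketch of that per-release split, treating each release as an ordered set of commit authors; anyone not seen in an earlier release is "new", anyone seen before is "returning". The data shape is an illustrative assumption.

```python
def contributors_per_release(releases):
    """releases: ordered list of (release_name, set of commit authors)."""
    seen = set()
    rows = []
    for name, authors in releases:
        rows.append({
            "release": name,
            "total": len(authors),
            "new": len(authors - seen),
            "returning": len(authors & seen),
        })
        seen |= authors
    return rows

releases = [
    ("v1.0", {"alice", "bob"}),
    ("v1.1", {"alice", "carol"}),
    ("v2.0", {"carol", "dave", "erin"}),
]
for row in contributors_per_release(releases):
    print(row)
```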
A: I would suspect in this case that if you have 60 to 70 dashboards, it's quite possible that one metric appears in multiple dashboards. Exactly, yeah. And you just aggregate them in different ways based on the particular use case. Do you think that precludes us from trying to define what those are? I don't.
A: To your point, whether they're atomic metrics or composite metrics, even looking at this list that's in the table in the minutes, there are some of each; we have both in there. The point being, I think it's fine whether we consider a metric as a composite or an atomic one. So, for example, the one I'm highlighting right now, aggregate summary, that is clearly a composite metric.
D: Yeah, my point was not about whether they're classified as one or the other. I wanted a different approach to figuring out which metrics we can focus on, because I think, looking at the schema you all shared, there are so many possibilities for how to combine them, but actually looking at the dashboards that companies are looking at gives us a good set of metrics that are proven in the wild. Yeah, no, I agree.
A: I think the one hesitation is on the aggregate metrics. So again, if I look at that one that I highlighted, this one, I'll put it in the chat for folks if you're not in the document, that one, aggregate summary, would require definitions of what a watcher, a star, a fork, a commit, a committer, and a pull request merged are.
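A minimal sketch of that dependency: the composite is only as good as the atomic definitions underneath it. Each entry below stands in for a separately defined metric; the names and data shapes are purely illustrative.

```python
ATOMIC_METRICS = {
    "watchers": lambda repo: repo["watchers"],
    "stars": lambda repo: repo["stars"],
    "forks": lambda repo: repo["forks"],
    "commits": lambda repo: len(repo["commits"]),
    "committers": lambda repo: len({c["author"] for c in repo["commits"]}),
    "pull_requests_merged": lambda repo: sum(1 for pr in repo["pull_requests"] if pr["merged"]),
}

def aggregate_summary(repo):
    """Composite metric assembled purely from the atomic metrics above."""
    return {name: fn(repo) for name, fn in ATOMIC_METRICS.items()}

repo = {
    "watchers": 12, "stars": 340, "forks": 57,
    "commits": [{"author": "alice"}, {"author": "bob"}, {"author": "alice"}],
    "pull_requests": [{"merged": True}, {"merged": False}],
}
print(aggregate_summary(repo))
```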
B: I think, for me, it's helpful because I think it gives us an inventory of metrics that, you know, can be defined, that will then create greater alignment with the software. And I think it also opens up a bunch of new discussions that we can have around those metrics, about which things are the discrete metric and how we define the aggregations. Mm-hmm. So I think, I think it's very helpful, at least for me, and maybe I'm the only person, actually.
L: So the way we saw we were producing value for the industry was through the presentation of certain use cases and, for each, the aggregation of certain metrics. So when we merge several data sources, when we merge certain metrics into one single thing, that seems to be more valuable from that perspective. So, just an example, the velocity metric; I don't remember if it was pull requests, commits, or something like that.
L: But it is probably those, so we are using two metrics combined into one to define a new one, and this seems to be kind of a good indicator for many people. So then, having that implemented, I think, is more useful than having commits on the one hand and pull requests on the other. But the point is about defining all of those use cases, because our next step was: let's define these use cases, and then we have the dashboards.
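A minimal sketch of that kind of composite, since the exact composition of the velocity metric was not recalled above; this assumes it combines commits and merged pull requests per month, which may not match the real GrimoireLab definition.

```python
from collections import defaultdict
from datetime import date

def velocity(commits, pull_requests):
    """Per-month count of commits plus merged pull requests."""
    out = defaultdict(int)
    for c in commits:
        out[(c["date"].year, c["date"].month)] += 1
    for pr in pull_requests:
        if pr["merged"]:
            out[(pr["merged_at"].year, pr["merged_at"].month)] += 1
    return dict(out)

commits = [{"date": date(2019, 9, 3)}, {"date": date(2019, 9, 20)}]
prs = [{"merged": True, "merged_at": date(2019, 9, 12)},
       {"merged": False, "merged_at": None}]
print(velocity(commits, prs))  # {(2019, 9): 3}
```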
L
But
again
we
are
not
covering
what
everyone
thinks
it's
useful
and
basically,
if,
if
you
see
something
or
a
dashboard,
people
have
an
opinion.
So
then
it's
like
a
I
have
an
opinion.
So
I
would
like
to
see
this
here
and
not
is
there
so
our
final
decision
was
okay:
let's
have
a
really
flexible
dashboard
that
everyone
can
play
with
and
they
can
build
their
own
thing,
which
is
what
we
are
trying
to
and.
A: I actually agree with you that velocity is a more insightful, more revealing metric than pull requests alone or committers alone; that was, I think, one of your points, correct? Mm-hmm. But I still think there has to be a definition for pull requests and there has to be a definition for contributors, and if the way GrimoireLab handles contributors is different than the way that Augur handles contributors, that's a problem to me. Yeah.
M: And the way to scale this, as you said, is starting from the foundations, the things on which we build whatever we want to build here. Basically starting by, okay, let's define the basic statements or the basic foundation for open source development: what do we mean by maintainer, what is a commit, a first contribution, and that's it, that's the first definition. As an industry member, I would say, I would be expecting, okay, this is what we are going to measure.
M: We are going to measure contribution, so what is a contribution? And then, when we have that, it's, okay, from an evolution point of view, what other things do we need to measure? And probably we start with very basic things. The idea here is, most of the people in this call are technical people, so we know that we can measure everything; we could even measure the pressure of the air, and then decide whether to add the pressure or not for evolution.
M: So probably another thing is, let's start with simple things that we know work; we can always add more in the next releases. We can also try to engage industry: okay, for me, what does growth mean to you? We already have everything defined, we know what a contribution is, so probably for you it's like, I only need to know how many people are getting into my code review process. Yes, I think that could be done with some kind of surveys, I don't know, or some kind of interest group around it, yeah.
M: Yeah, I'm putting it that way so that people don't need to think about which data is already there, because from the technical point of view we always focus on the data: you know, okay, we'll go there, oh, there's ready data, what do we have, for example, for Twitter, and go for it. Okay, forget about that. What does it mean for you, what is meaningful for you, to measure growth, maturity, or decline? What does it mean, and then let's try to fill in that sentence, yes, going from the industry into questions and then into metrics, and then we'll have the metrics.