From YouTube: CHAOSS.Evolution.April.24.2019
C
So there's a spreadsheet related to the release of metrics, and then I have something ready that we'll get to when we get there. And then the other thing that came up yesterday, or that I brought up yesterday, was making sure that the working groups and the software, so GrimoireLab and Augur, are trying to work in concert with one another to not only release the metrics but actually deploy them in software, and how that can be identified pretty clearly.
B
And I think, of all the working groups, we've been pretty closely aligned with that from the start. I know Augur, since one of its early prototypes, has created a link between a metric and the CHAOSS definition, and when I was talking to Daniel at the Leadership Summit, he indicated that similar functions and links are either in GrimoireLab or on the roadmap for GrimoireLab. Jesús, you can correct me if that was the wrong conversation and I'm remembering something that never happened.
A
114: metrics files are missing. This is just an open discussion on what to do with the files that are missing, because we don't have the detailed definition for those metrics yet. If you look at the thread, my final suggestion was to mention in the readme that this is intended, because it doesn't make a lot of sense to have completely empty files here; I mean, they contain only the templates until people start working with them.
A
So, if we agree, there is right now nobody with a reasonable request for them, and this would be mainly a clarification in the readme, just saying that until we start working seriously with a metric, we are not going to produce an empty file for it. So if you agree, I think we can go to the next one. What do you think, any comments?
B
I mean, in that case I would close that issue, because that is part of how we make it easy for newcomers who want to fill in a metric to actually go do that, because we give them the sections. So I think having the empty ones is good. Is that our sense of why we would close it, to just say that this is part of how we welcome newcomers, or is there some other thought process? Yeah.
A
Okay, then the next one is this discussion about metrics for measuring efficiency. You can have a look at the thread; up to now we didn't have a lot of feedback. I just encourage you to say anything you may want, because we would like to start working on this in the next two weeks, before the next meeting, so try to have a proposal ready for the next meeting.
A
Basically, to have any kind of comment on whether people agree with the metrics, whether they want different ones, or whether they have any issue that they want to discuss, so that the process is a bit more transparent and inclusive. Working with the same definitions and so on is in some cases a bit difficult, so, you know, just your idea, your opinion, is more than enough. Yep.
C
So, Sean has dropped off, so I'm just trying to repeat what you were saying, just so I have it in my head. Okay, this is on efficiency; this is on one particular metric, so there are kind of two things you're proposing here: one is that any metric that should be discussed, we discuss it in a pull request, generically. Sorry, yeah.
A
I think that the current list of metrics is based on the previous GMD work, so the idea is to see to which extent people are happy with it. In that case it is only a matter of writing the detailed information for each of the metrics, or whether they want to, you know, include some others, or change the focus of how we are measuring efficiency, or whatever.
A
Okay, so if there are no more comments about this one, can we move to the next one, the 136 one? This is a thread about whether we should be including some testing for the reference implementations. If you look at it, basically the current status is to deal with that later on, because this doesn't seem to be a priority right now, if you go to the end of the thread.
A
The idea is to try to run the metrics in the notebooks, maybe later, and my proposal is to keep this issue open, not to close it, and keep it in mind. When, after Google Summer of Code, we hopefully have a lot of reference implementations, we can decide how to do the testing for them, because right now I find it a bit premature: we still don't have many reference implementations to be talking about their test scenarios.
C
A
That could be one of the roads. The idea is, since the reference implementations are software, we need to have testing for that software, and one of the approaches could be to identify some repositories and run the metrics on them, so that we know in advance what the result should be, and we can compare that with the result produced by the reference implementation, with some regression testing or something else. But there are some other approaches that could be taken, like detecting, I don't know, some special condition, for instance. Okay.
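The approach just described, running the metrics on repositories whose results are known in advance and comparing against the reference implementation's output, could be sketched roughly as below. This is only an illustration of the idea, not CHAOSS code; the function name, the JSON-like data layout, and the sample values are assumptions.

```python
# Minimal sketch of the regression-testing idea: compute a metric from a
# frozen data snapshot and compare it against a stored, known result.

def compute_open_issue_count(issues):
    """Hypothetical reference implementation of a simple metric."""
    return sum(1 for issue in issues if issue["state"] == "open")

def check_against_reference(issues, expected):
    """Regression check: the metric on frozen input must match the frozen result."""
    actual = compute_open_issue_count(issues)
    assert actual == expected, f"expected {expected}, got {actual}"
    return actual

# Frozen snapshot standing in for data collected from a known repository.
snapshot = [
    {"id": 1, "state": "open"},
    {"id": 2, "state": "closed"},
    {"id": 3, "state": "open"},
]
check_against_reference(snapshot, expected=2)
```

If the reference implementation changes and the stored result no longer matches, the check fails, which is exactly the regression signal being discussed.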
B
Data that it's tested against. So right now I think the easiest thing is to have Perceval put together reference data that is in the structure that the prototype implementation, or the reference implementation, of the metric uses. And I think, in the long run, as other formats emerge for structuring the data, as long as the reference implementation of the metric simply takes an input string, we could produce those as well.
B
First of all, we'd have the reference data that we would use just for our standard Travis CI test cases, but then we would also just be sure to say somewhere that, you know, we're not explicitly coupling any data collection or structure of the data to the metrics. For example, we may choose, for issue metrics, to pull from five or six different common issue trackers and just get that all structured in the same way.
B
I think for now we just say it's a policy that, you know, we will have reference data, this will be the reference data, we're probably going to get it from Perceval here at the start, and we just have kind of a policy of decoupling, that good software engineering principle: just don't couple the origins of the data to how the reference test data is generated. That's all.
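The decoupling policy just described could look roughly like this: each tracker's raw records get normalized into one shared structure, and the metric is defined only on that structure, never on a tracker's format. All field names and functions here are illustrative assumptions, not a CHAOSS or Perceval specification.

```python
# Sketch: reference test data can come from any origin, because the metric
# only ever sees the shared, normalized structure.

def normalize_github(raw):
    """Map a GitHub-style record into the shared structure."""
    return {"created_at": raw["created_at"], "closed_at": raw.get("closed_at")}

def normalize_gitlab(raw):
    """Map a GitLab-style record (different field names) into the same structure."""
    return {"created_at": raw["opened"], "closed_at": raw.get("closed")}

def closed_issue_count(issues):
    """Metric defined purely on the shared structure."""
    return sum(1 for i in issues if i["closed_at"] is not None)

# Two different origins, one metric.
github_raw = [{"created_at": "2019-01-01", "closed_at": "2019-02-01"}]
gitlab_raw = [{"opened": "2019-01-05", "closed": None}]
issues = [normalize_github(r) for r in github_raw] + \
         [normalize_gitlab(r) for r in gitlab_raw]
```

Swapping in a sixth tracker then only means writing one more normalizer; the metric, and the reference data it is tested against, stay unchanged.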
C
A
I'm
not
saying
doing
that
after
the
implementation,
but
after
we
have
some
more
experience
with
implementation,
because
implementations
are
still
very
new
and
we
only
have
one
or
two
of
them
right
now,
which
are
not
doing
their
work
for
three
and
we
need
to
define
an
ischemic
ecause
right
now.
The
implementations
are
notebook
and
ferb
in
this
table.
We
need
to
convert
it
into
libraries
or
something
that
at
least
model,
so
we
can
test
the
model
or
something
later
yeah.
A
Definition of abandoned issues: this is a discussion on how to define inactive issues and abandoned issues. In fact, if you look at my latest comment, I am basically proposing a definition for inactive issues, which is the one that was reported earlier, and I am not happy with the idea of abandoned issues, because I don't think that people usually label them as such in the issue tracking system, which means it's very difficult to detect them, whereas inactive issues, I think, are detectable.
H
Yes, I think you're right on the fact that it's difficult to detect abandoned issues, so the definition that works is inactive issues, and then you have to sort them out by period of time. And this is something that is pretty much up to the individual project: you know, in a quarter, a semester, half a year, one year, they can review inactive issues and, based on the classification, decide whether this is a pending issue, or it's important, or it's a story, whatever it is.
A
What I'm saying is that it's very unusual that a project labels something as abandoned when they are closing it; usually they just close it, which means that it's very difficult to figure out how to use this metric in the real world. That's the only thing. So I agree that if we could have the information that an issue is abandoned, that would be quite interesting, but my impression is that it's something we just can't have.
B
What I would suggest instead is that we implement it as a parameter on what is already, in some cases, essentially the implementation of an open issue, and we say that a project, or somebody doing analysis, can say, you know, anything over 90 days with no comments; just, for example, older than 90 days with no comments or activity.
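The parameterized definition being suggested could be sketched as below: "inactive" is not a separate metric but a threshold applied to open issues. The data layout, the field names, and the 90-day default are illustrative assumptions, not a CHAOSS definition.

```python
# Sketch: inactivity as a parameter over open issues rather than a new metric.
from datetime import datetime, timedelta

def inactive_issues(issues, now, threshold_days=90):
    """Open issues whose last activity is older than the threshold."""
    cutoff = now - timedelta(days=threshold_days)
    return [i for i in issues
            if i["state"] == "open" and i["last_activity"] < cutoff]

now = datetime(2019, 4, 24)
issues = [
    {"id": 1, "state": "open", "last_activity": datetime(2019, 4, 1)},
    {"id": 2, "state": "open", "last_activity": datetime(2018, 11, 1)},
    {"id": 3, "state": "closed", "last_activity": datetime(2018, 1, 1)},
]
stale = inactive_issues(issues, now)  # default 90-day threshold
```

A project that thinks in terms of abandonment could then simply pass a larger threshold instead of needing a separate abandoned-issues metric.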
B
So, as a software person who is a consumer of the metrics, I know it would be helpful for me to have this, because inactive means something to me, like: okay, there's been no activity, but that might be okay, and I would look at those in a different way than I would look at abandoned ones. The implementation might be exactly the same, but the way that I would define them differs.
B
This is an abandoned metric, because I think there is, in practice, a need sometimes to distinguish between things that are inactive, but possibly for a reason, and abandoned, as in: we just aren't dealing with this at all, we left it out there, and we don't know why. I guess it would depend on the practice, but I think defining the metric might be helpful. But I might be overthinking it as well, and I'm fine if we just say that's what we mean by inactive and maybe just throw in a "for example".
B
I'm thinking, I see your point about: let's not create a metric for every possible temporal state of a thing that we're measuring. I'm suggesting that maybe we close the abandoned issue and not have abandoned issues, but in the inactive issue metric we add a sentence that says, you know: for example, some projects have different parameters for things that are simply inactive and things that are abandoned. That does two things: one, the notion of abandonment is addressed by CHAOSS, and two, if somebody searches our metric definitions...
B
Looking for abandonment, they'll find that word in the inactive metric, and then it will show up in a search of our metrics and they'll find what they need. Whereas if we don't have that word anywhere, then somebody who thinks about it that way is never going to find that metric, or not find it easily.
B
Add one sentence that says: in some projects, issues that are inactive may exist between 30 and 90 days, and they may classify issues that are abandoned as between 90 days and infinity. Just adding that sentence, I think, clarifies what the inactive metric is, and it also makes the word "abandoned" show up if they do a search, which will help people find the inactive metric for the thing that they're looking for. So it's really just an information retrieval point.
A
A
This
stands
on
a
metric
that
we
have
to
know
good,
and
it's
related
to
me
so
resolution,
which
is
something
that
we
are
still
I
mean
issue
resolution
efficiency,
which
is
the
part
of
efficiency
with
we
still
are
discussing.
So
that's
why
we
still
don
him
in
the
focus
area
and
a
specific
definition
for
this,
which
is
very
related
to
the
other
issue
where
we
are
discussing
on
efficiency
and
what's
what's
name
so
it's
it's.
A
Okay, then we can move to 99, which is addressed with pending pull request number 19. We decided at some point to merge it and to work on it, but I think that we should leave this issue open, so that it will remind us that we need to do this, because that comment parsing was very sensible, and the idea was to refine the definition, deciding how to count open issues, which is why we reopened it. So I plan to take care of that with a new pull request.
A
One of those, if you go to the pull requests, is number 91, which I propose to close for now, because we were waiting for feedback for a while, for a couple of months, and for some reason they couldn't provide the feedback. But my impression is that we are in sync with the original proposal, so I would move on, accept the pull request, and work on defining the use case either with him, or we choose a metric ourselves if we cannot find the time.
A
And I would still leave the two issues open, because in one of them we still need to review the pull request, given that we need some input back, and for the other one we still need to define the use case, because we need to define the metrics and count them and say a bit more; after the refining, we are at the level of questions, not metrics.
E
The idea behind this pull request is to display focus areas in a table so that, when someone comes to the readme, they can immediately see what a focus area is about, and also to create a readme in the focus area repository, so when someone looks through the repository structure, they get an easily displayed overview of the focus areas.
C
This is actually pretty simple. You can see, across the bottom, there are tabs for every working group, right: D&I, Evolution, Risk, Value, and Common. And if you just take a look at the D&I one, they have their focus areas, and the metrics that are currently being attended to under those focus areas, and then the D&I workgroup just went through an exercise to say, you know, red means we will work on it, but it's just not going to be part of the release.
C
We just don't have it worked out very well, all the way to green, which means: yeah, this is great, we think we have this pretty well spelled out and we'd like to include it as part of the release. And so what this does is a couple of things, right: it asks the working groups to reflect on the metrics that they have, and the state that they're in, and identify candidate metrics for the release, which is great. And then the second thing it does is...
C
It creates a central place for myself and Kevin, so that when we're putting these out onto the website, we don't have to go working group by working group; we have a central place where the working groups have identified metrics for release.
A
Those are the metrics that we consider as fully done. We basically do that after the corresponding pull request is accepted, so at every moment it's clear: okay, this is a metric that we want to release. With respect to the metrics on which we are working now, we are doing that by goals. So that's why, in the case of the goal Activity, we are basically done, and in the goal Efficiency...
A
We are starting the discussion now. And yes, if we start with the goal Efficiency, the idea would be to try to deal with all the questions in that goal. Indeed, that is a bit different from what you presented now, because it seems that, in the case of the spreadsheet, it's a bit like you pick the metrics from here and from there.
A
So that means that the mapping with what you have in the spreadsheet could be: green is what we do have in the focus area file, in the table of released metrics; yellow would be the goal with which we are working now; blue means the rest of the goals up to the end; and red means the rest. If you want us to somehow translate that into a spreadsheet like this one, of course we can do that. Yep.
A
These are just different methodologies. My personal view is, if you are trying to address a question, you need to find out which metrics you need for that, and only when you are done with the metrics can you really say this question is addressed, and then of course you move to the next one, and so on. Yep. That is also a way of doing that. So I understand that in the Diversity and Inclusion working group you may prefer to just pick metrics from here and there, because their way of working is different, and that's fine with me.
C
So I think even D&I still kind of follows the focus area, goal, question, metric approach. They definitely do, and I think in their case, and you can tell me if it's otherwise, the metrics that are labeled as green are definitely part of a goal, they're definitely attempting to answer a particular question, and the way the metric would be presented has been thought through. Mm-hmm.
A
In summary, if you want, or if you think that it's useful that we produce a spreadsheet like this one, I'm happy with doing that. But the only thing is, from my point of view, it would only make sense for the goals that we are already working with, because for the others we don't really know even the metrics yet.
A
A
Okay,
then,
if
you
want
what
what
we
can
do-
and
maybe
some
some
people
can
volunteer
and
help
on
this
would
be
to
basically
transfer
the
list
of
the
metrics
that
we
have
right
now
in
the
code
review.
Sorry
in
the
code,
development
focus
area,
so
those
that
are
green,
we
can
include
in
the
mid
in
the
spreadsheet
and
I,
would
include
those
that
are
in
the
goal:
efficiency
as
well,
because
we
are
going
to
start
working
with
them.
Yeah.