From YouTube: CHAOSS.Risk.November.4.2019
C: We have the — so there's a listing under the SPDX API. If you call all the licenses, there's kind of a "see this link for more information about the license." I took the API call and I matched them out. So each one of those links that you click — when you click the license itself — is what SPDX recommends you see when you look for that license. Okay.
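The listing described above is published as machine-readable JSON. A minimal sketch of pulling the per-license links out of a document shaped like the SPDX license list (the fetch itself is omitted, and the `sample` fragment below is illustrative, not real API output):

```python
import json

def license_links(license_list):
    """Map each SPDX license ID to the per-license details URL that the
    license list recommends linking to for more information."""
    return {lic["licenseId"]: lic["detailsUrl"]
            for lic in license_list["licenses"]}

# Illustrative fragment shaped like https://spdx.org/licenses/licenses.json
sample = json.loads("""
{"licenses": [
  {"licenseId": "MIT", "detailsUrl": "https://spdx.org/licenses/MIT.json"},
  {"licenseId": "Apache-2.0", "detailsUrl": "https://spdx.org/licenses/Apache-2.0.json"}
]}
""")
links = license_links(sample)
```

In practice the same mapping would be built from one call to the published list rather than per-license requests.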
A: Sounds good. I don't think we have any updates on the license compliance summit. It looks like I'll be going to that and presenting some of the work that we're doing in this working group as well. We had an action item to start the CII metric development, and I don't know if I noted where that was coming from.
B: No, apparently not. What are you doing?

A: I need to run the — we have the value worker in place now, so I can give you actual data for Zephyr that fits the structure that I sent him. Okay — and I thought I had done that, but apparently I did not, so I will do that here in the next day or so, because I just have to turn it on, basically.
A: All right, and then, let's see, next item on the agenda. So then there are complexity measures for every file in there, and some summaries that I'll send you as well. Coming up, I think we talked about risk metrics for safety and security, which we want to have some work done on well before we meet with Kate — I think it's next week. Okay, as a separate meeting? Oh, yeah.
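The per-file complexity measures mentioned above come from the working group's own tooling. As a rough stand-in, a cyclomatic-style count per function can be sketched with the standard library — an illustrative approximation, not the actual worker implementation:

```python
import ast

# Node types that add a branch to a rough cyclomatic-style count.
BRANCHES = (ast.If, ast.For, ast.While, ast.Try, ast.ExceptHandler,
            ast.With, ast.BoolOp)

def complexity_summary(source):
    """Rough complexity per function in one Python file:
    1 plus the number of branching constructs. Approximation only."""
    tree = ast.parse(source)
    summary = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(isinstance(n, BRANCHES) for n in ast.walk(node))
            summary[node.name] = 1 + branches
    return summary

example = "def clamp(x):\n    if x < 0:\n        return 0\n    return x\n"
summary = complexity_summary(example)  # {'clamp': 2}
```

Repository-level summaries would then just aggregate these per-file dictionaries.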
B: If you kind of scroll down — and I can change this — so this goes back to the... There are a lot of images in here. This is the release — this is a release metric, isn't it? It is a release metric; I'm updating it to the new template. I see, okay, yep. And so there's a lot of stuff in here which is like old Augur data models.
B: Do you see — under "Tools Providing the Metric" is basically where I'm at. Mm-hmm. So what I want to do here is, if you have an endpoint, to show the endpoint. Yeah, yeah, because right now, if you take a look, you can see, like row 43, it's some old test coverage data model PNG. I'm guessing that's not applicable anymore. It is! Is it? That's even the new data model? Is it? Yeah. All right, take a look at that.
B: So I'm hesitant, in any of these metrics — remember, we talked about this before — to say Augur provides it or GrimoireLab provides it, versus Augur or GrimoireLab has the potential to provide it. Sure. They're kind of two different things to me. Yeah. So I wonder, what are people's thoughts? I mean —
A: My thought would be, for the existing metrics, to wait, because we've got about two months until the next release yet, and I think there'll be tooling developed around some of these metrics — likely both in GrimoireLab and Augur — between now and then. So instead of taking things out of published metrics, I would suggest we just sort of put this on a list of metrics.
A: And so, yeah, this test coverage is a really unique case, because there are specialized tools for every language. So GrimoireLab and Augur, for test coverage — statement test coverage and subroutine test coverage — are going to have three choices: one is to store the output of test tools in some kind of abstracted way, another is to integrate the test tools directly into themselves, or they can provide some kind of data interface for loading.
B: So — like, so as far as you're concerned, Augur provides this metric? No.
A: I wouldn't say that. I think an endpoint is nice; I think an endpoint is a good boundary, but it doesn't necessarily solve all the questions that you have about what an implementation means, and I think when you're talking about test coverage, it's a very unique case. In general, I would say that Augur or GrimoireLab would need to be able to accumulate the data for a metric on their own, and then that would be, you know, sort of —
A: If you have an endpoint, that's the implication. Mm-hmm. I think in the case of test coverage — I don't think GrimoireLab has gotten deep into how we're going to include this in our tooling, and the initial work that I've done and the conversations I've had indicate that there are commercial tools and language-specific tools, and so test coverage —
B: See, so in terms of Augur providing this metric — mm-hmm — can I say it or can I not say it? Because, I mean, these metrics — what we're trying to do with these templates is not provide a long, extensive narrative. We're trying to say: here's how we describe it, here's what the objective of measuring test coverage would be, here are some ways you could filter on test coverage — say, over time or over code file — and here are tools that provide insight into test coverage. Right. So I think with —
A: With test coverage, we're going to have a challenge balancing simplicity and coherent language with the complexity of the problem space. In this specific case there's a tension there that is different from almost all of the other metrics that we're dealing with, and this metric is important because of its impact in real-time operating systems and embedded systems.
B: So, actually, pointing to the tools that do this work might be a thing, because if you think about it — in the value working group we have, what's that one thing, COCOMO velocity, remember? Right. And what we did with the CNCF tool, right, which is a tool providing this metric. Mm-hmm. And I think we could do the same thing here. So let's just do that. I suggest we remove, at least at this point, Augur. Well —
A: I think, if there are tools that provide a way for people to store the data from their test runs and then see that coverage information, you want that, because it's going to then present the metric itself in a way that is useful. So it's a tool that — that's different from providing it. It doesn't generate the output that's necessary for the metric to exist, but it stores it, right, which I think is useful if you want to see a full picture of a piece of software. Okay.
A: Yeah, what's shown with Augur is just: this is where you would load the output of one of these tools, or many of these tools, and we can provide a standard JSON format that you can generate the output in. All right. Oh, building an endpoint is trivial, but we can do that too. So, could you do me a favor —
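A sketch of what such a standard JSON format and loader could look like — the schema here is hypothetical, an illustration of the idea rather than a published Augur format:

```python
import json

def load_coverage(doc):
    """Aggregate statement coverage across files from a tool-agnostic
    JSON document. The document shape is a hypothetical illustration."""
    covered = sum(f["covered_statements"] for f in doc["files"])
    total = sum(f["total_statements"] for f in doc["files"])
    return covered / total if total else 0.0

# Output a language-specific test tool could be asked to generate.
example = json.loads("""
{
  "repo": "zephyr",
  "tool": "gcov",
  "files": [
    {"path": "src/main.c", "covered_statements": 80, "total_statements": 100},
    {"path": "src/util.c", "covered_statements": 20, "total_statements": 100}
  ]
}
""")
ratio = load_coverage(example)  # 0.5
```

An endpoint could then simply serve the stored document and this aggregate.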
B: Data — an endpoint would be also — Implementation is the top-level heading, under which there are Filters, Visualizations, Tools Providing the Metric, and Data Collection Strategies, right. There are those four sub-headers under Implementation. And so we have Visualizations in here, and we could have some description under Implementation, which is kind of like what's here. That's fine. You said there was a data loading heading?
A: Or something like that — there's a Data Collection Strategies heading. So I think that this description of the scanner underneath Implementation is actually a data collection strategy, more so than it's a metric. It is a metric output as well; so DoSOCS, or the Augur SPDX scanner, is going to generate a file that I think is useful. Yeah.
A: Yeah, there are many strategies, but ultimately all they're doing is scanning the file looking for license headers or declarations. Yeah, exactly. Cool. Right. So the part that we're showing is not the implementation of getting the data; it's the implementation of generating the data.
A: I think this one — I mean, yeah, so this is a case where the tooling — Augur has already made significant changes to how this works, and so the description that's here should be changed to reflect the current state. Basically, Augur SPDX is what DoSOCS was, and the integration is pretty tight now. Yeah.
A: We accept this pull request so that you can then make those modifications, Matt. Yes, so that seems like the easiest thing to do, because otherwise the other option is for me just to get rid of this one and start over. But no, I think there's value added, because you're adding the template, right. Yep. Yeah, I'm just going to merge that one, and then, Matt Snell, you can fork and pull it and make an update based on what we actually have now. And —
A: On this, I would say at our next meeting we should update what we're going to do for the spreadsheet, because I think we can get the COCOMO model in there and some of the bug age, and I mean, I think there are things in here that we can produce metrics for that will be helpful, but I don't necessarily want to make those updates today. Yeah.
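The COCOMO model mentioned here estimates effort from code size alone in its basic form. A minimal sketch using Boehm's standard published coefficients — treat the mode choice and any calibration as assumptions, since the working group's actual implementation may differ:

```python
# Basic COCOMO (Boehm): effort in person-months = a * KLOC ** b,
# with (a, b) depending on the project class.
COCOMO_COEFFS = {
    "organic": (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded": (3.6, 1.20),
}

def cocomo_effort(kloc, mode="organic"):
    """Estimated effort in person-months for `kloc` thousand lines of code."""
    a, b = COCOMO_COEFFS[mode]
    return a * kloc ** b
```

A repository's line count (minus vendored code) would feed `kloc`; the "human labor invested" figure discussed in the value working group is this kind of estimate.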
B: And I do think that's fine, I do. I think — maybe I've been thinking a little bit about the second release, you know, and, you know, if we have CII, maybe all that it really would take, I think, is even just two or three metrics. I don't think this has to be a big quantity of metrics, right, is what I'm saying, for the second release. So if CII is one and the COCOMO stuff is two — D&I is working on two at the moment.
A: Lists traffic, pull requests, discussion, IRC activities, forks — these are all metrics that I think we can demonstrate already. Yeah. And we should perhaps just have them implemented. And somewhat like — when I see lines of code and commits, I think those actually might already be implemented in Evolution. Mm-hm. And we should create a cross-reference. Yep.
B: No, I agree, that's a good idea. And I was talking, in the D&I meeting today — like, some of these metrics honestly shouldn't be that hard to write up, like forks. Mm-hmm. No, exactly — sometimes the title is basically the description. Yeah, there's not much more to it than that, honestly. So sometimes there's just not a lot of text associated with these metrics, but it's nice to formalize them. But, like, Boehm's model — the COCOMO model of human labor invested — that'll take a little bit more.
C: — have this many, like almost all Apache licenses on the files, but then there are other ones where the coverage is kind of weird, and they've got pretty much an even spread of, like, Apache and BSD, and maybe some MIT in there. I think the ones that have more of a monolithic structure generally seem to be bigger repositories. I'm not sure how I would look at it. We'd have to be able to ignore things like webpack or node_modules — just things like that — to be able to do that.
A: I think, yeah, take a look at the license coverage. I think one of the central questions there — I don't know it off the top of my head — was how many of the files have a license declared versus don't. That was the intention of coverage. Spread is kind of saying how much — what is the density of each license type, which is what we were getting at with the enumeration of the counts. Yes.
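The coverage and spread questions above can be computed directly from per-file scanner output. A sketch under an assumed input shape (a mapping from file path to declared SPDX ID, with `None` for files where no license was found):

```python
from collections import Counter

def license_coverage(file_licenses):
    """Return (coverage, spread):
    coverage = fraction of files with a declared license,
    spread   = density of each license type among declared files."""
    total = len(file_licenses)
    declared = [lic for lic in file_licenses.values() if lic]
    coverage = len(declared) / total if total else 0.0
    counts = Counter(declared)
    spread = {lic: n / len(declared) for lic, n in counts.items()} if declared else {}
    return coverage, spread

# Hypothetical scanner output for a small repository.
files = {
    "src/a.c": "Apache-2.0",
    "src/b.c": "Apache-2.0",
    "src/c.c": "BSD-3-Clause",
    "README.md": None,
}
cov, spread = license_coverage(files)  # cov = 0.75
```

Filtering out vendored paths such as `node_modules` would happen before building the mapping.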
E: The larger universe — now, the OSI is the steward of the open source definition, and with the discussion going on about whether something is OSI-approved or not, I know there are some companies that have guidelines that it has to be an OSI-approved license, and for the OSI itself it's sort of a marketing vehicle. Okay.
A: Looking at the list of OSI licenses, I think it's pretty clear: it includes MIT, it includes Apache, it includes GNU and BSD and Mozilla. It includes all of the most significant, well-recognized, actual open-source licenses. So it's not excluding anything that I think is widely recognized as an open-source license. So it sounds like there's some kind of attempt to create sort of fringe open-source-ish licenses and call the work open source. Yeah, I think —
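One way to operationalize the OSI-approved question is the `isOsiApproved` flag carried by each entry in the SPDX license list. A sketch over a document of that shape (the `sample` fragment is illustrative, not real list output):

```python
def osi_approved_ids(license_list):
    """From an SPDX-style license list document, return the set of
    license IDs flagged as OSI-approved."""
    return {lic["licenseId"]
            for lic in license_list["licenses"]
            if lic.get("isOsiApproved")}

# Illustrative fragment shaped like https://spdx.org/licenses/licenses.json
sample = {"licenses": [
    {"licenseId": "MIT", "isOsiApproved": True},
    {"licenseId": "Apache-2.0", "isOsiApproved": True},
    {"licenseId": "CC-BY-4.0", "isOsiApproved": False},
]}
approved = osi_approved_ids(sample)  # {"MIT", "Apache-2.0"}
```

Checking a repository's declared licenses against this set would flag the "fringe open-source-ish" cases discussed above.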