From YouTube: CHAOSS.Risk.June.2.2020
A: Let's see, I don't have notes now, but I know he did a lot of work on Augur in the last week, so I think he finished, actually. But let's start. I mean, I shall share my screen and we can start by looking at the working groups. I have to be enabled to share my screen, based on security rule 8.7.503 of the Zoom policy manual. I just made that up, by the way; I don't know if it's that specific.
A: I'd like to have a little bit of a discussion, or a further discussion, about how to work the work of ELISA into what we're doing in this working group, or what the proper way to think about that relationship is. I mean, obviously we're building metrics, so that's where ELISA fits in. Let me get the links.
B: That's right. So there was some talk about some of the metrics that are being looked at from the data analytics perspective. There's a student in Germany, I think interning at BMW, working on that, so let me see if I can pull up that link and give it to you guys in the chat.
B: There are a few little issues that need to get put in, but one, for instance, is data analytics for kernel patch review, which is potentially what they're trying to look at. In the safety world, a lot of it is, you know, is there traceability back to the requirement for code changes?
B: And so you can listen to the recording, but there are other places in this as well where, you know, analytics and metrics are of interest, useful as part of the evidence being put together.
B: The call graph tooling might be useful as well, but I'd say it's a level removed from where we are right now, and so, even if they are going to be needed, I'm just not sure; I'm not sure how to bridge the two together completely yet. But if there's definitely interest from your perspective, Shawn, I'll definitely keep you updated, yeah.
A: Obviously the kernel is a special case. Do you think there are potentially some best practices that we should consider when we're developing metrics for tests, and, I guess, metrics that produce analysis of tests, for things that are not as hardened as the kernel? In other words, is there a pattern, from your knowledge, that's worth following as we think about how to represent testing metrics and implement testing metrics in the CHAOSS project?

B: Mm-hmm.
B: One of the things that's relevant is bug fixing, and so we can tell this from what's done and put into the backported fixes. Other projects use things like counting vulnerabilities, but at the end of the day they're all bugs, and there will also be safety-specific bugs and hazards. From a timing perspective, I don't think we really have a good system for taking care of that yet, that I know of. Let's say there may be things there, but I don't know of them.
A: I understand we were discussing CHAOSS goals for understanding safety-critical system testing metrics, or testing metrics in general. It sounds like the Linux kernel has some basic data analytics that they do within the kernel, and one of the questions I had, that you might be able to contribute to, is whether there are patterns in some of those processes that we might look at for developing metrics. I mean, obviously we can think of things like test coverage or other safety-critical system measures. You know, stress tests, I assume that's what stress-ng does, so stress tests might be a category. I don't know what fuzzing is.
B: So I think it's sort of, like, you know, testing, and it basically sums up to being able to summarize the testing that's done, summarize the bugs that are found, and summarize the evidence of discussion around things being committed. So that's three sort of categories I'm seeing emerge on the kernel.
B: The current rate for the kernel, the latest I've heard, is that there's effectively one commit per hour into the stable trees, and those are generally bug fixes. So it's nine commits per hour into the upstream, and then one commit per hour into stable; that's a rate of bug fixing. So you could say we use that as an inference. Okay.
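The rate described above (nine commits per hour upstream, one per hour into the stable trees) can be estimated from commit timestamps. A minimal sketch, assuming you already have a list of commit datetimes pulled from the repository; the function name and example data are illustrative, not part of any CHAOSS tool:

```python
from datetime import datetime, timedelta

def commits_per_hour(timestamps):
    """Estimate the average commit rate from a list of commit datetimes."""
    if len(timestamps) < 2:
        return 0.0
    hours = (max(timestamps) - min(timestamps)).total_seconds() / 3600
    # Guard against all commits landing in the same instant.
    return len(timestamps) / hours if hours else float(len(timestamps))

# Hypothetical example: 10 commits, one per hour, over a 9-hour span.
start = datetime(2020, 6, 1)
ts = [start + timedelta(hours=h) for h in range(10)]
rate = commits_per_hour(ts)  # 10 commits / 9 hours, about 1.11 per hour
```

Comparing the rate on a stable branch against the upstream rate gives the kind of bug-fix inference mentioned above.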
D: Would we be able to tag it? Like, for example, code, if it's something we use in production, or related to production, so as to say that we support this and we have a long-term commitment on it; or maybe it's a project that's submitted by a developer, but it's not core business. It's based on time; we'd go back to historical contributions and so on. So some kind of tagging.
A: I'm surprised that the forks metric isn't proposed for release, because there are certainly some that we should... well, it's not defined yet. There are certainly some, like the in-progress ones that we have. We have working metrics in both software tools for them, and it would be a good idea to put those metrics into the definition stage.
C: I was going to say, I mean, really, the goal would be, like, in the next three weeks, that you would get basically whatever metrics you think are appropriate out of the Google Doc and into the GitHub repo, because you could still take comments for another month, you know. So it's kind of like it's 85 percent or 90 percent done, as posted in GitHub.
A: I feel like that's an evolution: language declaration and README. It might be, simply, that there's a working group, which working group is it, Common, that's looking at actual language use in repos, versus whether the declaration is useful. Language declaration is kind of an interesting thing, because essentially all that it's used for on most platforms is declaring your initial .gitignore, and oftentimes it doesn't end up being the primary language of the repository. So that might be one way; I would maybe suggest we take that back.
A: Issue resolution time, I'm pretty sure, exists; I'll check on that. Software vulnerabilities, that's ours. Pull request discussion, nobody's doing that. Forks, I don't think anyone's done that. Code complexity, we have that data; I don't know if we started building the metric. We haven't, but yeah, so I think...
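For reference, an issue resolution time metric of the kind listed above reduces to the difference between closed and opened timestamps. A minimal sketch, assuming opened/closed pairs have already been extracted from the tracker; the function and field layout are illustrative, not any specific tool's API:

```python
from datetime import datetime
from statistics import median

def issue_resolution_days(issues):
    """Return per-issue resolution times in days for closed issues.

    `issues` is a list of (opened, closed) datetime pairs; issues that
    are still open (closed is None) are skipped.
    """
    return [
        (closed - opened).total_seconds() / 86400
        for opened, closed in issues
        if closed is not None
    ]

# Hypothetical tracker data.
issues = [
    (datetime(2020, 5, 1), datetime(2020, 5, 3)),   # closed in 2 days
    (datetime(2020, 5, 2), datetime(2020, 5, 12)),  # closed in 10 days
    (datetime(2020, 5, 4), None),                   # still open, skipped
]
times = issue_resolution_days(issues)
typical = median(times)  # median resists a few long-lived outliers
```

Reporting the median rather than the mean is a judgment call; a handful of stale issues would otherwise dominate the average.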
A: There are probably four, and actually we could probably do pull request discussion as well, so I'll see if I can swing a little and enlist some assistance. I think those are the four that we have working software around, and we should probably get in the metrics from a risk perspective, especially SPDX-approved licenses. I mean, right, these are things that we've done; we just have to write the metric, and that's just a little bit of time, to write those metrics.
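The SPDX-approved-licenses check mentioned above can be as simple as set membership against an allow-list of SPDX identifiers. A minimal sketch; the allow-list here is a tiny illustrative subset, not the full SPDX license list, and the function name is hypothetical:

```python
# A few common SPDX license identifiers. A real metric would load the
# full SPDX license list instead of hard-coding a subset like this.
APPROVED_SPDX_IDS = {"MIT", "Apache-2.0", "GPL-2.0-only", "BSD-3-Clause"}

def license_risk(declared):
    """Flag declared license identifiers that are not on the allow-list."""
    return sorted(set(declared) - APPROVED_SPDX_IDS)

# Hypothetical declared licenses for a set of dependencies.
flagged = license_risk(["MIT", "Apache-2.0", "CustomLicense-1.0"])
```

Anything in `flagged` would then feed into the risk-focused metrics the working group is defining.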