From YouTube: CHAOSS Risk Working Group August 11, 2022
Description
Links to minutes from this meeting are on https://chaoss.community/participate.
A
Okay, they moved the record button around on Zoom, but I found it. So welcome to the Risk meeting for August 11th, 2022. The things that we have on our agenda include maybe revisiting our Update Risk new metric, taking a brief look at our new PRs, and then possibly identifying new metrics or metric models that we may want to focus on developing right now.
A
One that doesn't, actually... I was just talking about this today with someone else: test coverage is not a metric that's implemented anywhere. And one of the challenges with test coverage that we've discussed in this group before, and that I was mentioning to others, is that the tools for doing test coverage testing or evaluation tend to be language-specific, and many of the tools tend to be proprietary instead of open source. So much of the gap with test coverage surfacing is along the lines of: we know that it's good, we actually...
A
Actually, if you look at the definition for that metric, I think it's fairly tight, insofar as it defines test coverage very succinctly with a formula, using some literature from the software engineering world. So I think the metric is clearly defined. I think actually calculating it on a project-by-project basis is where the trickiness comes into existence.
B
Actually, I think the repeatability aspect that you've got down there underneath quality, I think we could probably get closer to that one pretty quickly, because there's a lot of work on binary reproducibility and code reproducibility from a quality perspective.
B
I'm looking at the one that you've got in red, software provenance, looking at the last one in red, and I'm almost wondering about this.
B
I think that one is... given all the work that's going on with reproducible builds, and the fact that most distros are incorporating a lot of that now, we also need that. You know, what portion of the packages are binary-repeatable? Like, can you actually go for a repeatable build? I think that is a really interesting one, to see if we could start to get it into the adoption mainstream.
A
Yeah, I mean, I'm certainly seeing the binary repeatability piece much more easily. I think it's better tooled now than it was even a year ago. GitHub, for example, provides a number of really useful tools, probably partly out of the demise of Travis CI and their relationship, through GitHub Actions. In Augur, I can speak from that experience.
A
All of our Docker containers build every time we do a main branch update, so there's a test not only for the tests that we have, but also that the binaries compile as Docker containers consistently. And oftentimes, I think, we identify gaps in our continuous integration through Docker builds that fail. You know, Docker is much more picky about certain things than just a regular command-line build, so it forces an explicitness onto a project.
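The idea that the container build itself acts as a test can be sketched as follows. This helper is a minimal illustration (the function and the stand-in command are hypothetical); an actual pipeline would pass something like `["docker", "build", "-t", "myproject", "."]`:

```python
import subprocess
import sys

def build_succeeds(cmd: list[str]) -> bool:
    """Run a build command and report whether it exited cleanly.

    In a real CI pipeline, cmd might be
    ["docker", "build", "-t", "myproject", "."], so that a failing
    image build fails the CI run even when unit tests pass.
    """
    result = subprocess.run(cmd, capture_output=True)
    return result.returncode == 0

# Stand-in for a container build: any command works the same way.
print(build_succeeds([sys.executable, "-c", "pass"]))  # True
```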
A
Yeah, and I think this is a place where there are things like CMMI that exist in the software engineering world, which show methods and practices for requirements traceability. None of them are lightweight, though, and I think for open source to have something like that work, it's got to be significantly better than what's available to the non-open-source world today.
C
So I wonder where we are in the agenda. I'm not sure.
A
All right, I am not going to pause the recording, because whenever I pause the recording we have all the discussions that I'd really like to capture. But for those of you viewers on the internet watching YouTube, you may speed past this section and look for chatter.
C
So where are we on the completion of this metric, and what are people's thoughts on the incomplete facets of it?
C
And let me give a little background again, just to refresh the conversation: interdependencies between components sometimes lead to downstream users of a component being stuck with an un-updated version of the component, and this is a risk. So potential users of that component want to be able to evaluate the likelihood that the version they're stuck on is going to be problematic.
A
Other ecosystems have a lot of tightly coupled dependencies, and whenever I'm in a tightly coupled ecosystem (I use the Node ecosystem as my example) it seems that one update forces me to re-examine 30 other things. There's a trim tab, to use a flight navigation term: it's a very small control that can have a significant impact on how my plane is flying.
B
So there's an individual's perspective on this: this is my system, I want to use this package, I'd better upgrade it, or there's a bug and I've got to upgrade. There's a project perspective: hey, these are the dependencies in my project, and there are updates that have to happen. And then there's a full, OS-level perspective on risk.
B
Like, one of the things I remember from when I was still there (in fact, there's probably an old talk about this from that time period) was trying to figure out:
B
Do I accept this fix into this package? What are the ripple and add-on effects? And so there's a whole notion of mining through the dependencies to understand: am I likely to hit something that's on the edges of branches, which I could get wrong, or am I going to hit the core, where it's going to jeopardize the release?
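The dependency-mining idea described here can be sketched as a reverse-reachability question over a dependency graph: how much sits downstream of the package being updated? The graph and package names below are entirely hypothetical:

```python
from collections import deque

# Hypothetical dependency graph: package -> packages it depends on.
DEPS = {
    "app":     ["web", "logging"],
    "web":     ["http", "logging"],
    "http":    ["sockets"],
    "logging": [],
    "sockets": [],
}

def transitive_dependents(graph: dict, package: str) -> set:
    """Everything that directly or indirectly depends on `package`.

    A large result means the package sits near the core, so updating
    it risks the whole release; a small result means it is out on the
    edges of the branches."""
    reverse = {p: set() for p in graph}
    for pkg, deps in graph.items():
        for dep in deps:
            reverse[dep].add(pkg)
    seen, queue = set(), deque([package])
    while queue:
        for parent in reverse[queue.popleft()]:
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

print(sorted(transitive_dependents(DEPS, "sockets")))  # ['app', 'http', 'web']
print(sorted(transitive_dependents(DEPS, "logging")))  # ['app', 'web']
```

A large dependent set suggests the update touches the core; a small one suggests it only affects the edges.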
C
I think, in a way, that's the calculation that we're aiming to help with, you know, optimizing, right? Helping anybody's project avoid being stuck at that moment.
D
So at point 1, under the description, the first thing is: you're assuming that there is some sort of long-term support policy expressed, so this metric is only feasible if they do have something like this published. Where would that be? Is there a standard place for this to exist? I'm not really aware of one.
C
I think that this criterion applies whether or not there exists a long-term support policy.
C
The existence of a long-term support policy is one of the ways that you would evaluate it. So we could maybe simplify it by eliminating "that are still within a declared LTS window", so that the criterion is about the history of backporting to legacy versions, with the existence of an LTS policy suggesting, you know, good practice.
B
And I'll give you a data point on why this is coming up in my head right now: we're just pushing out the 2.3 release for SPDX, hopefully later today, and one of the new fields that got introduced, from the usage group, is "valid until", which is effectively your expiry date.
B
So we're going to have ways of starting to track that now, and the data folks for AI want this stuff too. So there are multiple people wanting it.
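A field like the "valid until" date described above makes a simple mechanical check possible. A minimal sketch, assuming an ISO-formatted date string; the field name, format, and function here are illustrative, not the SPDX definition:

```python
from datetime import date
from typing import Optional

def within_support_window(valid_until: str, today: Optional[date] = None) -> bool:
    """True if a declared expiry date (ISO format, in the spirit of the
    'valid until' SBOM field discussed above) has not yet passed."""
    today = today or date.today()
    return date.fromisoformat(valid_until) >= today

# Checked against the meeting date as a fixed reference point:
print(within_support_window("2024-12-31", today=date(2022, 8, 11)))  # True
print(within_support_window("2021-06-30", today=date(2022, 8, 11)))  # False
```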
C
I think that's a really usable metric, right? You can go find out, and it'll mean something; you can do something with that information.
C
I think if we limit the scope to security patches, it's more tractable.
D
That can be hard to measure, though. I was just talking about this with other project leads: security issues are often not publicly noted in issues and pull requests, because they don't want to call attention to the fact that there could be a security or vulnerability issue in the project.
C
I think that is totally correct. I live that every day. But I think advisories are pretty good.
C
So, typically, there's some widely used upstream package, like dash-fetch, which is relatively small and used in a lot of packages. An advisory will be published against it, that will cascade down to all the downstream packages, and you'll see issues posted requesting an upgrade to the core dependency.
C
I think it's possible that that would be one that would be part of Update Risk, right? Like, is there an update policy, an expressed, explicit support policy?
A
Yeah. So what I don't know is: do we jump over to that, or do we try to finish this? How do we view the dependencies?
B
Quite frankly, there's a factor here I don't quite know how to say, but it's like: have other people already done this successfully, or is there a problem lurking, "here be dragons", effectively, right? And I don't quite know how to quantify, how to articulate, the "here be dragons", or what sort of testing has been done on the update against the "here be dragons" portion. You know, what QA process has the update gone through?
A
It seems axiomatic to me that the faster you rush into a dot-zero release, the more likely you are to encounter issues, and it's always a trade-off: am I going to get some feature I really need, or some capability, stability, or security that I really need, with a new version, and does that motivate me to take the risk of going with the dot-zero version? Or do I wait for the updates to kind of solidify and mature the product?
C
I would propose that this is about, not all sorts of dragons, but maybe dragons that we can factor and name. So one would be the chance that any updates that do come around are too buggy to use, and then there's the risk that no updates will come around to address a newly encountered security problem.
C
So let me propose that we factor the dragons and name the dragons, and any dragons we can't name, we leave for a future project.
C
So I think that there's a reliability dragon, which is to say that updates will introduce bugs, and there's a...
B
Well, yeah, so, like, stability: is the system still stable after you've done the update, right? And performance is an issue, certainly. You know, if you apply something in one space, are you suddenly going to be crashing your memory elsewhere, whatever?
C
Would you think of that as purely functional impacts, performance, reliability, and so on, or would there be other facets?
B
Functional: does everything still work? Performance: does it work within your expected time periods? And then reliability is, you know, is it going to suddenly cause surprises elsewhere; is the system stable, effectively?
C
I think reliability is kind of a superset of the functional and performance items, and then "have others upgraded and encountered dragons" is kind of another way of expressing 3.a.i, the reliability dragon.
C
I think there's really only one dragon here, which is the overall reliability of updates, and we've broken it down into functional and performance, and I...
A
Right, and there are certainly updates in my life where the functionality disappeared. As a software engineer, I've always viewed changing a method signature as something I simply will not do, because I don't want to break things for people who are using it, but I find there are projects where that ethic is not cared for much.
D
I'm sorry, I was just thinking from a measurement standpoint, and I'm curious if others have different experiences here, but when I've been looking for the stability of releases or versions, in areas that are breaking, sometimes you do look at issues around the project, but you could also look at chatter in other forums. I'm just kind of thinking about it from a measurement standpoint.
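The social-signal measurement idea raised here (scanning issue or forum chatter for breakage complaints) could look something like this as a crude first pass; the word list and titles are hypothetical:

```python
# Words whose presence in an issue or forum title suggests a release
# broke something. Purely illustrative, not a validated list.
BREAKAGE_WORDS = ("broke", "broken", "regression", "crash", "downgrade")

def breakage_signal(titles: list[str]) -> float:
    """Fraction of titles containing a breakage-related word."""
    if not titles:
        return 0.0
    hits = sum(any(word in t.lower() for word in BREAKAGE_WORDS)
               for t in titles)
    return hits / len(titles)

titles = [
    "Upgrade to 2.0 broke authentication",
    "Docs typo in README",
    "Performance regression after 2.0",
    "Feature request: dark mode",
]
print(breakage_signal(titles))  # 0.5
```

As the discussion notes, this kind of chatter-based smell is noisy, but it is cheap to check and ecosystem-agnostic.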
C
Neat, kind of like a social-signals smell. Yeah, that would be... it would be unwise not to check that, and wise to check it. You know, you should go look.
D
The only challenge is, I feel like we're trying to keep it general, versus there's always the "I had an issue in my sandbox context" versus "an issue with the thing itself", like it works on one platform but not another. But I don't want to get into that; I feel like that's too specific.
C
Well, I wonder what's stopping us from finishing this today, and I think, for the minutes: being more specific, limiting the number of contexts, being less general, is something that can enable us to complete the task.
A
So one of the questions that arises in my brain when I listen to this discussion is: have we scoped this properly? In other words, is Update Risk really multiple different metrics that look at things like support windows and reliability, functional, and performance changes separately, or is this in fact a discrete, singular metric? And I don't need to answer it; we can just keep working on this.
D
I was going to say, just looking at what we have: the very first sentence under the description is a discrete thing, "measures the availability of security patches", and then the discussion below is just areas that might be discoverable, either in something like a long-term support window that is either defined or not defined, and then the implications of trying to apply an update that's currently not supported or going to break.
D
I feel like I'm struggling with that in the context of what you said earlier, Kate, which is sort of the nested problem. I think we're talking about something slightly different, in terms of: when you upgraded, did you encounter these issues, versus...
D
Is this an issue in that specific project, or an issue in dependent projects, or projects it depends on? That's kind of how I initially interpreted the "where my project's dependencies" line of thinking, and then we kind of settled on a different directive. So I didn't know if you wanted to also have that in here, but I do think it's a multi-faceted problem that does have a focus. So I think there is enough focus here, but it's kind of "interpret as you will".
C
I think the dependency issue falls under the likelihood of there being updates that need to be applied.
C
Right, like a more complex package needs work. Something that comes to mind is that the default logging library in Go is ancient and they never touch it, but that's probably because it's really, really dirt simple, and so there are rarely going to be needs for that.
C
So I want to try to encapsulate the items we've discussed into something shorter and more completable.
C
Number two: a record of changes to interfaces.
C
You mean kind of like the counter to a record of changes? Well...
A
And I suppose I can see Lucas's earlier point that functional and performance are woven into reliability on some level. And I'm not helping us get to consensus here at all; I'm just going to shut up. We are out of time, but I don't want to just cut this off, because I would like to finish this metric at some point, and I think...
C
Still, the takeaway would be that there would need to be a metric on the likelihood of a need to update, one on the record of how often the interfaces change, and how often updates create reliability problems.
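The three factors just named could be combined into a single indicator along these lines. The equal weighting and the 0-to-1 scores are purely illustrative assumptions, not a settled metric definition:

```python
def update_risk(need_to_update: float,
                interface_churn: float,
                reliability_problems: float) -> float:
    """Combine three 0..1 factor scores into a 0..1 composite:
    likelihood that updates will be needed, how often interfaces
    change, and how often updates have caused reliability problems.
    """
    for value in (need_to_update, interface_churn, reliability_problems):
        if not 0.0 <= value <= 1.0:
            raise ValueError("scores must be in [0, 1]")
    # Simple average; a real metric model would justify its weights.
    return (need_to_update + interface_churn + reliability_problems) / 3

# Hypothetical package: updates likely needed, moderate interface
# churn, few past reliability problems.
print(update_risk(0.9, 0.6, 0.3))
```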
C
I would honestly feel like that was an unnecessary concession.
C
I recognize the complexity of the underlying bits, but I think we could write this thing in, you know, 500 words, and it'd be pretty clear what it is. Think about it: you're going to go and ask yourself, "should I add this npm module?", and you're going to ask yourself some things like: do their support forums have a lot of people complaining about security problems?
C
That could be broad or deep, a PhD thesis or not, but you could also just put it in colloquial terms and it'll work pretty well. And also, I need to point out, this is months of work. This has got to be the 10th or 15th editing session I've had on this thing, and I think we can get some benefit out of it without turning it into a thesis.
A
So maybe, with that discussion in mind, the thing that we do is go away for a couple of weeks.
A
Think about which direction we want to go as a group, and come back to it in two weeks with an open mind about what we do with Update Risk versus developing the discrete metrics that we think may be necessary. And, Lucas, I don't know if you want to take a shot at fleshing out the rest of Update Risk, or if you want some of the rest of us to do that, to see where this lands based on the discussion.
C
Yeah, that sounds wise. You know, I'm concerned we're bikeshedding this thing to death, right? A blue shed is pretty good; it doesn't really matter if it's turquoise. Yeah, we will be better in the future. I think we can identify these factors, and we've made really valuable progress.
C
Social signals, I think, is a really useful thing to add. We figured out the LTS stance, and the dragons thing, I think, was a really valuable way of drawing in discrete signals, and I think if we draft those tightly, we'll have something that's deliverable and usable.