From YouTube: CHAOSS Value Working Group 11-18-21
A
Welcome, everyone, to the CHAOSS Value Working Group meeting, November 17. Please add yourself to the minutes if you want. The agenda is pretty light, so first up is the OSPO update, the OSPO++ update, since we had a great session. Any feedback or anything on that?
C
That's the meeting that we did with... I think Josh was there.
B
I think the story from CHAOSS is that our interest is not in setting up OSPOs inside of universities, but we are at the ready to help in that process.
B
I think so, because I think we'd be overreaching a bit. I think we need to just follow the model that we always follow in the CHAOSS project, which is to let the communities that want the metrics and the tooling speak up, and then we help capture what it is that they want to do. I don't know what you think, Sean.
A
So we have this one metric in the development process, which is the FAIR metric, and I think it is tied to this OSPO++ thing; it's one of the metrics related to that area.
A
I attended one of their meetings on FAIR, and in that aspect they were having two focus areas: FAIR on the data side and FAIR on the software side. It's about how our data can be findable, accessible, interoperable, and reusable: all these four principles.
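For readers following along, the four FAIR principles can be illustrated with a small, hypothetical check. The metadata field names and the mapping from field to principle below are illustrative assumptions, not anything defined by CHAOSS or the FAIR working groups:

```python
# Minimal sketch of what a FAIR check for software metadata might look like.
# The field names (identifier, repository_url, standards, license, description)
# are illustrative assumptions, not an official FAIR specification.

def fair_indicators(metadata: dict) -> dict:
    """Return a rough, illustrative pass/fail for each FAIR principle."""
    return {
        # Findable: the software has a persistent identifier (e.g. a DOI).
        "findable": bool(metadata.get("identifier")),
        # Accessible: the software can be retrieved from a known location.
        "accessible": bool(metadata.get("repository_url")),
        # Interoperable: it declares standard formats or interfaces.
        "interoperable": bool(metadata.get("standards")),
        # Reusable: it carries a license and enough description to reuse.
        "reusable": bool(metadata.get("license")) and bool(metadata.get("description")),
    }

record = {
    "identifier": "10.5281/zenodo.0000000",  # hypothetical DOI
    "repository_url": "https://example.org/project.git",
    "standards": ["JSON"],
    "license": "MIT",
    "description": "An example research tool.",
}
print(fair_indicators(record))  # each indicator is True for this record
```

A real FAIR assessment is far more nuanced than boolean field checks; this only sketches the shape of the four principles as discussed in the meeting.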
B
Maybe it was you... yeah. My concern is that if we develop a FAIR metric like what we have here (I can share my screen), FAIR is just going to be this constantly moving target, and we're never going to get this metric quite right.
C
What I've gotten so far is two major themes out of my discussions with the folks looking at OSPO++ and university OSPOs. The first theme is altmetrics for scientific researchers, which is, I think, what we're kind of looking at in the case of this FAIR metric.
C
But that's largely because I've written, I don't know, hundreds of thousands of lines of code for Augur, and I've used Augur in the past for tenure and promotion. I can use Augur to show the things I've been doing with Augur and all the other stuff as an altmetric. But I have to make that up myself, and I'm uniquely situated to be able to do that. I think most scientists are not.
C
Things like, for me, Google Summer of Code and trying to nurture the nascent Augur community: those are all parts of it, but I don't think that's a metric. I think that's what we're starting to call a metrics model. It's obviously much broader in scope than one metric.
C
My argument would be that it is a scientific output. Most of my research has involved the output of software, and I've always put that under scientific output, not service. And, depending on the department and the year and the chair and all that, I get varying degrees of love for that.
B
So maybe I'll give Mako just a little bit of context here. OSPO++ is a group funded by Sloan, and Jacob Green is leading it. It's about setting up OSPOs, open source program offices, within universities: to help with tech transfer things, and perhaps to help with RPT processes.
B
Do we wait for the OSPOs to form and listen to what they have to say in terms of metrics that they would like to see? Or do we try to anticipate? You know, maybe it's a mix of both: predefine some metrics that we're pretty sure you're going to want to see when your OSPO has stood up. And I think Sean right now is talking about setting something up a little ahead of time. So that's, yeah.
D
That's the context. And so what's the relationship between... I mean, I was vaguely familiar with FAIR before, and I've become more familiar with it in the last five minutes. The only context in which I've run into it is basically about putting identifiers and metadata onto digital objects and that kind of stuff, making stuff essentially computer-processable and findable; that was my sort of understanding. (I think so, too, yeah.) And I think it's mostly focused on, I mean, I guess it's very general, but mostly focused on the kinds of more traditional academic products.
D
So
is
this
about
sort
of
like
extending
that,
like
the
simplest
version
of
this
would
be
like?
Okay,
great,
let's
extend
that
to
other
kinds
of
things
right,
I
don't.
I
don't
think.
B
They
just
haven't,
damn
figured
it
out,
they
haven't
figured
it
out
and
they
don't
talk
to
ospo,
plus
plus,
okay.
So,
like
those
two
groups,
don't
necessarily
talk
to
one
another
and
in
the
chaos
project
that
certainly
aren't
not
our
job.
I
don't
think
to
wrangle
that
relationship
like
I
don't
want.
I
don't
have
much
interest
in
doing
that,
and
so
the
question
would
be
like
from
a
fair
perspective.
Are
there
like?
B
Is
there
a
way
that
we
could
capture
kind
of
like
what
you
just
said
as
a
metric
like
if
you're,
if
you
care
about
fair
principles,
here's
kind
of
a
simplified
way
of
understanding,
fair
and
my
concern
is
that
I
don't
understand
fair
well
enough
to
actually
try
to
formalize
it
in
any
way
yeah.
My
understanding.
D
Yeah, my understanding is very superficial. It's like: have you put a DOI on your stuff? So, you know, AAAI does poorly in this regard because they just have URLs, and, I don't know, the ACM does better, or something like that.
C
I've talked quite a lot with some of the FAIR folks, and Matt has too, but I've had a few additional discussions with Dan Katz and some of the groups, Michelle Barker and that small group there. So I have a fairly good understanding of what FAIR is after, and some concerns about...
C
About where those constructs are already represented in the CHAOSS project, and in the tooling that's available in the CHAOSS project. And then somewhere, it's like: you've got all this stuff CHAOSS has done here, and all the stuff that's laid out in this 30-page PDF here, and it's like one of those tests where you have to match different things at different levels of abstraction and different levels of comprehensiveness.
C
That's all part of making, you know, the FAIR software available and findable for other people. And this is where, in the past, I think there's been some real pushback from the open source scientific software folks that we've talked to: the absence of, and I also struggle with saying this all the right way, the absence of any organizing foundation or institute for measuring and cataloging open source. Although they offer one in the FAIR proposal, there's no money behind it, right?
C
That's on the one hand. On the other hand, you have a really stark difference between what's happening in the R community and what's happening in all other open source scientific software. The R community is tight: Karthik has made it easy to accomplish the FAIR principles within the R community, but everything else, which is a lot of things, is not there.
B
Is this a list of the principles? Yes, they have, and now I...
C
Yeah, so the table, Table 3, which is...
C
It's helpful from a FAIR perspective, but the FAIR people are also not... FAIR is part of, I think, communicating this scientific value more broadly and creating criteria for evaluating it, because administrators like checklists. But this is one of, I would say, a dozen documents with different ideas in them, this being the most synthesized: the most synthesized of them, but not the most comprehensive.
C
There are groups, like one in the Netherlands that I've spoken to, but not in a while, who are trying to implement tools that measure the FAIR metrics explicitly. And my argument would be that just looking at the FAIR metrics as they're expressed gets you a certain... it gets you to credibility with people who are looking at software as a scientific asset to be shared, and as something that we can speak to as evidence of scientific contribution.
C
And this is why this has been a complex thing in my headspace: what this really is, is an attempt to translate and map the things that were done in FAIR for data onto software as an asset. If you read the criteria in Table 3, these are the types of things that are frequently seen in organizations that are trying to create standard ways of cataloging or recording data.
D
Yeah,
I
think
that
one
challenge
is
that
a
lot
of
I
think
this
document
is
sort
of
serving
two
purposes
like
the
fair
principles.
Right
one
is
just,
and
I
think
it's
less
like
here's
a
description
of
a
set
of
metrics,
I
mean
that
may
be
part
of
it
or
it
could
be
part
of
it
in
the
long
term.
I
think
a
lot
of
these
are
things.
These
are
things
that
are
not
actually
stored,
at
least
some
of
these
things
in
a
structured
way.
D
There are no standard ways of declaring these things. So it's less "we need to measure these things," which is already being done; well, I mean, I don't know, it can be a little bit complicated, but this feels like these are things that should be recorded in...
A
I attended one of their sessions, and this was the talk they were giving: these are the principles they have developed, but they don't know how to measure them. And then I asked them what their plan is, and they said: we are collaborating with CHAOSS and we are looking at how to measure it, but we are not there yet. So that is one of their to-do items; it's not that they...
D
Are
there
yet
it
it?
It
just
feels
to
me,
like
the
the
the
way
for
them
to
solve.
This
is
less
like,
come
up
with
ways
of
measuring
data,
which
is
already
there
really
like
and
and
more
like,
provide
like
like
a
survey
or
a
structure,
some
sort
of
structured
thing
for
people
who
are
publishing
projects
to
record
a
lot
of
these
things,
or
at
least
some
of
these
things.
Where
do
we
get
these
things?
What
is
the
I
don't
know.
D
Like, how can we place... I think that there are answers to most of these questions, but the answers are often simply not going to be stored in a structured way. And what this wants is a description of those things in a structured way.
C
We can actually show them how to get to the concepts that are underlying what they're asking for, without asking developers to do a whole lot of things that there are no tools for them to do, that there's no standard way for them to do, that would add to their workload, and that would not be a high priority for whoever is supervising the software development, because that person's priority is usually: I need this to work, so I can run this experiment, so I can write this paper.
D
Well,
I
mean
they
might
they
might
I
mean
I
think
that,
like
certainly
not
everyone
who
publishes
a
project
on
github
or
gitlab
is
going
to
that's
for
sure,
maybe
there's
some
additional
sort
of
like
veneer
or
something
or
some
additional
thing
that
you
can
build.
On
top
of
that
that
academics,
who
are
trying
to
publish
work,
that's
going
to
be
put
into
some
sort
of
institutional
repository
or
something
right
like
you
might.
D
A plugin for GitLab, you know; university OSPOs could do that, could store these things. That to me seems like a much more promising way of making sure these things are captured. The alternative, where you try to infer answers to all these questions automatically, just seems really hard and error-prone.
C
Like
metadata
clearly
and
explicitly,
let's
say
f3,
for
example,
metadata
clearly
and
explicitly
include
identifiers
for
all
the
versions
of
the
software.
It
describes
to
me
that
I
think
that
objective
is
a
little
ambiguous.
What
is
the
software?
Is
the
software
where
my
one
project,
or
I
think
I
would
interpret
that
as
the
software,
including
really
you
talk
about
identifiers
for
all
the
versions?
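One hypothetical reading of F3 is that every released version of the software carries its own persistent identifier. A minimal sketch of that reading, where the release records and their field names are illustrative assumptions rather than anything the FAIR documents prescribe:

```python
# Illustrative sketch of the F3 reading discussed above: each released
# version of the software carries its own persistent identifier.
# The record structure is an assumption for illustration, not a FAIR standard.

releases = [
    {"version": "1.0.0", "identifier": "10.5281/zenodo.1111111"},  # hypothetical DOIs
    {"version": "1.1.0", "identifier": "10.5281/zenodo.2222222"},
    {"version": "1.2.0", "identifier": None},  # this release lacks an identifier
]

def versions_missing_identifiers(releases: list) -> list:
    """Return version strings whose release record lacks an identifier."""
    return [r["version"] for r in releases if not r.get("identifier")]

print(versions_missing_identifiers(releases))  # ['1.2.0']
```

Under this reading, a project "passes" F3 only when the list of missing identifiers is empty; the ambiguity raised in the discussion is exactly what counts as "the software" whose versions must all be listed.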
C
Go ahead... John, I mean, I will take it. I owe Michelle the documents that I've described for, like, a month and a half now, so I'm willing to just say: I will have that draft finished prior to the next meeting, perhaps the Monday prior to the next meeting, so people have time to review it. Just an initial attempt to articulate the CHAOSS part of what this...
A
So if we can do one, that means the list can be in the form of all the metrics that we can consider, then being developed over a period of time.
B
Whatever that thing might be. Like, "understanding newcomer retention" could be your model name, and there is a series of metrics that you could look at to get a better handle on that. So I think the step would be for Sean to take a look at one FAIR principle and consider how metrics could inform it.
C
Yeah. I think part of it is just showing that. I like the idea of one example that is mappable, one example that is less easily or less obviously mappable, and maybe a summary statement, because these are adjacent problem spaces that are being approached from very different epistemological perspectives, and putting that in a succinct paragraph or two would be helpful as well. And I mean, I have struggled with the mental work of thinking about what this is.
B
I think doing that would be helpful, but then I think also attending to Mako's point, which is: if some of these principles can't even be inferred from metrics, because the data hasn't been articulated clearly, that is not our problem. It is not CHAOSS's problem to create a system that captures that data.
B
Well, that's good. To me this whole area is such a weird area, because when we talk about corporatized open source, that ship has long since sailed, and we're just in the wake of corporate engagement with open source; we can just capture what's already out there, because that work has been so formalized for so long.
C
There's the tech transfer use case for scientific open source metrics, but they're starting from no understanding of what a pathway to monetization looks like for an open source project. Universities, with exceptions I'm sure some of you could identify, generally don't, you know, create companies around open source software that's developed as part of their scientific enterprise, and yet that possibility, and the possibilities there, are more significant.
C
It's just that the business development and IP lawyer people don't understand it in any way that makes it a funding stream or a tech transfer problem right now. Those conversations that I have with our people, when I talk to our general counsel, are always long conversations, because there's a lot to unpack that the lawyers don't understand.
E
Well, I'm very interested in that, as someone who's not been actively following all this stuff. I think I will learn a lot from that, because it's my first time reading most of those papers.
B
Okay, that's good, all right. I'm trying to think of where, in Value... there was actually something that came up. Sean, were you on the Asia-Pacific call yesterday?
C
Yeah, I don't remember. I do remember labor investment coming up this week; I think it was in the Asia-Pacific meeting.
C
Oh yeah, and that was the one where I was saying that the metrics model we're working on is a model that we can look at from three different perspectives: the project owner, the development or contributor community, and then the community of use. Popularity has these three dimensions, and certainly it's part of what the academic altmetrics people will care about. But there are three discrete foci for what we would call popularity, I believe.
B
Okay, so this one could be a candidate for a metrics model. What came up in the Asia-Pacific call yesterday morning was the metric of project popularity, which June brought up. She wants to understand it, but she's not sure how to proceed, based on how we've written this particular metric.
B
And so her argument was: you can take a look at a variety of things to help understand project popularity, and she doesn't necessarily measure all of these things. For example, people attending events. And what came up is that perhaps project popularity, you can see that this is a metric, Mako, is something we ran into early: some of our metrics are actually composite metrics. If you want to understand popularity, it's actually a collection of a variety of things, and we're trying to avoid that.
D
Got it, yeah, that makes sense. And so is the concept that there's some sort of latent thing called popularity, that all of these things are measures of it, and the model is supposed to somehow reflect it? Okay, yes, all right. That makes sense.
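The "latent popularity" idea discussed here can be sketched as a toy composite: several observed signals are treated as measures of one unobserved construct and combined into a single score. The signal names and weights below are hypothetical illustrations, not CHAOSS-defined metrics:

```python
# Toy sketch of a composite "popularity" metrics model: several observed
# signals (each normalized to [0, 1]) are treated as measures of one latent
# construct and combined into a single score. The signals and weights are
# hypothetical illustrations, not CHAOSS-defined metrics.

def popularity_score(signals: dict, weights: dict) -> float:
    """Weighted average of whichever weighted signals are present."""
    used = {name: value for name, value in signals.items() if name in weights}
    total_weight = sum(weights[name] for name in used)
    return sum(weights[name] * value for name, value in used.items()) / total_weight

weights = {"event_attendance": 0.3, "downloads": 0.4, "forum_activity": 0.3}
signals = {"event_attendance": 0.5, "downloads": 0.8, "forum_activity": 0.2}
print(popularity_score(signals, weights))  # a single composite score, ~0.53
```

Averaging over only the signals that are present reflects the point above: a project owner may not measure all of these things, yet the model should still say something about the latent construct.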
B
Oh, it's released, yeah; this is a released metric. So 2022: we're going to be spending a lot of time cleaning up our metrics, that's what we're going to be doing. This is just the nature of working with documents, I guess, and as we have more people reflect on them, there are better ways to represent the work that we were doing three years ago.
A
So I think we are near the end of the meeting. We have four minutes, so is there anything else that folks wanted to discuss?
B
Yes, I'm showing Mako the other metrics models. For example, we have a DEI event badging program that we run; right now we've badged 42 events. This is a metrics model that is comprised of particular metrics that are all released: the model is the DEI event badging program, and these four metrics (it's actually up to six now) help provide insight into event DEI.
B
Another
model
could
be
project
decline,
and
these
are
things
that
you
could
take
a
look
at
to
better
understand
the
decline
of
a
project.
It
may
not
be
an
exact
science,
but
at
least
kind
of
gets
you
located
in
the
right
spot
to
think
about
the
decline.
Projects
totally
make
sense
and
sean
is
in
part
of
this
process
too.
Sean
is,
is
creating
notebooks
to
which
the
trace
data
metrics
model,
like
dei
event
badging.
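A metrics model of the kind described, a named question with a set of constituent metrics, can be sketched as a small lookup structure. The model names paraphrase the discussion (DEI event badging, project decline); the specific metric identifiers inside each list are illustrative assumptions, not the official CHAOSS metric names:

```python
# Sketch of metrics models as named collections of constituent metrics.
# Model names paraphrase the discussion; the metric identifiers in each
# list are illustrative assumptions, not official CHAOSS metric names.

metrics_models = {
    "DEI event badging": [
        "diversity_access_tickets",
        "family_friendliness",
        "code_of_conduct_at_event",
        "speaker_demographics",
    ],
    "project decline": [
        "contributor_attrition",
        "release_frequency_drop",
        "issue_response_time_growth",
    ],
}

def metrics_for(model_name: str) -> list:
    """Look up the constituent metrics of a named metrics model."""
    return metrics_models.get(model_name, [])

print(len(metrics_for("DEI event badging")))  # 4
```

As noted in the next exchange, a model also needs a statement of how its metrics combine: they could all be equally important, or weighted some other way.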
D
There's
an
actual
there's
an
actual
model
like
it
could
be.
All
of
these
things
are
equally
important.
There
could
be
something
else:
correct,
okay,
yep,
cool.
D
This was great, thank you for having me. I'll definitely come back. In two weeks I'm going to be in Japan; I think the time zone may be quite bad, but I may also have a faculty meeting immediately afterwards, in which case, why not, if I'm up for three...
E
Years... so, so I...