From YouTube: CHAOSS Value Working Group 9-23-21
A
Welcome, everyone, to the CHAOSS Value Working Group meeting for September 23rd. Please add yourself to the minutes.
A
We are proposing a researcher reputation metrics model: what metrics would help to assess the reputation of a researcher, in a way that supports their academic career?
A
Below is the list of metrics that we thought might somehow help to assess a researcher's reputation: organizational project skill demand, job opportunities, and so on.
B
You know, downloads, those kinds of things; it contains all that kind of stuff already. Yep. So, Johan, any thoughts on how we would manage building a metrics model that contains a metric that is itself defined through other metrics? It's getting kind of meta here, if that makes sense.
A
What I recall of how this discussion evolved: when we were working on academic open source project impact, that was focused on the project that academics are developing and its impact, right? But reputation is a holistic picture of that researcher, rather than the impact of a particular project they have developed. They can develop multiple projects.
A
They can do other things which carry some impact for them in the bigger picture.
B
So that's kind of what we're talking about here when we talk about metrics models: what would be a set of some number of metrics (you know, three to seven; I totally made that up, but three to seven) that could be drawn together in a way that would provide meaning for somebody. And then what we're going to do is put that on the website.
B
That's
a
little
bit
more
a
little
bit
more
focused
than
just
60.,
giving
somebody
60,
metrics
and
saying
here,
figure
it
out
yourself
so
so
for
the
metric
model
that
we're
looking
at
it's
called
researcher
reputation,
and
so
one
of
the
things
that
we've
been
talking
about
in
the
value
working
group
is
how
we
think
about
open
source
software
as
connected
to
to
to
researcher
productivity.
So
this
is
a
big
thing,
particularly
like
with
organizations
like
the
sloan
foundation,
rit
rochester,
institute
technologies.
Doing
this
kind
of
work.
B
Johns
hopkins
is
doing
this
type
of
work,
where
they're
starting
to
see
open
source
software
as
a
an
artifact
that
should
be
considered
in
tenure
and
promotion
cases
so
that
we
don't
just
look
at
grants
and
papers
kind
of
the
usual
things
that
we
look
at
so
there's
this
work
is
kind
of
coalescing
around
this,
so
we're
thinking
about
a
metrics
model.
B
That
would
be
just
that
researcher
reputation,
as
probably
we
need
to
be
a
little
clearer,
but
as
derived
from
the
software
that
they
produce,
because
you're
probably
quite
familiar
with
very
strong
software
developers
who
do
amazing
work
in
the
academic
space
and
contributes
very
strongly
in
that
academic
space,
but
they
get.
You
know
how
it
is
at
least
at
least
it
is
in
the
us.
Researchers
get
basically
no
credit
for
software
work.
F
The software engineering field is maturing in terms of artifact badging on papers. A lot of conferences are starting to move towards artifact checking, so researchers are encouraged to submit their artifacts, and then the reviewers can go through to validate them. There are different steps, different badges, up until everything is fully validated and available. This goes along with the open science principles that have also grown within the software engineering field.
F
I just transitioned to a research institute, the Research Institutes of Sweden, a national research institute. What I'm trying to do there right now is to establish some kind of internal open source function that can try to push this way of translating and sharing research as open source, as a way of collaborating.
F
I
feel
I
think
open
source
will
become
more
important,
as
we
said
at
least
from
my
perspective,
as
this
artifact
validation,
prior
trend
and
open
science
is
becoming
more
and
more
adopted.
I
don't
know
about
the
I
feel
like
you
know,
I
cannot
talk
from
software
engineering
perspective.
B
Yep,
no,
that's
fair,
okay!
B
So
it's
it
actually
sounds
like
we're,
maybe
in
slightly
different
spaces
but
kind
of
heading
down
the
same
path
yeah
at
least
in
us
and
and
in
europe.
Just
like
what
do
we
do
with
these
artifacts
like?
How
do
we,
how
do
we,
it
seems
silly
to
just
dismiss
them,
and
so
we
need
to
start
accounting
for
them
a
little
bit
more
deliberately:
okay,
cool.
B
Lines of code changed? Nobody would care about that.
F
To add: wouldn't there be, I don't know, some service that can add a DOI identifier to open source products? I don't remember that service. A DOI, oh yeah, the unique identifier.
B
Yeah
there
is,
there
is
the
the
journal
of
open
source
software
which
assigns
so
basically,
you
submit.
I
think
they
actually
give
you
a
doi
number
and
you
submit
it
and
they
review
it's.
They
review
the
software
as
things
like
having
a
a
well-structured
readme
like
a
fair
test,
suite
associated
with
the
software.
You
know
kind
of
the
software
things
and
the
journal
of
open
source
software,
I
believe,
does
give
you
a
doi
that
can
be
cited.
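As an illustrative sketch only (not JOSS's actual review tooling; real reviews are done by human reviewers against a fuller checklist), checks like "has a well-structured README" and "has a test suite" could be automated as a quick pre-submission sanity check:

```python
from pathlib import Path

def joss_style_checks(repo_root: str) -> dict:
    """Rough pre-submission checks inspired by JOSS review criteria.

    Illustrative assumptions: a README* file at the root, a tests/
    directory or top-level test_*.py files, and a LICENSE* file.
    """
    root = Path(repo_root)
    has_readme = any(root.glob("README*"))
    has_tests = (root / "tests").is_dir() or any(root.glob("test_*.py"))
    has_license = any(root.glob("LICENSE*"))
    return {"readme": has_readme, "tests": has_tests, "license": has_license}
```

The actual review covers much more (functionality, documentation quality, community guidelines); this only catches the mechanical basics.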
B
I think it's a good point. I like Johan's point that new contributors may not be a great window into impact or reputation, but something like having a look at the downstream citations, or some sort of downstream dependency, might be.
A
How
about
downloads
and
clones
of
the
same
software
that
a
researcher
has
developed
will
that
be
considered
as
an
impact
or
when.
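For what it's worth, GitHub does expose clone counts through its traffic API (`GET /repos/{owner}/{repo}/traffic/clones`, which requires push access and only covers the last 14 days), so clones are at least measurable. A minimal sketch of summarizing such a payload; whether this constitutes "impact" is the open question, not the plumbing:

```python
def summarize_clone_traffic(payload: dict) -> dict:
    """Summarize a GitHub /traffic/clones payload into simple usage figures.

    The payload shape ({"count", "uniques", "clones": [...]}) follows
    GitHub's documented traffic API. Unique cloners are usually the more
    meaningful signal, since CI systems can inflate raw clone counts.
    """
    daily = payload.get("clones", [])
    return {
        "total_clones": payload.get("count", 0),
        "unique_cloners": payload.get("uniques", 0),
        "days_with_activity": sum(1 for d in daily if d.get("count", 0) > 0),
    }
```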
A
How about the list below which we have, the list of existing metrics? Can these help us to answer that question of the researcher reputation metrics model?
F
Just to add context: maybe you know Debricked; it's a startup from Malmö in Sweden. They were on the CHAOSS podcast a couple of episodes back.
F
Yeah
so
they
they're
quite
new.
They
they
do
what
you
call
it
component
analysis,
static
component
analysis,
security
and
they've
also
added
license,
but
not
just
recently,
also
a
health
check
where
they
take
different
metrics
with
inspiration
from
chaos,
including
like
30
or
40,
something,
and
then
they
compile
it
into
what
they
call
features
and
then
into
two
or
three
different
flags.
So
so
the
companies
basically
can
can
scan
their
whole
code
base.
Yeah.
F
Companies get a number, a composite metric or a meta-metric, for whether they should look more into that project. Maybe their way of composing these meta-metrics would be interesting to look into in this context.
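A minimal sketch of that composing step: normalize raw metrics into a common scale, average them into named features, then threshold features into coarse flags. The metric names, ranges, and thresholds below are illustrative assumptions, not Debricked's actual (non-public) pipeline:

```python
def normalize(value: float, lo: float, hi: float) -> float:
    """Min-max normalize into [0, 1]; a reversed (lo > hi) range inverts the scale."""
    if hi == lo:
        return 0.0
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def compile_health(metrics: dict) -> dict:
    """Compile raw metrics into named features, then traffic-light flags.

    All metric names, normalization ranges, and flag thresholds here are
    made-up assumptions for illustration.
    """
    features_spec = {
        "activity": {"commits_last_90d": (0, 300), "active_contributors": (0, 30)},
        "responsiveness": {"median_issue_response_days": (30, 0)},  # fewer days is better
    }
    features = {}
    for name, spec in features_spec.items():
        scores = [normalize(metrics.get(m, 0.0), lo, hi) for m, (lo, hi) in spec.items()]
        features[name] = sum(scores) / len(scores)
    flags = {
        name: "green" if s >= 0.66 else "yellow" if s >= 0.33 else "red"
        for name, s in features.items()
    }
    return {"features": features, "flags": flags}
```

The design point is the layering: many noisy metrics, a few interpretable features, and two or three flags that a non-expert can act on.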
D
I'm working on it; there's actually a pretty complex map of interests between CHAOSS and what FAIR is trying to accomplish. I've spent a lot of time reading through their documents and their positions, and I think there's a sort of impedance mismatch with our goals that I'm working to explain, but rather slowly. Hopefully I can.
B
Do
that
soon,
could
you
also
in
that
discussion?
I
mean
talk
about
drawing
people
from
the
fair
for
our
space
into
the
chaos
discussion
too
yeah.
Absolutely
absolutely.
I
thought
that
was
a
good
point
on
that
board.
Mailing
list
like
we
need
to
find
we
can
provide
support,
but,
like
it's,
the
door
has
to
swing
both
ways,
kind
of
thing,
so
yeah.
A
Okay,
what
meeting
is
that
with
it's
working
like
research
data
alliance
and
they
are
working
on
the
fair
policy
and
they
have
some
meeting
today.
I
was
just
trying
to
go
there
and
get
the
idea
of
what
is
happening
in
the
domain
as
we
were
developing
this
metric.
That's.
B
I've never seen this side of the research process, but ACM badges artifacts.
D
I have heard of it, but I haven't worked with it. So this is coming out of the software engineering community? Yeah.
F
I
mean
it's,
not
it's
not
exclusive
open
source
products.
It's
for
artifacts,
I
mean
you,
you
can
make
some
artifacts
research
artifacts
available
in
different
ways,
but
I
mean
it's
it's
a
way
of
looking
at
the
quality
of
a
scientific
research,
scientific,
open
source
project
to
have
this
badge
attached
to
it.
Then
there
is,
it
has
been
peer,
reviewed
and
gone
through
the
the
peer
review
process
related
to
a
research
conference.
F
On
part
of
the
yeah
and
researchers
are
encouraged
to
share
their
data.
If
not,
they
should
write
reasons
why
russian,
but
okay,
we
don't.
We
don't
smash
down
on
that
much,
but
they
they
are
encouraged
to
submit
their
artifacts
and
then
the
artifacts
were
done.
Peer-Reviewed
were
gone
through
them
depending
on
the
quality
or
the
level
of
it.
They
are.
A
A question: how is this different from the Journal of Open Source Software, which does the same thing? They evaluate the software and they assign a number. Here they are assigning the badges, and the Journal of Open Source Software is assigning a DOI number to it, I guess.
F
You
can
see
that
the
effect
that
is
used
to
create
the
results
or
whatever
it's
it
has
gone
through
this
weekly
process
from
how
to
have
this
level
of
volatility,
trust
the
verifiability.
B
I don't know if it's possible for a piece of software to not hit all four of these, right? But if we just have a FAIR metric, I mean, we've done this before, where there's an organization that has done work, say, in fact, like project velocity, right? They documented the work that they were doing, and the FAIR metrics would be the same: this is not necessarily our work; we're just documenting it as a citable or findable kind of description of what FAIR is. So, Sean, you're involved in these conversations too: is there value in documenting a FAIR metric, for us, for the FAIR community?
D
I
think
there
is,
but
I
think
what
that
metric
might
be
is
is
not
going
to
be
a
trivial
to
sort
out.
Why?
Because
I
think
what
they
want
from
metrics
is
not
how
we
think
of
metrics.
I
think,
there's
an
education
process
that
needs
to
take
place
before.
D
D
We
can
like
do
what
we've
done
for
other
scientists
and
build
them
instances
that
show
them
examples
or
show
the
metrics
that
are
relevant,
but
I
think
getting
that
then
mapped
to
what
their
concerns
are.
Is
is
a
it's
an
education
process,
they're
starting
from
a
very
rudimentary,
and,
I
think,
overly
influenced
by
some
rudimentary
work
in
the
in
the
mounting
software
repositories
community
that
that
that
they
see
as
sort
of
canon.
D
But
it
is
generations
behind
the
kinds
of
canonical
metrics
that
we've
defined
and
the
ways
that
we
are
using
them
to
understand.
Open
source
project
health
like
everything
for
the
most
part
at
msr,
is
commits
like
they
don't
they
don't
go
a
whole
lot
past.
That,
okay-
and
I
think,
that's
really
the
most
significant.
D
Commit structure, and also the cataloging, like the work that Michelle Barker does; I can't remember the name of her organization. ReSA, yeah. So ReSA does some cataloging. They rely a great deal on the CRAN and R community, because a lot of this open source software is actually written in R. I think it's more like the open source scientific software problem, but from the perspective of computer scientists.
D
Yeah, it would, and I think there's room, too, around what findable means in different scientific contexts. I don't think they've resolved that; different kinds of science handle open source software differently, and we've faced that issue often. I think they are not focused on that diversity of practice yet, because they're focused on trying to define what FAIR means at a very high level. Okay.
D
Findable and accessible are easy, so that's the lowest-hanging fruit. Interoperable is, of course, a bag of worms, right? Interoperable is probably the place where scientific communities differ most: each scientific community is going to differ in what they think that is, and within scientific communities you'll have a mix of proprietary and open source software that exchanges data in different formats, and the proprietary vendors like to keep their formats kind of close to the chest. Or is that right, close to the chest?
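As a hedged sketch of what a minimal FAIR check for software might look like (the field names and criteria are assumptions, not an agreed FAIR-for-software metric), the four principles could be scored from simple observable properties of a software record, with interoperability being the hardest to capture:

```python
def fair_check(record: dict) -> dict:
    """Score a software record against the four FAIR principles as booleans.

    The fields checked (doi, public_repo_url, open_formats, license) are
    illustrative stand-ins; what counts as interoperable in particular
    differs between scientific communities.
    """
    return {
        "findable": bool(record.get("doi")),                # e.g. a citable DOI
        "accessible": bool(record.get("public_repo_url")),  # publicly retrievable
        "interoperable": bool(record.get("open_formats")),  # open exchange formats
        "reusable": bool(record.get("license")),            # an explicit license
    }
```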
B
Okay. I mean, if we could kind of work it out, even just a little bit, that would give you something, like an artifact, to bring to that discussion. Yeah.
B
The community is developing more tied to findability. Okay, okay; I think that's good.
B
All
right:
well,
we
didn't
finish
anything
today,
but
yes,
but
we
had
a
plan
for
next
week.
Yeah!
No,
that's
good!
I
think
we
got
through
a
couple
things
right,
so
maybe
just
I
mean
we
only
have
a
few
minutes
here,
but
just
to
recap,
we
may
need
to
think
about
researcher
reputation
as
a
metrics
model,
but
this
almost
seems
like
this
needs
to
wait
just
a
little
bit
until
we
start
addressing
some
larger
set
of
metrics.
That
could
help
inform
that.
B
Right on. All right, cool, I think we're good.