From YouTube: CHAOSS Value Working Group 6-17-21
A
I see. Thank you, thank you, Elizabeth. So we are recording. Welcome to the CHAOSS Value Working Group meeting 17. Yes, so Steven has shared a new link on researcher reputation, like the proposed matter.
B
Yeah, so the National Academies are kind of the big guns in academia. There used to be these old TV commercials, for Merrill Lynch or one of the other financial groups, right, where they're at cocktail parties or dinners out or whatever, and there's lots of talking and bustle in the background. And one guy leans over to somebody: his broker is Merrill Lynch, and Merrill Lynch says... and the whole room goes quiet and everybody leans in, like this.
B
That's kind of like the National Academies in academia: they speak and people listen. That's a bit of a dated reference, by the way. Yes, I know, I know, but it still works, right?
B
I refrained from adding a YouTube link to the old commercial, trying not to take us too far down that road. But yeah, there are specific callouts in there to the stuff that we are trying to figure out with academic value. I need to find the link.
B
And as part of that, in the letter itself, there's this bullet: "review and update university open scholarship policies to harmonize with..." and so on. So there's some kind of relationship there, and services and training toward FAIR principles of research. But especially: "track progress toward open scholarship, while ensuring that new scholarship standards and policies set by research sponsors are satisfied."
B
So that's just the body of the letter, and then there's a two-page document that's linked in the middle of it. This guide to supporting open scholarship for presidents and provosts again has callouts to policy work: reviewing how tenure and promotion work, valuing diverse types of research projects, metrics that incentivize the open dissemination of articles, data, and other research outputs, and valuing collaborative research.
C
So this is interesting, because looking at the two-page letter, that PDF, one of the things... let me see if I can share my screen.
C
So down in here, services and training, yep. A lot of these are like "here are the things that you should do." They don't really give much guidance as to how to do them, and metrics could perhaps start drawing some of that out.
C
We are already talking with the FAIR4RS folks. Do you know Dan Katz at Illinois? (I know the name.) Okay, and then Michelle Barker, who's down in Australia.
C
If I just picked the highest-level thing, like financing open scholarship: what would be the ways that we could... are you thinking about what are the ways that CHAOSS could help kind of define what the metrics would be around how we determine financial support, like where to look? You know what I mean? Like: do you have a grant program for providing open access? Do you have a... what's that thing we have here at UNO, Georg, the open digital commons thing?
B
Looking at this merger between the kinds of things that CHAOSS does and the OSF, the open science platform from the Center for Open Science: what could they do, independently or together, to kind of support some of this stuff?
B
Around open science, there have been journal articles and research studies that say, you know, professors think this is a good idea, but most of them don't actually follow the practices. Well, they don't follow the practices because they're not trained, it's more work, and there's no support for it, right? "I'm going to do all this work to be open, and yet all the university cares about is the journal article anyway. So why am I bothering?"
C
Okay, so the path that I'm kind of seeing is: CHAOSS as a project is focused on trying to improve transparency around community health, and value is one of those areas. And so the intersection is, as faculty members are producing software artifacts, how do we ascribe academic value to that software work? And what would be the metrics that we would need to develop to help people, administrators, really people that aren't the academic themselves?
B
Well, it ends up being the academic themselves, because as the tide moves this way, the academic has to defend it, right? It's not just the admins defining it. It's the faculty members: here's how you have to pitch it, right? This is why this is important. Here's my community around my research.
B
Is it that you replicate, so you just download it and copy it, and you don't really fork off of it? Is it that you don't know that you can fork it and make your own thing and upload a variation? Is it part of the practice or not part of the practice? Is it ignoring your "why," right? The OSF platform will pull in stuff, as will the CHAOSS platform, from your GitHub stuff, your software. Why is nobody forking? What's the...
B
What's the science-equivalent metric of a fork, and how do you track it, right? How do we find a way to say, well, yes, people are actually replicating my work, and the way to make that show up is... aside from the journal article citations, there are also these other ways in which the fact that people are using my stuff shows up. And what are they?
C
So listening to the talk, I jotted a note down: maybe from a CHAOSS perspective, something like downloads or forks or number of closed issues, kind of these repository-ish things, right, is too granular of a metric. And I dropped a totally random metric in there called "repository software activity," and it could be measured in a variety of different ways, like, to your point, forks. As you know, forks may or may not be a good measure, depending kind of on the context.
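To make the "repository software activity" idea concrete: here is a minimal sketch of how such a signal could be pulled from the GitHub REST API. Both the metric name and the choice of fields are the speaker's ad-hoc proposal in this meeting, not an established CHAOSS metric, so the mapping below is illustrative only.

```python
import json
from urllib.request import Request, urlopen

GITHUB_API = "https://api.github.com/repos/{owner}/{repo}"

def fetch_repo_metadata(owner: str, repo: str) -> dict:
    """Fetch public repository metadata (unauthenticated, rate-limited)."""
    req = Request(
        GITHUB_API.format(owner=owner, repo=repo),
        headers={"Accept": "application/vnd.github+json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)

def activity_signals(meta: dict) -> dict:
    """Reduce raw repo metadata to the rough signals discussed here: forks,
    stars, open issues. Which of these (if any) belong in a 'repository
    software activity' metric is exactly the open question."""
    return {
        "forks": meta.get("forks_count", 0),
        "stars": meta.get("stargazers_count", 0),
        "open_issues": meta.get("open_issues_count", 0),
    }

# Example (requires network access):
# print(activity_signals(fetch_repo_metadata("chaoss", "grimoirelab")))
```

As the discussion notes, a repository can show zero forks and still be heavily replicated by download-and-copy, so any score built on these fields would need context.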
C
You could have zero forks, from a repository management perspective, just on GitHub. So maybe from an academic perspective, just to start, just to kind of break the ice on this, it's something like repository software activity. And another metric is something like software citations, you know, like, are people... I'm thinking of the James Howison stuff.
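One concrete mechanism behind "software citations" is machine-readable citation metadata shipped with the code. As an illustration (every field value below is made up), a `CITATION.cff` file at the root of a repository lets GitHub and citation tooling generate a proper software citation:

```yaml
# CITATION.cff: citation metadata GitHub surfaces via "Cite this repository"
cff-version: 1.2.0
title: Example Research Tool          # hypothetical project
authors:
  - family-names: Doe
    given-names: Jane
version: 1.0.0
doi: 10.5281/zenodo.0000000           # placeholder DOI
date-released: 2021-06-17
```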
B
While I was listening to you: there's this group working on metadata for preprints.
B
That may be another piece of it. I don't know how much of this OSF already does and what the crossover is, but...
C
I think preprints are really important too, because there's the published journal article, which, from concept to publication, as we know, can take five years. And so preprints became pretty critical during COVID, because in the medical community, waiting for a paper to be published in a journal before it had scientific impact was just too long; the cycle was just too long. And so preprints serve as a way to get that scientific information out to a community quite a bit faster.
C
If you could get that data, that would be fantastic: my piece of software has been cited a hundred thousand times, and the software itself is kind of adhering to FAIR principles.
C
So the highest level could simply be: we have three metrics that help... again, I'm just trying to break the ice on metrics that would help in this domain, and the three metrics would be those listed there: repository software activity, software citations, and adherence to FAIR principles. And if, down the road, it made sense to make a new metric that is simply about findability, then great; if it makes sense to make a new metric that's just about interoperability, that's also great. But some...
A
Matt, we cannot hear you, you are muted.

C
Oh, I just said hello. Watchers of the YouTube video, you can see me browse the web, so I'm sorry if I was talking. So, Georg, these are your points. This is to your point here: maybe some guides to what can constitute the particular components around F-A-I-R.
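As a sketch of how "guides to the components" could become something checkable, here is a toy checklist that maps a few observable repository properties onto the four FAIR letters. The mapping of checks to letters is an assumption made up for illustration, not an agreed CHAOSS or FAIR4RS definition; deciding which checks are appropriate is exactly the guidance the group is asking for.

```python
# Toy FAIR-for-software checklist. Which observable property counts toward
# which letter is an illustrative assumption, not a standard.
FAIR_CHECKS = {
    "F": ["has_doi", "has_descriptive_metadata"],       # Findable
    "A": ["is_publicly_downloadable", "has_open_license"],  # Accessible
    "I": ["uses_standard_data_formats"],                # Interoperable
    "R": ["has_license_file", "has_documentation", "has_citation_file"],  # Reusable
}

def fair_scores(observed: set) -> dict:
    """Fraction of checks satisfied per FAIR component, given the set of
    properties observed on a repository."""
    return {
        letter: sum(check in observed for check in checks) / len(checks)
        for letter, checks in FAIR_CHECKS.items()
    }
```

For example, a repository with only a DOI and an open license would score 0.5 on F, 0.5 on A, and 0 on I and R under this particular (hypothetical) mapping.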
E
Looking through the table, they're more aspirational, like "this is what the outcome should be," and what I was looking for is: how do we give guidance on how you answer this? So if "reusable" is "software is described with a plurality of accurate and relevant attributes," how do you actually demonstrate that your software meets this criterion?
C
I think that's fair. I think, not to be colloquial, they're fair... FAIR. But some are considerably more applied, to your point, and then others... I did see one under "accessible" like "adheres to community standards," whatever that might mean.
A
So, in the remaining time, we have one metric which we completed a little bit. Maybe we take a look at it now, or whatever everyone suggests.
A
The metric that we developed, it's, maybe I would say, eighty percent done. We could just take a look and finalize it.