From YouTube: CHAOSS OSPO Working Group Jan 26 2023
Description
Minutes from this meeting can be found here: https://docs.google.com/document/d/1Bf6a1Ywi4m0Ywo4vuBBp3Q9_AA_QKbWf99WxAqRbpMw/edit
A
Thank you. All right, so today we have... I sent the agenda out kind of ahead of time. I'm seeing if Emma is on, because I think Emma was going to spend a little bit of time talking about work that they're doing at Microsoft with respect to metrics they've been using. So without Emma here right now, I'll just postpone that item and go to the next item on the list, which is Dawn.
B
Yeah, it just occurred to me that the OSPOCon CFP, that is, our Summit CFP, closes I think Monday or Tuesday next week, and I was thinking it might be interesting to put together a metrics panel. A lot of times we do metrics panels at these events and it's all of the usual suspects from CHAOSS, but I was thinking that maybe, instead of doing that, we limit it to people who work in OSPOs and do CHAOSS metrics.
B
We can talk about work that we're doing in this group; we can talk about the metrics that we're using in individual companies. So I was curious if people thought this was a good idea, and if so, whether there are people working in OSPOs who'd want to be on the panel.
C
I'm happy to, but I've also been on the CHAOSS panel of usual suspects, so I don't have to be on it. But I'm happy to support it with questions, moderate, whatever role makes sense.
B
See, I think Emma Irwin just joined. Emma, we skipped your agenda item and went to mine, and I have a question for you. Are you going to the Open Source Summit in Vancouver in person, and if so, would you like to be on an OSPOCon panel proposal about how we use metrics in our OSPOs?
D
I think it would be helpful... I would be interested in collecting questions from OSPOs who maybe want to do metrics. I don't necessarily feel like I could sit on a panel, but I definitely have questions for the group here.
D
So yeah, I feel like I have imposter syndrome when it comes to metrics, so I'm not sure... yeah, maybe, okay.
A
Right, great. Well, I'll consider that a potential success. Thank you.
A
Good job, and like I said, I'll share that doc in the Slack channel just a little bit later this afternoon. Yeah, Emma, are you... is Emma still on? Are you there?
A
All right, so you are here, but we'll continue and let you sort out the issues. Okay, so I'll talk, and as soon as Emma hops on, we'll go to her.
A
So I did want to talk just a little bit. I had put together this table based on the conversations that we had last time, if you recall: kind of, what are some goals for this OSPO working group? If you haven't had a chance to take a look at it, if you could just take...
A
It kind of brings ideas together: things that we are currently doing in the CHAOSS project, and areas where we aren't currently doing any work right now. So I think the first one was around metrics model development, which I'll talk a little bit about today.
A
Another was about tool development and availability. So this is, for example, the work that Sean is doing with Augur, and that Luis and the folks with GrimoireLab are working on, making those tools more readily available within OSPOs. There was a conversation, and Anna had put a comment in there too, about value creation around standards, patterns, and taxonomies, and I think, Sophia...
A
...you have talked about taxonomies as well with respect to this work, so: how do we go about developing those? And then the last two are more about communications. One is communications more explicitly with OSPOs, kind of to the point of your panel, Dawn: how do we talk directly with OSPOs who are using CHAOSS tools, CHAOSS metrics, and CHAOSS metrics models? I think that falls in there. And then what I picked out last was just broader communication about the work that we're doing in open source.
A
We're going to do all the things by the next meeting. So I guess the one I wanted to talk about today was metrics model development, and there were kind of two parts to this.
A
So one part, just as a setup here, is that the CHAOSS project develops metrics: single, individual, atomic metrics. Those metrics might be something like the age of an issue, or comments on a pull request; very finite things. Over the years we found that the individual metrics themselves didn't have a lot of power alone, so what we've been doing more recently is developing metrics models, which are really collections of metrics that are meant to have an impact collectively.
A
So you bring a series of metrics together to produce a metrics model. And Dawn, what I'd like to do is actually bring up the one that you had proposed, the starter metrics model, just as an idea of what a metrics model is about. This is one that we were developing last week in one of our other working groups, and it gives you an idea of what a metrics model looks like. So, Dawn.
B
This kind of came up because I hadn't really thought about it as a metrics model, but these are the four metrics that I use as kind of a baseline project health measurement across all of our projects. This is super simple, super easy, and it really is designed to give maintainers and project owners a start at looking at project health, with the idea that if it's a big project, an important project, they'll build on it and do some other metrics.
B
So I did that with the idea that other OSPOs and other projects could use it. This is just kind of a baseline: let's start with these four metrics, see what they tell us, and then figure out what we're missing. So this is just sort of a starter pack for project health metrics.
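To make the "starter pack" idea concrete: a metrics model like the one Dawn describes is just a small bundle of atomic metrics computed together. The recording does not name her four metrics, so the three below (median days to close, distinct contributors, overall activity) are illustrative stand-ins, computed from a made-up issue/PR export.

```python
from datetime import datetime
from statistics import median

# Hypothetical export: each record is one closed issue or PR, with the
# author and ISO open/close dates. This data is invented for the sketch.
EVENTS = [
    {"author": "alice", "opened": "2023-01-02", "closed": "2023-01-05"},
    {"author": "bob",   "opened": "2023-01-03", "closed": "2023-01-10"},
    {"author": "alice", "opened": "2023-01-08", "closed": "2023-01-09"},
    {"author": "carol", "opened": "2023-01-09", "closed": "2023-01-16"},
]

def days_to_close(event):
    # Atomic metric: how long one item stayed open, in whole days.
    opened = datetime.fromisoformat(event["opened"])
    closed = datetime.fromisoformat(event["closed"])
    return (closed - opened).days

def starter_health_report(events):
    # A "metrics model" in miniature: several atomic metrics collected
    # so that, together, they say something about project health.
    return {
        "median_days_to_close": median(days_to_close(e) for e in events),
        "contributors": len({e["author"] for e in events}),
        "activity": len(events),
    }

report = starter_health_report(EVENTS)
```

As Sophia notes later in the call, a number like `median_days_to_close` only means something for human-driven behavior; auto-close bots would distort it.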
A
Thanks. So: thoughts or questions for Dawn on this metrics model?
F
The thought I have as I look at it, Dawn, is that I think you arrived at these metrics by finding metrics that are useful and actually employing them in practice. So the model didn't come entirely from the ether; it came from your practice in your OSPO. Sometimes when we think about metrics models, it's important to remember that the really good ones probably come from things that we use every day, or collections of metrics that we are likely to use.
F
They aren't necessarily something that we just come up with. Am I really quiet now? Geez, okay, I'll try to speak louder and figure out my audio.
B
Yeah, that's exactly right. These are the metrics that I use on a day-to-day basis, and I picked these four because I think they tell very different things about a project, so I think they're good indicators of potential problem areas where someone might want to dig in a bit more.
C
I've seen these before, because you've shared this in a couple of different formats, as mentioned, and I generally love the idea of having a starter pack. I think there's only one thing that I would want to change.
C
I would just slightly want to clarify, and you mentioned this verbally, that this is really applicable to the projects that you have majority ownership of, whereas that may not necessarily be all the projects that you're monitoring as an organization. So I think just qualifying it as being for projects that you have that sort of leadership role in, because there could be other organizations participating, and for those projects you have more control over things like responsivity and operational health.
C
If you're just one participant in the group, you might have less control over these things, so I think just qualifying that this is really where it's aligned. And in terms of my own personal nitpicking, I always love to comment on areas where these kinds of metrics are less applicable, say if you have any kind of automation; I always point out that time to close is heavily influenced by auto-close policies in some projects.
C
So just the caveat that this only applies if it's actually looking at human-driven behaviors, not automated ones, and that some metrics don't make sense in that context. So it's mostly just slight tweaks: ensuring that people know who this is best suited for and where it could go wrong. Other than that, I love it and I think we should publish it.
B
Yeah, those are all really good points; you're absolutely right. We actually look at different things when we look at third-party projects versus the VMware-originated ones, and these are designed for projects that you have a lot more control over, for sure. And yeah, there are loads of caveats around some of these metrics, and when they're applicable and when they're not. So I'll look at how best to add some of that to the metrics models. Good points, thank you.
A
Thanks. Other comments? And Emma, I'm seeing that you're...
A
Well, Emma, I'll turn it over to you here in just a second; let me just finish a thought on these metrics models. Great, all right. So the point being that we're developing these metrics models. There's another point here too, which is that we also have the spreadsheet where we are tracking the development of other metrics models. The one that Dawn shared (thanks, Dawn) was just one of about 10 that we're working towards publishing.
A
And then, sorry, maybe to back up just real quickly: to Sophia's and Sean's and Dawn's point, one of the things is that if we can communicate with OSPOs a little bit more effectively, we can help identify which metrics models are being implemented by many of the folks on this call. It would be great to publish those and share them. So part of what I think the CHAOSS project can do is try to improve that communication, identify what those models are, work to publish them, and share them with others on this call and with the TODO Group more broadly. And then, just to also highlight one thing, I don't know if you saw this: it was from Luis, from OSPOlogy Live in the Netherlands just recently. If anybody was there, it looks like they did some sort of workshop sessions where they identified, through the goal-question-metric approach, what appear to be some metrics models in terms of, say, community activity, release frequency, and dependencies. Some of these look like risk-related metrics models. So what I'm going to do (I haven't had a chance to take a look at this; I only saw it maybe five or ten minutes ago) is see how the work that came out of that OSPOlogy Live session aligns with some of the metrics models that we have.
A
So, for example, we do have a metrics model called Community Activity, and I'm curious what came out of the workshop versus the work that we have done. So I'll try to bring those together as well, so that we don't have two different conversations going in two different directions. All right. And then, finally, our goal in the CHAOSS project is not just to collect and publish these metrics models, but also to work towards implementing them in CHAOSS software.
A
So the published models themselves are useful; I think they create a nice conversation. But our goals are also to work with folks at GrimoireLab and folks at Augur to actually get these metrics models to be deployable in practice, so you can see them in real life. Sean, I don't know if you have any comments on the processes you go through at Augur in deploying things like metrics models.
A
Okay, first, the idea here is that the metrics models seem to have a lot of good traction. I think it's kind of our goal here in this working group, as part of our overall goals in 2023, to collect the metrics models that currently exist in the world, work to help publish them and share them with other people, and also get them put into practice and implemented through our software.
G
So I'm not going to use presentation mode, just because I'm having trouble navigating it and I don't want to waste any more time. But what I thought I would do is share some of the thought processes and where we are. There's a lot of metrics work happening inside of Microsoft; this is specific to some of the OSPO work that we're doing. I just thought I'd share it and put it out there for feedback or anything else. So this is a deck that I circulated.
G
The background is that almost two years ago now, maybe a year and a half ago, there was a question that was coming up all the time, from maintainers up to the OSPO: what should I be measuring? What is the baseline for how other projects are doing?
G
What do I hold myself to? These are probably all the same questions that you get. I just have this little parrot here because it's a nudge for how I'm thinking about all of this work: it's less about compliance and more about the well-being of the project. The way that I think this question breaks down is into kind of three checklists.
G
So the first is: what are the questions that we should be asking? Then: how can we answer those questions? And then: how do I interpret those for my project, and maybe share and ask other people; what's that community-of-practice piece? And so initially I broke up this work that way.
G
We had the kind participation of Matt and Sean and Elizabeth, initially, last year in a working group, to kind of figure out what the questions are and where the biggest heat is, I guess, in the company. That included, I think, Grace.
G
I also saw Grace was part of this call, from the GitHub side. We were just trying to get to what the things are that we want to invest in first and learn about first. So these are some of the first focus areas, as we call them, and I apologize if I get the way CHAOSS describes them a little bit wrong, but they're focus areas: security is a focus area, and within it there might be a set of questions.
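The structure being described (focus areas containing questions, each answered by one or more metrics) could be sketched as a small mapping. Every area, question, and metric name below is a made-up placeholder for illustration, not Microsoft's or CHAOSS's actual list.

```python
# Illustrative focus-area structure: area -> question -> metrics that
# help answer it. In the setup described on the call, each question
# would also link out to a published CHAOSS question.
FOCUS_AREAS = {
    "security": {
        "How does my repo score on the OpenSSF Scorecard?": ["scorecard_score"],
        "How quickly are vulnerability reports triaged?": ["time_to_first_response"],
    },
    "community": {
        "Are we attracting new contributors?": ["new_contributors"],
    },
}

def metrics_for(area):
    # Collect every metric needed to answer all questions in one focus
    # area, deduplicated and in a stable order.
    return sorted({m for metrics in FOCUS_AREAS[area].values() for m in metrics})
```

Keeping the question layer explicit is what makes this a goal-question-metric structure rather than a flat metric list: the metric only exists because some question needs it.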
G
So that's where we landed. We had a lot of great conversations, a lot of sharing, across the company, with lots of different projects; it's a great way to cross-pollinate, having that kind of working group. So through this work, and thanks to CHAOSS, we came up with a set of questions.
G
These are questions we should be asking, and we very heavily (maybe not the right word) made sure that we were aligning with the questions that CHAOSS had published. We weren't trying to create anything new here; we were really trying to learn from our peers and build on the work of the CHAOSS project. So internally, this is a link that takes you to a repository of all the questions that we're asking, but they all link out to the CHAOSS questions.
G
So I think this is really helpful to folks who were internally asking some of these questions: they can see this work is happening, and they can be a part of it, outside of Microsoft, if they want to be; that is encouraged all the time.
G
Okay, I'll refer to that a little bit later, but one of the things we decided to do was to experiment with bringing one of these metrics into a visible place for maintainers. We happen to have some contributors to the OpenSSF, and so this is our question: how does my repo score according to the OpenSSF Scorecard? I think that should be aligned with the CHAOSS question. And then we popped it into our internal open source repository.
G
So this is a place where, if someone wants to come and look at their repository information, these are some of the things they can find, and we just put this OpenSSF score in there for people to interact with. We circulated it a little bit, put it in our internal newsletter, just to get people interacting with it. I will say, when you click on that link, you get the full breakdown. This was something that we also took to security experts in the organization, because having champions in the expertise area seemed really important. We were able to get some validation that these probably mattered, and that definitely some of the breakdowns matter more.
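For readers who want to see what that "full breakdown" contains: the OpenSSF Scorecard CLI can emit JSON (for example `scorecard --repo=github.com/OWNER/REPO --format json`), and a few lines of Python can surface the per-check scores that Emma's security experts cared about. The JSON below is a trimmed, invented example of that shape; check your Scorecard release's actual output before relying on field names.

```python
import json

# Trimmed, illustrative example of a Scorecard JSON result: an overall
# score plus a list of per-check scores with reasons.
RESULT = json.loads("""
{
  "score": 6.5,
  "checks": [
    {"name": "Maintained",        "score": 10, "reason": "30 commits in last 90 days"},
    {"name": "Branch-Protection", "score": 3,  "reason": "branch protection not enabled"},
    {"name": "Code-Review",       "score": 8,  "reason": "most changes reviewed"}
  ]
}
""")

def weakest_checks(result, threshold=5):
    # The breakdown view: which checks drag the overall score down,
    # i.e. where a maintainer might invest first.
    return [c["name"] for c in result["checks"] if c["score"] < threshold]
```

This is exactly the "where should they invest" question Emma raises next: the overall number alone does not tell a maintainer which check to fix.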
G
We still haven't gotten to how we might highlight those. If someone comes to the repo and they get this scorecard and they open it, we want to also be able to help them figure out where they should invest, according to what Microsoft cares about and what will make the project better. So this was one attempt to have the method, and this is the challenging part.
G
I don't know, and this is why I proposed in the last meeting that we think about remix and reuse of methods. Anyway, so that was our first pilot. But then we had this broader question: if we wanted to say someone at Microsoft had a gold star, or met the gold standard for open source, what would that look like? We had sort of played around with this idea of tiers, but that didn't quite work, because we don't want to... here's a mock-up that we did, with bronze, silver, gold.
G
We didn't actually want to lock in levels; we realized that didn't make sense, because in the bronze level we initially had new contributors, but new contributors might not matter to some people's goals. So we played around with the idea of what would be a baseline anyway: what are the absolute first things that we want projects to measure? Hopefully I'm not going too fast; I'm trying not to use up too much time.
G
That's what that means. Okay, I have a better screen, an iteration of this mock-up, and I'll describe that. But more specifically on the levels: the categorization just didn't stick; it didn't quite work the way we wanted it to. We definitely don't want to penalize people or hold them back.
G
So then we got more into the idea of contextual categories, and we've even iterated on this since this deck was created. The idea is that there are critical metrics that we absolutely want people to pay attention to, and those are things like taking the code of conduct training: we want everyone in a leadership or maintainer role to understand what their obligations and responsibilities are and how they're supported, and we have training for that. So safety is a critical metric; security is a critical metric.
G
Then there's foundational; those might be things like discovery and use. And then growth mindset, things like new contributors and that kind of thing. I will say that this is our most up-to-date screenshot, but also that "critical," "foundational," and "growth" have now been changed themselves, because "critical" means a lot of things in software; someone's going to come in here and react to "critical." So we've actually made these more verbose, and I'll credit Justin on the call, who has given some of this feedback.
G
It's actually going to be a sentence, like "things that you should pay attention to once a month," something that really explains the metric at the top, but they'll still be categorized this way. Each has a set of metrics within the focus area, or sorry, the contextual area, as well as a Microsoft average, only for those metrics where we want everyone to pay attention to that number. These numbers are all made up, by the way, by our designers.
G
That's a good point, yeah, and the more we circulate this kind of thing, these are the types of feedback and thoughts that we get, because basically using one word to describe something is problematic, I think, in some of these cases. So yeah, the idea is that they'd have their score and an average for critical metrics, and a place to ask questions. We have working groups for security, for diversity and inclusion, for a number of things, so that they would immediately be able to go somewhere and ask about that question.
A
No, you're good. I was just curious about the use of an average instead of, like, a baseline, because it seems to me that half the folks would always be under the average, so someone's always going to be behind. Whereas if you had maybe a base, like "this is our lowest," or "our expectation is it should be at this number," then you're either below or above that expectation. I was just curious if you had thoughts on that.
G
I think that's a great idea, is my thought on that. Again, we're just playing with it, but you're right, because when you release your first repository your score might be lower, just because some of the things that you might do haven't happened yet. So I think that's really great feedback. Thanks, Elizabeth.
A
It reminds me of a GitHub feature that existed probably three or more years ago now, where it would tell you how many days it's been since you made a contribution. I know that they took that away, and I think the reason is that it was creating this sort of artificial psychological pressure to commit something. So, another thought to keep in mind.
G
Yeah, and that's a question also for other OSPOs. I feel like that pressure is okay in some areas, like security and safety, but as we get into things like new contributors: if your bus factor, to use one of Dawn's metrics, is showing high, then that's okay; there are some things it just doesn't make sense to push people on.
E
So I have a question: is it possible to customize these metrics for different audiences? Whereas for safety-critical work it's important to look at how much contribution there has been recently, rather than just over a long time; customizing some metrics for one audience and others for other audiences.
G
Yeah, so I think what you're saying is: if we had new contributors, which is one of ours, how would you customize that to the goals of the team?
G
That's what I would love to see come out of this group, selfishly. And I'll get to it; I just have a couple more slides where I actually try to describe what I think that might look like, with this being a safe place to say so. I'll just say that the other thing that we believe, or that we've designed into this, is this call to action.
G
"What should I do?", right? Here's a button that will take you to show you how; maybe there's a video, maybe there's a tutorial. In the case of the training, there's "take the training": it just opens up the learning platform and you can start the course. And so that's just more of that; you can see the growth mindset one. This is also, from a design perspective, how our designer thought it would be most appealing to people: sections just collapse and expand beneath each other.
G
So at this point we have a mock-up and some ideas about user interactions, because this is just a base set of metrics and we want it to be possible to add more. And this last checklist here, development and plug-in extension: something that seems really clear is that building this into our internal platform doesn't make sense. It doesn't make sense for our team to build something that is tied to one platform that might go away.
G
It doesn't make sense from the perspective of collaborating with the community if we're building something that might not be easy to pull out and contribute so other people can play with it. So right now, and I'm saying this knowing that, across the industry, engineering time and all that kind of stuff is a little bit hard to get...
G
...we're thinking about a development approach that is platform agnostic, where we can, for example, have a data source, a query, and a visualization that anyone can go and fork, remix, tinker with, and share back. That's the model that I think will be helpful.
G
I don't exactly know, from an implementation standpoint, how that might work, but those are some of the conversations we're starting to have now, because I think, ultimately, this is how we share back to the community as well. So, for example, I was looking into the Augur query for new contributors, and, oh...
G
There's a lot there; there are things like burstiness that I don't understand. So I basically reconstructed a query that looks at PRs, that looks at issues; some really simple things.
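A minimal stand-in for the kind of simplified new-contributors query described here: given (author, month) pairs drawn from merged PRs and opened issues, count how many authors appear for the first time in each month. The data layout is an assumption for the sketch, not Augur's actual schema or query.

```python
from collections import defaultdict

# Invented activity log: (author, "YYYY-MM") for each PR or issue event.
EVENTS = [
    ("alice", "2023-01"), ("bob", "2023-01"),
    ("alice", "2023-02"), ("carol", "2023-02"),
    ("dave", "2023-03"), ("bob", "2023-03"),
]

def new_contributors_per_month(events):
    # Walk events in chronological order; an author counts as a "new
    # contributor" only in the month of their first appearance.
    seen = set()
    counts = defaultdict(int)
    for author, month in sorted(events, key=lambda e: e[1]):
        if author not in seen:
            seen.add(author)
            counts[month] += 1
    return dict(counts)
```

A monthly series like this is what could then be plotted against paid-employee time, the contrast Emma describes next.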
G
Yeah, so the output of the Augur query is what I was after and am still trying to copy, where it shows new contributors, and I wanted to contrast it with the time that paid employees were spending. You can sort of see in the graph the time employees were spending go down when there were more new contributors. That type of information would get people excited. But to go back to the implementation...
G
I think that's where I'd like to get to: that I can take this query and the visualization and put it out there for anyone to go and play with themselves, and then we share things, just to make that part easy. And it's tied to a CHAOSS question, right? It all comes together as one thing, but we might, individually in OSPOs, plug in different ones. You know, I still can't land on one model, even for our own OSPO.
A
Yeah, sure. So, do people have questions or comments for Emma? And Grace, I saw you had put a comment in there as well; I don't know.
H
Yeah, I'm not a developer, so I don't know how exactly this might apply, but GitHub Next shared their new Blocks feature at Universe, a couple of months ago at this point, and I think it could have a lot of different applications. One of the things I was talking about internally is: is there a way to customize the metrics that folks show for things like health? Because a common theme that I hear is that it kind of depends on your project.
H
Your goals, what you're looking to do. And for context, I'm working on the Sponsors team at GitHub. We're working with a lot of organizations and enterprise customers who are sponsoring folks, and we want to show them the ROI on their investment, and the health of a project and how it's at least remaining stable over time.
H
But there's not one metric for everyone, obviously. So one of the things I'm thinking through is productizing showing health metrics in some way, but more: how do you give someone the tools to display it, rather than figuring out the metric for everyone? But anyway, Emma, I'll ping you, because I want to get your deck and talk about this more.
G
Sorry, I was going to say: the access to data. I know that Augur pulls in its own GitHub data, and I know that we do the same internally, but GitHub has the data. It would be amazing to somehow just have access to that for each repo. But we can talk and get back to the group, yeah.
G
I think so, but I also...
H
I think also, just on my side, Emma's already talking to Ashley, who's the head of our OSPO, and then there are just more people to connect with within GitHub who are thinking about the same things. That's all.
A
A question for you: where do you need help? Where would you like, you know, thoughts and insight?
G
I would really love to be part of a discussion on building that; I mean the discussion Grace and I already talked about having. Maybe we can break off into that, because for me, that's the most significant challenge right now. We don't want to just build something internally to fit this one platform; we want to make it reusable.
F
One thing is that we have ways that we classify, in our own heads, the main project and then the smaller projects under the same organization, and in general I think people analyze the high-velocity projects that are the core in a different way than the lower-velocity projects that are supportive of that core. I think finding ways of classifying projects that have face validity to the people consuming the metrics is where the user-interaction challenge lies, at a high level.
A
And just before we go, one last thing... oh, Sophia, okay, comments. I just want to remind you that we do have a CHAOSScon coming up February 3rd in Brussels, just prior to FOSDEM. The schedule is in there; it's ten dollars to register, and you can follow the link in the minutes.
A
If you'd like to register: the morning is a panel, some lightning talks, and some working sessions, one of which is around OSPOs, probably very similar to the work that Luis had done. We also have an OSPO++ event that's going to be co-located at lunchtime; that's around open source program offices associated with universities. And then in the afternoon there are hands-on sessions, one track with Augur and another track with GrimoireLab.