From YouTube: Lightning Talks | CHAOSScon NA 2021
Speaker B:

So simplicity was key. Now, community managers and metrics aficionados, like those of us here in this room and at CHAOSScon, would love to dig into these detailed dashboards to understand every nuance.
On each of the four metrics, at the bottom of the chart, I added more description to give people a little more help in interpreting it. So, starting with contributor risk: I try to make sure that one or two people aren't making all of the contributions, so that if a single person left the project, it could continue with minimal disruption. This one looks really good.
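As an aside, here is a minimal sketch of how a contributor-risk check like this might be computed from a repository's history, assuming commit counts stand in for "contributions"; the function name and the 0.8 threshold are illustrative, not from the talk:

```python
import subprocess
from collections import Counter

def contributor_concentration(repo_path, top_n=2):
    """Share of all commits made by the top_n committers; a rough,
    commit-count proxy for contributor risk."""
    emails = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%ae"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    counts = Counter(emails)
    total = sum(counts.values())
    top = sum(n for _, n in counts.most_common(top_n))
    return top / total if total else 0.0

# Flag the project if two people account for most of the work.
# The 0.8 cutoff is an assumption for illustration only.
if contributor_concentration(".") > 0.8:
    print("Contributor risk: one or two people make most contributions.")
```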
It's also important that we reply to pull requests in a timely manner. Our internal guideline for projects is that every PR should receive a response from an actual human being within two business days. The top line is the total number of PRs, that's the black line, and the green line below it shows the number of PRs that were responded to within two business days. What I look for here is that the lines should be as close together as possible, without any huge gaps.
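A sketch of how those two lines might be derived, assuming PR records carry a creation time and the time of the first human response; the field names are illustrative, and the two-business-day rule follows the guideline above:

```python
from datetime import datetime
import numpy as np

def monthly_response_stats(prs):
    """prs: iterable of dicts with 'created' (datetime) and
    'first_human_response' (datetime or None). Returns {month: (total,
    responded_within_2_business_days)}: the black and green lines."""
    stats = {}
    for pr in prs:
        month = pr["created"].strftime("%Y-%m")
        total, ok = stats.get(month, (0, 0))
        r = pr["first_human_response"]
        on_time = r is not None and np.busday_count(
            pr["created"].date(), r.date()) <= 2
        stats[month] = (total + 1, ok + int(on_time))
    return dict(sorted(stats.items()))

prs = [{"created": datetime(2021, 3, 1, 9),
        "first_human_response": datetime(2021, 3, 2, 17)}]
print(monthly_response_stats(prs))  # {'2021-03': (1, 1)}
```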
So you can see this one in particular looks really good. Now, while quick responses are important, it's also important that we keep up with PRs and resolve them in a timely manner. This slide shows the same black line for total pull requests, and the green line shows closed PRs, meaning either merged or closed without merging. You can see in this case that there is a pretty big gap there for several months.
B
Maybe
there
was
a
good
reason
for
this,
but
here
you
can
see
why
I
added
a
line
about
the
trend
to
to
both
of
these
graphs,
because
in
this
case
they
were
behind
for
a
while,
but
they've
been
doing
much
better,
and
I
don't
want
teams
stressing
out
about
the
fact
that
you
know
they're
they're,
showing
up
as
red
in
the
title
when
they're
already
working
to
improve
it
and
we're
already
showing
improvement.
So
I
think,
looking
at
the
trend
is
also
important.
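A sketch of the closed-PR line and a crude trend check, reusing the illustrative PR records from the previous sketch; the talk only says it looks at the trend, so the specific window comparison here is an assumption:

```python
def monthly_close_stats(prs):
    """Per-month (opened, closed) counts, bucketed by creation month.
    'closed' covers merged and closed-without-merge alike."""
    stats = {}
    for pr in prs:
        month = pr["created"].strftime("%Y-%m")
        opened, closed = stats.get(month, (0, 0))
        stats[month] = (opened + 1,
                        closed + int(pr.get("closed") is not None))
    return dict(sorted(stats.items()))

def improving(stats, window=3):
    """Is the close ratio over the last `window` months higher than
    over the window before it? This particular comparison is an
    assumption; the talk describes the idea, not a formula."""
    ratios = [c / o for o, c in stats.values() if o]
    if len(ratios) < 2 * window:
        return None  # not enough history to call a trend
    return sum(ratios[-window:]) > sum(ratios[-2 * window:-window])
```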
And finally, I look at the number of recent releases, and these include all releases, not just the big ones but even the tiny point releases. It's critical that security updates and bug fixes land in a release in some sort of timely fashion, and it's important to get those new features out too. So I look for projects to release something about every month-ish.
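A sketch of a release-cadence check, assuming a list of release dates is available; the "about every month-ish" target is the talk's, while the 45-day grace gap is an illustrative assumption:

```python
from datetime import date, timedelta

def release_cadence_ok(release_dates, lookback_days=365, max_gap_days=45):
    """True if releases over the last year were roughly monthly.
    max_gap_days=45 is a loose reading of 'about every month-ish'."""
    cutoff = date.today() - timedelta(days=lookback_days)
    recent = sorted(d for d in release_dates if d >= cutoff)
    if not recent:
        return False
    gaps = [(b - a).days for a, b in zip(recent, recent[1:])]
    gaps.append((date.today() - recent[-1]).days)  # gap since last release
    return max(gaps) <= max_gap_days
```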
So I'll leave you with just one final thought: while looking at project health, it's important to remember that every project is a little different.
Speaker A:

I'm going to talk about privacy. The dilemma we have is that community health metrics necessarily interact with individual developer metrics, and individual developer metrics have privacy implications. Going to your issue, SJ: when it comes to academic performance, this is right in the heart of those privacy issues.
Let me call out the contribution attribution item; it goes right there. What are the risks? First of all, our work as open source developers can become a force for surveillance, and I need to call out, and call us out, that in our influence on the world over the last 30 years of open source, in some ways we have contributed to the loss of privacy.
There are product issues. Users are often happy to trade privacy for convenience, as we all know, and this creates difficulty for us when it comes to passive surveillance, or the ways in which we contribute to surveillance passively by allowing products that are more compelling than the ones we're shipping to become more popular. We need to strive to create compelling products that enhance privacy rather than enhance surveillance.
Also, we can create metrics that would invade privacy if they were available to the public, but that are only available to the individual whom they concern. I call these mirror metrics. A strategy for making mirror metrics available is to allow people to share their own personal metrics. For example, you can share your Fitbit data, but by default it is not shared.
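A minimal sketch of the default-private, opt-in idea behind mirror metrics; the data model is invented for illustration, since the talk describes only the principle:

```python
from dataclasses import dataclass

@dataclass
class MirrorMetric:
    """A per-developer metric that is private by default: only the
    developer it concerns can see it unless they opt in to sharing."""
    owner: str
    name: str
    value: float
    shared: bool = False  # opt-in, never shared by default

    def visible_to(self, viewer: str) -> bool:
        return viewer == self.owner or self.shared

m = MirrorMetric(owner="alice", name="median_review_latency_days", value=1.5)
assert m.visible_to("alice") and not m.visible_to("bob")
m.shared = True  # alice explicitly opts in
assert m.visible_to("bob")
```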
That is one option you have. However, we've seen with EULAs and terms of service that opting in is often a technique for invasive stuff, and we shouldn't hide when that is the ultimate impact of our work. And we cannot rise above the fray; we can get into the mess and deal with realities. Developers may want to shed privacy, and we shouldn't try to force purism on them.
A
First
of
all,
keep
away
from
behavioral
metrics,
there's
a
lot
of
information
about
out
there,
and
you
will
see
in
commonly
used
commercial
products
metrics
like
the
standard
time
of
day,
typical
time
of
day
for
commits
by
a
developer.
So
a
manager
may
be
getting
reports
that
say
such
and
such
a
developer
makes
a
lot
of
commits
at
one
in
the
morning.
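To make the risk concrete, here is how trivially such a behavioral metric falls out of a git log; this is a sketch of what to avoid, not a recommendation:

```python
import subprocess
from collections import Counter

def commit_hours(repo_path, author_email):
    """Histogram of commit hour-of-day for one developer: exactly the
    kind of behavioral metric the talk warns against publishing."""
    hours = subprocess.run(
        ["git", "-C", repo_path, "log",
         f"--author={author_email}", "--format=%ad", "--date=format:%H"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    return Counter(hours)  # e.g. {'01': 37, ...}: lots of 1 a.m. commits
```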
That's for us to keep away from. What's good? Kudos. Kudos fly in the face of GDPR, and yet I don't think there is a community consensus against kudos; I think almost all developers are fine with them. So I recommend, to the extent that you are going to go beyond the best practice of aggregate metrics, that you adopt kudos, and last of all mirror metrics or opt-in metrics.
A
By
the
way,
I'm
at
5
minutes
and
10
seconds
now,
I
recommend
that
we
add
a
checklist
item
to
the
metric
quality
checklist
to
do
a
privacy
review.
I
think
that
it
probably
would
have
an
impact
on
some
of
our
thoughts
on
the
in
the
metric
review
period
at
this
moment,
and
that
is
it.
Thank
you
for
your
time.
A
Speaker C:

Hello, I'm Kaylea Champion, and I'm presenting some work that was published with the IEEE in a software engineering journal (I'll just leave it on the previous slide, actually, there we go). But I want to pitch it to the OSPO folks in the room as well as to the metrics folks in the room.
Some of you have seen me present this work before, and thank you for your kind attention once again. I want to suggest a way of thinking about metrics with respect to digital infrastructure. We all kind of came together, I think, in response to Heartbleed, but since then we've also been thinking about supply chains.
We've been thinking about SolarWinds, and I want to think about how our digital infrastructure can sometimes have flaws in it that we find difficult to see. This kind of exemplifies that for us, and it suggests a model, or an expectation, that we might have of infrastructure; we're going to go from model to metric to insight in this talk.
So we might expect that the roads we drive on most often, and the most important bridges over the most traveled areas, will also be the strongest, the best maintained, the most inspected, and so on. But unfortunately, what we find out is that sometimes that doesn't happen. In fact, sometimes a bridge can be better than it needs to be; that might be some wasted effort, or it might be beautiful, and it's not a problem.
So I used this model to analyze a body of digital infrastructure. The body I chose was the Debian project, about 22,000 packages, and I looked at the history of resolution of about 500,000 bugs. This is the method that I applied: take a body of digital infrastructure; take a measure of quality, for me that was bug resolution time; identify a measure of importance, for me that was usage; and specify a relationship between quality and importance.
You saw that in the model with the dotted lines: high quality goes with high importance, moderate with moderate, low with low. Then test for deviations to find relative underproduction. Look for your trouble spots, look for your areas of risk, by looking for underproduction.
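A much-simplified sketch of that deviation test, assuming each package is reduced to a quality score and an importance score and ranked; the published method is Bayesian and works with distributions, so this rank-difference score is only an illustration:

```python
import pandas as pd

def underproduction_scores(df):
    """df has one row per package with columns 'quality' (higher is
    better, e.g. inverse bug resolution time) and 'importance' (e.g.
    usage). Returns packages sorted by an illustrative underproduction
    score: importance percentile minus quality percentile."""
    out = df.copy()
    out["q_pct"] = out["quality"].rank(pct=True)
    out["i_pct"] = out["importance"].rank(pct=True)
    out["underproduction"] = out["i_pct"] - out["q_pct"]
    return out.sort_values("underproduction", ascending=False)

pkgs = pd.DataFrame({
    "package": ["a", "b", "c"],
    "quality": [0.9, 0.2, 0.5],      # high = bugs resolved quickly
    "importance": [0.1, 0.95, 0.5],  # high = widely installed
}).set_index("package")
print(underproduction_scores(pkgs))  # 'b': high usage, low quality
```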
All right, so what I found: overall, quality measurement, especially using this bug data, was difficult but doable. There's not a lot of consensus in software engineering about measures of quality, unfortunately, which is a whole separate talk. But I also found that underproduction is incredibly widespread in Debian, which is very unfortunate given that it serves as one of the backbones of the web and the cloud. So this is a heat map of the per-package analysis.
The count is the number of packages, and you can see there's an intense area of dark color here; that's where a large number of packages cluster together that are high usage and low quality. That's underproduction; that's infrastructure risk. This is another version of that same data.
It reflects the approach that I use, which is a Bayesian technique, meaning that everything is a distribution rather than point data. But again, the high-level result: if you look at this zone of, excuse me, underproduced packages, it is very intense at that tail, and that's really a problem. All right, I have a list of the packages that we found to be most underproduced overall, and when I presented these results to the Debian community, they were super interested and super engaged.
The community found this kind of approach to be very insightful and helpful. So taking this back, from the metrics into the insight and back to the community, was, I think, a really great win for this project. All right, so what we're doing next with this technique, which I've given you an extremely fast overview of, is applying it to a much wider range of projects.
Different languages, I guess; there are a number of directions one could go. We're continuing to validate this measure by comparing it to those outcome results, and we want to understand what the processes are in those communities that tend to drive this neglect, how we can identify those factors, and then help those communities to intervene before things go wrong, before we get another Heartbleed. So this is my invitation to the metrics community: let's collaborate, let's work together. You can see that this is a little bit of a... I'm a computational social scientist.
So I'm coming at this from kind of an academic perspective, but I'm very open to collaboration, and here are some of my contact details. Like several of you, I was very generously supported by the Sloan Foundation in this project, but I also want to shout out the community whose efforts to make this data available made this work possible in the first place. Thank you very much.