From YouTube: OpenSSF Identifying Security Threats WG (May 10, 2021)
A: Thanks for making it; we had a big week last week. Let me share my screen.
A: Yeah, we had the OpenSSF town hall on Monday. I hope you were all able to either see it live or view the recording. The blog article on the metrics dashboard went out (there's a link to it), and then, I think on Thursday, the blog article on the Security Reviews project went out.
A: The good part is that we are caught up; the bad part is that now we need to figure out what we want to do next. So let's start with that. For the metrics dashboard, there were a couple of comments in the Slack channel, but we've also talked before about CHAOSS and where CHAOSS fits into this. It looks pretty silly if the Linux Foundation is funding two completely separate projects that have substantial overlap. I'm not sure that they do, but I would like to understand whether they have substantial overlap and then do something about it. So I reached out to the CHAOSS folks and joined.
A: They have a Slack channel, and they have a Risk working group which, I believe, has started to ask some of the questions we have been looking at. Technically it may overlap more with the Scorecard project, but regardless of where it actually fits, we should be talking to them. Their next working group meeting is on Thursday. David, I understand that you attend those regularly, or at least have.
B: I attend the CHAOSS working group meetings regularly; not CHAOSS as a whole, but the CHAOSS Risk working group is probably the closest. I can give some more insight.
B: CHAOSS in general works a lot more slowly, because they're trying to produce carefully discussed, rigorously defined metrics, and if that takes six months or so, that's okay. So they have a very different goal, and unfortunately, for the security risks, I think they're learning as they go. So their timelines and interests may not be identical, but I think what you're doing is exactly right.
B: I've been trying to encourage discussion in both directions for some time. And just FYI, it may seem weird, but the Linux Foundation actually doesn't have any official rule that projects can't overlap. If you've ever worked with the CNCF, it was a "should not", not a "must". It's not even necessarily a "should", because sometimes what seems to overlap...
B: First of all, I think in many cases what seems to overlap usually doesn't, for one reason or another. But I think the real key is to talk, and if you do overlap and everybody has decided you're going to overlap, at least try to figure out what's distinctive between you. There's no rule against it, because in at least some cases it has actually sped development: "Oh wait a minute, we've got to get moving or they'll do X."
B: So I do think the groups should talk. Basically, they've got some ideas; whether or not they're good ideas is up to this group to decide, but I think it's good to at least take a look at what they've done and see if there's anything useful. If not, that's fine; if there is, great.
A: Great, absolutely. So you should have a little more information in two weeks about what comes out of that. Let's just take it as it goes; I don't want to go in with any assumptions about what the future should look like. Let's just see what happens. On our side, though, with the next set of metrics and the iteration on the dashboard, I want to make sure we don't completely lose momentum. So right now we have the current set out, which...
B: What I would propose is spending at least some of today taking a more critical eye. We got something running, hooray; now let's look through and talk through both what's there that maybe shouldn't be, and what's not there that should be. Maybe tweak some of these things. We always said we were going to refine once we had something. Well, we've got something, so I propose that now is a good time to refine.
B: With the caveat that any negative comments are not a negative comment on you, Michael.

A: Yeah, totally fine, so let's do it.
A: Since that's likely to take up the rest of the conversation, let me cover a few things first. As far as budgets go, we did submit the budget that we talked about last time to the governing board. I feel like it should be fine; we'll see what comes out of it, but we'll know more at some point. And the Security Reviews article was published. Sorry, I'm going completely out of order here. I'm super sorry. Paris, welcome.
C: Yeah, hi, this is Paris Lucas. I've probably only been to two meetings for OpenSSF. I'm highly interested, so right now I'm just trying to be a fly on the wall while I figure out how I can better help the team.

A: Awesome, love it.
A: There you go. Okay, so here's the dashboard. We'll just start out wherever I was, right at criticality. David, if you had thoughts on how to do this most efficiently, feel free to drive.
B: Frankly, I don't, so I'd suggest starting at the top and just walking down, just as you've suggested, unless somebody else has a better idea. At least it's ordered, and we'll know when we're done.
A: Perfect, okay, awesome. So: project criticality. All this data comes from the OpenSSF criticality score project. In fact, what I should do is pull up the criticality score itself.
A: So the data that is collected is here; then it's aggregated up to become a final score, and then we take a subset of that data and use it here. The number of stars, I mean, this should be the number of stars. It's a field called "stars" which, if you go back here, there should be something called stars somewhere.
B: Okay, I have to admit I'm very skeptical of stars. I realize some people like them because they're a number.
D: Yes, a quick thing I was thinking: as of right now, this feels like our best metric. It's the only thing that kind of tells us how big or popular a project is, even though it doesn't really measure how popular it is.
D: So if we do get something better, like how many watchers there are, that would be phenomenal. But, for example, stars were the only way I could do some kind of popularity sort based on which packages you're looking at. I totally agree it's not any kind of end-all be-all of how popular or important a project is, but looking at everything else, I don't know what else to use.
A: So if all of the other metrics were identical, and you had one project with five stars and the other with fifty thousand, would you assume that the one with 50,000 was more important? Because, remember, the whole purpose of this entire row is: how influential or important is the project to the overall ecosystem? Meaning, Node or Kubernetes or these other projects are really important because they run entire businesses, lots and lots of them, while the sorting library that I wrote in Lua, used by three people, is just not that important. The question is just: do stars... well, stars mean nothing for security. Do they mean something for importance? Is popularity importance?
D: Okay, I was just going to say: if there's anything else that trumps stars in terms of how popular a project is, like how many watchers or downloads there are... I don't know. But personally, when I go to GitHub it's the same thing. If I were to look at the rest of this criticality panel, I could see a similar project that maybe a ton of people worked on, but it's just some patched-together thing that made a bunch of releases and has been around for a while; I still wouldn't know how big or important it is. Stars are the only thing I can lean on right now, but at the same time they're not a great metric. So I don't want to keep repeating the same point, but that's how I was thinking about it.
A: So we could also de-emphasize stars without removing them. I don't know how that would work from a UI perspective, though.
B: Yeah, although I think that actually has a similar problem to the stars. A download count is specific, but all it takes is a couple of CI pipelines downloading each time and your numbers go up to the stratosphere, whether or not it makes any sense.
B: How about is-odd? I don't know, it may or may not be similar. Let's see here: is-odd has 158 stars.
A: Well, we might not have to. That's actually a good point: the criticality score already has all the math to generate that final 98.6 percent from the data. I'm just not sure what... so: created since, updated since, contributor count.
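For context on "all the math": the criticality score project aggregates each raw signal on a log scale with a weight and a saturation threshold, then normalizes by the total weight. A minimal sketch of that aggregation; the example signal values, weights, and thresholds below are illustrative, not the project's actual configuration:

```python
import math

def criticality(signals):
    """Aggregate (value, weight, threshold) signals into a 0..1 score.

    Each signal contributes weight * log(1 + value) / log(1 + max(value, threshold)),
    so a signal saturates (contributes its full weight) once value >= threshold.
    """
    total_weight = sum(weight for _, weight, _ in signals)
    total = sum(
        weight * math.log(1 + value) / math.log(1 + max(value, threshold))
        for value, weight, threshold in signals
    )
    return round(total / total_weight, 4)

# Illustrative signals: (value, weight, threshold)
example = [
    (4000, 1, 5000),  # e.g. contributor count, saturating at 5000
    (300, 2, 26),     # e.g. recent release count, already saturated
]
```

A signal at zero contributes nothing; one at or above its threshold contributes its full weight, which keeps any single runaway number (like stars) from dominating the final score.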
B: Go up at the top... I thought it was there, but maybe not. Okay.
A: So if that's true, then maybe just listing the things that are in the algorithm here would at least more faithfully reflect the criticality project's view on things, as opposed to saying "here's a criticality score, and here's a bunch of other data that we like." Maybe that belongs in a separate section, which is, like, our own...
D: ...view on things. There's the number of dependents, the number of other projects that depend on this, because that's kind of interesting: if there are a lot of other projects out there that utilize this, that can speak to its importance, or to how the repository is used, I guess. I don't know. And I guess if stars aren't part of the calculation, or however they deem importance, then I could be totally wrong in my interpretation of it. So it might require more research, I guess.
A: So how about this: in the interest of the purpose of the working group, would someone like to take point on rationalizing what we're doing for the criticality score specifically?
D: I can certainly look into that. I don't know if anyone else wants to give input as well, but I'm very willing to help out. I don't know if David or anyone else...
B: That sounds like... I love it. Just tag David Wheeler, and I will immediately hear about it and make my own commits. That sounds like a great plan. Awesome. And again, Michael, we are so grateful for all that you've done. I think at this point it's: what can we do to refine this?
B: All right, this is something I know a little bit about. There are so many data values that you could choose to display; I never know which. I can explain to you, by the way, why "dynamic analysis used" is red but "dynamic analysis fixed" is green; that has to do with the subtlety of how the questions are worded.
B: The "dynamic analysis fixed" criterion asks: for all the problems you know about that were found with dynamic analysis, did you fix them all? If you don't use dynamic analysis, the odds are good that nobody has told you about something found that way to fix because it's a vulnerability.
A: I guess that goes into the question of... right now, for all of these, we're basically not doing any calculations on top. But should there just be a single "dynamic analysis" entry?
B: You could. I mean, if you wanted to, you could combine "static analysis used" and "fixed" and just take the minimum: did you use it, and if you used it, did you fix it? If you wanted to combine those two into a single value, that would be perfectly reasonable.
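The "take the minimum" idea can be sketched directly: order the criterion statuses from worst to best, and the combined value is the worse of the two answers. The status strings here follow the badge program's usual Met/Unmet wording, but treat the exact values as an assumption:

```python
# Statuses ordered from worst to best; min() then picks the worse answer.
_ORDER = {"Unmet": 0, "?": 1, "Met": 2}

def combine(used_status, fixed_status):
    """Collapse 'used' and 'fixed' criterion answers into one status."""
    return min(used_status, fixed_status, key=lambda status: _ORDER[status])
```

So a project that uses static analysis but hasn't fixed what it found would show "Unmet" for the combined cell, while one that answered "Met" to both shows "Met".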
B: Yeah, and of course there's a whole bunch of criteria there. The good news is that the overall badge level will tell you information even if you don't pull out all the details, and you have a link to the actual badge entry, right? So if someone wants to see all the details, they can just click on it.
A: This one here is the... that should be... no, these are bugs. This can be a link.
B: To the specific things... well, I guess if you combine "used" and "fixed" you won't be able to do it so easily, but you can click through to specific criteria answers. So if you use "used" and "fixed"... maybe, if you do the "used" one.
B: Right. If you put a "#" after that URL and then the name of the criterion... it would have to be, yeah, "static_analysis_used". Actually, I don't think that's right; I think it's just "static_analysis" for that one. Okay.
B
Well,
no,
no!
No,
there
yeah
go
go
up
higher
the
id
right
above
there.
This
is
stat.
You
know
that
one's
that
we
know.
That's
the
the
div
three
like
four
lines:
five.
B
B
B: So if you did that for all of those, and for "passing" itself you'd just link the badge entry itself; then, if they have questions, they can go straight to it. Now, I don't know if there's anything else; it very much depends on what you think is important.
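The linking scheme being described can be sketched like this. The base URL is the badge program's public site; the per-criterion fragment ids are an assumption to check against the page's actual HTML anchors:

```python
BADGE_BASE = "https://bestpractices.coreinfrastructure.org/projects"

def badge_entry_url(project_id):
    # Link for the overall "passing" cell: the whole badge entry.
    return f"{BADGE_BASE}/{project_id}"

def criterion_url(project_id, criterion):
    # Deep link for one criterion cell, e.g. "...#static_analysis"
    # (anchor name assumed to match the criterion's short name).
    return f"{badge_entry_url(project_id)}#{criterion}"
```

Each dashboard cell could then link either to the whole entry or straight to the criterion it displays.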
B: I can't simultaneously and easily see both what you're displaying and the possible criteria.
B: If I look at the criteria for the best practices badge, there's always the "know common errors" one: do they know how to develop secure software? And there's also "no unpatched vulnerabilities"; vulnerabilities fixed within 60 days is plausible too. But I can make arguments either way, to be honest.
B: Yeah, and I don't know that either, actually. If there's one that you didn't have on the badge level: there's a short list of criteria that people often fail. One is failing to tell everybody how to report vulnerabilities, and the other is not having any tests. So actually, maybe the right way to pull these up is the test one and the reporting one. I'll find the name of that.
A: Yeah, that one is the vulnerability reporting, this guy here. Oh.
B: Okay, so it's already there. All right. So the other one that people often miss: that one's the "hey, do you tell everybody how to report vulnerabilities?" criterion. The other one, and I know some of you will be horrified, is automated testing. I'm sure everyone agrees that everybody should have automated tests, but it's disturbing how many projects don't have any automated tests.
B: I see agreement; yes, it's worth it. And you might want to call it "automated test". The short name of the criterion is "test", but for display purposes "automated test" would probably be the clearer term.
B: Yeah, and at least then you would highlight the ones that, when people miss them, are among the most commonly missed criteria. So you'll see right away the distinction between the folks who are doing well and the folks who are not doing so well.
D: Do you think we should limit the realm of metrics that we're ever going to put in here to what already exists in these three repos? Because, then, do you think it might be misleading that the big score on the left doesn't represent some of the metrics that we just toss in on the right? Or should we add those in some other kind of miscellaneous section or something like that? Does that make sense?
A: ...calculated. And I think this one is just the fraction of those that are passed versus the total, I think.
D: I think it's in case we find other useful metrics; we don't want to limit ourselves necessarily. But I can see that perspective for sure as well.
A: Passing is not a calculated score. Passing is exactly what the badge program provides, and criticality is exactly what the criticality project provides. So the only way for us to accurately represent this is to include everything, and only those things, that the criticality score uses in its calculation. But I think...
A: Yeah, and actually I do want to do things like that, because we do have the spreadsheets and all the stuff that we've written over the past year on all the different types of metrics that we think are interesting. If it's something that we should calculate ourselves, that's great, but I would say our preferred position should probably be that, if it's the kind of thing the Scorecard should add, the Scorecard should add it.
B: Cool, yeah. And this is kind of a non-sequitur, but we probably ought to have tooltips, with little indicators, somewhere on these major headings, or maybe even all of these headings, so that if somebody doesn't know what some of these are, they can just hover over it and learn more. I see you have a little... yeah.
B: Yeah, the fact that you've got some tooltips is great; we just need to embellish them. I would like it so that, if somebody had no idea, they could hover over it and get at least some reasonable idea of what it was all about.
B: How would you... let's see here, how does that work? It looks like you can actually just edit, right here in Grafana, the text that's shown as a tooltip. Am I misunderstanding what you're showing me right now?
B: Oh, this is the description. It's the description field. Okay, hey, that almost makes sense. Let's see here: if you hover over stars, does that have anything? No? Okay. So maybe some people could try to write up some proposed tooltip text and then you can enter that in. I presume you're the one who has the authorization to go edit these fields.
D
No
worries,
okay.
I
can
make
I'm
making
plenty
of
plenty
of
parallel
progress,
so
I'll
yeah,
there's
no
overall
training.
B
Okay,
if
you
like,
I
could
I
could
try
to
write
up
some
some
text
for
that
yeah.
A
Yeah,
let's
do
it,
I
mean
it
if
it
is,
if
there's
a
short
text
in
the
badge
program,
api.
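A sketch of what pulling that text programmatically might look like. The badge program publishes a per-project JSON export at a URL of this shape; the per-criterion field naming ("<criterion>_status") is an assumption to verify against the real API:

```python
import json
import urllib.request

BADGE_JSON = "https://bestpractices.coreinfrastructure.org/projects/{id}.json"

def badge_json_url(project_id):
    # Per-project JSON export URL (assumed API shape).
    return BADGE_JSON.format(id=project_id)

def criterion_status(project, criterion):
    # Assumed field naming: one "<criterion>_status" key per criterion
    # in the exported JSON.
    return project.get(f"{criterion}_status")

def fetch_project(project_id):
    # Network call; returns the parsed JSON export for one project.
    with urllib.request.urlopen(badge_json_url(project_id)) as response:
        return json.load(response)
```

With something like this, the dashboard's tooltip and status text could be refreshed from the badge program rather than maintained by hand.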
B: Okay, I see we're running short on time. Can we spend a couple of minutes on that last row?

A: Absolutely.
A: So, one thing I didn't like about this was: "static analysis used" and "fixed". Static analysis: no, no. No good here.
B: What I'm guessing that just means is that whatever static analysis system they use isn't being discovered by the Scorecard code, which, to be fair, is pretty limited. So you're showing the data as it is. You may not like the inconsistency, but it is correctly reporting the inconsistency, correct?
A: Correct. So maybe in the text for these we should be clear that the Scorecard results are completely automated detection, whereas the badge program content is partially automated and mostly attested.
B: The SAST and the CI tests, yeah, that distinction is probably important.
B: I think we just found a process for all the Scorecard tooltips there. Yeah, absolutely. And, oh, by the way, a lot of the CII best practices badge criteria...
B: If you don't do the details, just the criteria themselves: either the full text of the criterion or its first sentence will do for that, come to think of it, because they tend to be one sentence anyway. Yeah, look at that, see.
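The "first sentence will do" rule is simple to automate when generating tooltip text from criteria descriptions. A sketch; the example description below is illustrative, and real text would come from the badge program's published criteria:

```python
import re

def tooltip_text(criterion_description):
    """Use the first sentence of a criterion's description as its tooltip."""
    # Split on sentence-ending punctuation followed by whitespace,
    # then keep only the first piece.
    return re.split(r"(?<=[.!?])\s+", criterion_description.strip())[0]

# Illustrative criterion description (not the badge program's exact wording).
example = ("The project MUST use at least one automated test suite. "
           "The test suite MAY use any framework.")
```

Since most criteria are one sentence long, this usually returns the whole description unchanged.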
B: I think we just eliminated my job; thank you for limiting my job. But you know what, it's as authoritative as you can get, and we can always extend it. Yeah, look at that. I like that, cool. I don't think anybody will have a real problem understanding that.
B: I don't think you put in branch protection, but I think there's some debate about that. So, let's see here: we've got one, two, three, four, five, six... so you selected 12 out of however many there are.
B: Yeah. And do you want to... I mean, this may seem strange, and obviously I like the best practices badge, but we already have that whole row included earlier. You may not need it also in the Scorecard row. It's fine that it's part of the calculations; we just don't need to show it separately as its own entry, because in fact we already show it.
B: Okay, yeah. If it's from... I mean, people typically enter that as...
B: Yep, we actually fill that in from GitHub by default.
A: There we go. So yeah, this is unhelpful, so maybe we'll grab the description from GitHub.
A: Yep. So, you know, everything here is keyed off GitHub repos anyway.
B: Yeah, and I realize our time is almost ending, but the other group that, Michael, I'm going to have to find and get you connected with is LFX Insights.
B: The Linux Foundation has a set of tools that support open source projects, called LFX, and I know they're adding more information for metrics. They're really more focused on people who are managing a project, as opposed to "should I use it?": how is contribution going, is there a bottleneck?
B: I've had some other issues going on, but I'll see if I can connect you and some folks in the LF together. Nobody has to do anything different; I'm just trying to get people connected and at least aware of each other.

A: Absolutely.
A: Awesome. In the last minute, do we have anything else that anybody would like to bring up?

G: Yes, I would like to contribute, but I have nothing assigned yet, so if anybody needs help, please connect.
A
So
so
so
with
any
of
these,
I
mean
certainly
recommendations
for
kind
of
content
update.
I
mean
really
any
thoughts
that
you
have
on.
You
know
like
the
these
are
the
of
everything
that
the
badge
program
collects.
These
are
the
ones
that
we're
showing.
Is
this
the
right
subset
scorecard
text?
Here
I
mean
I
think
a
lot
of
is
is
just
polishing
on
the
dashboard
itself.
G: One more thing: I don't know about others, but it seems to me you guys have gone a little further into the problem, and I'm having a hard time following along, even in the meeting. So if somebody could guide us, and anyone else who wants to join, that would be nice as well.
D: Sure. If you want to ping me on the... what is it... we have a Slack channel, yeah. I'll be editing stuff throughout, so if you need help, let me know.
A
Awesome,
thank
you
all
very
much.
I
appreciate
your
time
we'll
meet
again
in
about
two
weeks
and
I
will
send
a
google
now
to
google
a
doodle
for
potential
new
meeting
times
so
look
out
for
that
I'll.
Send
that
to
everybody
in
post
hunch
on
slack.