From YouTube: CHAOSS OSPO / TODO Working Group Meeting June 29, 2023
Description
Meeting Summary is here: https://chaoss.discourse.group/t/chaoss-ospo-todo-working-group-meeting-summary-june-29-2023/201
Meeting minutes are here: https://docs.google.com/document/d/1Bf6a1Ywi4m0Ywo4vuBBp3Q9_AA_QKbWf99WxAqRbpMw/edit#heading=h.cmdj676cd3yd
A
Okay, so welcome to the CHAOSS OSPO working group meeting; sorry, I'm getting error messages from Zoom. We are under the CHAOSS code of conduct, so please be kind to each other. We have a pretty full agenda, actually, and I'm happy to entertain additional items if somebody's willing to add any additional agenda items. But with that, maybe we turn it over to Gary, because this is the agenda item that we didn't get to in the last meeting, which was around AI governance and OSPOs. So maybe, if you want to kick us off.
C
We were talking about AI quite a bit in the last OSPO working group meeting, and it popped into my brain to ask the question of how folks on this call are thinking about governing AI, and if that's an OSPO responsibility. Because in the scope of OSPO maturity models, and working in organizations that are staying current with what's happening in the world, AI is going to be something we have to think about, whether or not it actually winds up impacting businesses and productivity and all that good stuff. I think that we have seen some interesting requests to use AI; we've seen some interesting approvals and some interesting denials so far, and it usually has to do with different components of what goes into an AI model and what goes into an AI data set. I thought I'd open the floor to ask that question. I have my own opinions, but I'd like to hear what people think first, and then put in a little bit more context as the conversation goes.
B
I can say, from my experience developing CHAOSS metrics in the open source scientific community, that whenever I encounter a repository that stores its computational models, it takes a really long time to process changes, because Git deals with them like they're giant multi-gig pieces of text. So there's a practical matter of them not really being software artifacts, but being essential for the execution of software.
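A common mitigation for the problem described above is moving large binaries out of regular Git history (for example into Git LFS). As a minimal sketch, the script below walks a local checkout and flags files that plain Git would snapshot as opaque multi-megabyte blobs; the 100 MB default matches GitHub's hard per-file push limit, and the path and threshold are illustrative assumptions, not anything discussed in the meeting.

```python
import os

# Files above this size are rejected outright by github.com pushes, and
# even smaller binaries bloat history because Git stores them as opaque
# blobs rather than diffable text.
THRESHOLD_BYTES = 100 * 1024 * 1024

def large_files(repo_path, threshold=THRESHOLD_BYTES):
    """Yield (relative_path, size_bytes) for files exceeding threshold,
    skipping Git's own object store under .git/."""
    for root, dirs, files in os.walk(repo_path):
        dirs[:] = [d for d in dirs if d != ".git"]
        for name in files:
            full = os.path.join(root, name)
            size = os.path.getsize(full)
            if size > threshold:
                yield os.path.relpath(full, repo_path), size

if __name__ == "__main__":
    for path, size in large_files("."):
        print(f"{size / 1e6:8.1f} MB  {path}  (candidate for Git LFS)")
```

Anything this reports is a candidate for `git lfs track`, so the repository keeps only small pointer files in history.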
D
Speaking to that: I don't really have an opinion around metrics, basically for that same reason, at least not yet. However, I can share (and some of you may already be aware of this) that Stefano, executive director of the OSI, is conducting a series of workshops this summer and fall.
D
So if any of you are there, you might want to come; that might help inform this discussion a little bit. But yeah, I definitely agree it should be something we think about, metrics-wise.
E
Yeah, so regarding governance at Equinix: the people who are talking about AI are the same set of people who are talking about open source, so there's an overlap there, but it actually brings in additional folks from legal, considering risk and responsibility and potential liability for things.
E
There's a sort of heightened sense of uncertainty and risk assessment going on, which I think I can say openly, and those risks go well beyond the usual compliance risks that open source folks have to think about. So it's going to have an expanded audience.
E
Yeah, I think it happens both on consumption and production. If I consume a model that has some proprietary information in it, and then that leaks into my internal work product, you know, could that have bad implications as well?
B
Somebody shared this link, and you can see it if you want to go to it on your own: "Blueprint for an AI Bill of Rights." It might be useful for some; I don't want to go through it here. But yeah, now I've not lost track of where I am; there we go, I'm back.
A
I don't know; having some metrics that would help them better understand if their people are using some of these particular tools, I think that would be interesting. Yeah.
A
How we do that, I don't know. Christine, you have your hand up.
H
Yeah, so one thing is, just like you said, the metrics could be just looking at Hugging Face, and some authors are involved in just helping to figure out which models are open source, truly open source, kind-of open source, and what the licenses are. And sometimes it's computational, because you're trying to figure out what the cost of it is going to be, and even just getting some of that information could be related to metrics.
H
If the model is this size or that size, for example. And then I'm also part of an OpenSSF AI/ML group (it's thinking of becoming a working group) that kind of spun up because of this, and in that case they might be thinking about things related to security risks and bringing out white papers. But even just, I'm assuming that in the future, as an OSPO, if you're in charge of bringing in models, or helping with security and evaluating models, there might be some metrics that could be of interest.
B
That's really good that the OpenSSF is doing something. I'm curious if anybody has thought about what my responsibilities are from a copyright perspective. So if I build a model for machine learning, but I built it using data from the internet that I perhaps explicitly don't have the rights to distribute, and I use that, you know, non-distributable source data to build a computational model: has there been any discussion anywhere about what then?
C
I can definitely share that there are models in the wild that are trained from data sets like Common Crawl, which literally crawls the internet and makes absolutely no copyright claims or license claims about how that data gets used in models and, eventually, AI-like tools. Those things are fine for a cool project that you publish on Hugging Face and make absolutely no money off of, but they could potentially cause problems if we ever use them in a commercial product.
D
Adding to that, I can share (and I don't think this is a surprise for anybody) that Red Hat, and other organizations that I'm aware of, have reflexively told everybody: don't even think about using any kind of ML or LLM, sorry, around code generation, because of the copyright issues. We just don't know where it's coming from and where it's learning from. So again, this is a very reflexive reaction that I'm sure will evolve over time, but right now we don't understand it.
D
Right, because we just don't know what it opens ourselves up to; this is really the GitHub Copilot discussion all over again. You know, it's hard, so yeah, we're separating it out. It's the LLMs in particular: what are they, in terms of artifact or whatever, and then their product? That's the thing that's raised a fairly alarmist and conservative response, which is basically: until we figure this out, nope, we're done.
D
You know, but I strongly suspect that there are people in my company who are ignoring that too, because it's not half bad. But that's another conversation.
A
Okay, thanks for bringing that up, Gary; that was a really interesting discussion. Matt, you're next on the agenda.
F
I'll try to be brief. It's an update from our conversation from two weeks ago, and I'm trying to bring a couple of threads together here. So one is, as some of you may or may not know, we've been invited to do a book chapter, around metrics, for the book that Anna's putting together in the TODO Group.
F
Metrics that could be useful in an OSPO sense, that is. One of the questions we had when we were thinking about initial drafts of the book chapter was that it might be useful to know what the other chapters were about, so we could either work with them, say something different, or not repeat what's being said; you know, just to get some context on what we could say. So that was one thread.
F
The other was what metrics and metrics models might be useful for people in open source program offices. As opposed to just throwing out metrics and metrics models and saying "here, they're all available, you figure out how to use them and how to provide context in your own sense," maybe we could provide some of that context for other people as they approach metrics and metrics models. So I'm trying to bring these two threads together, and I'll ask Sean to click on that link.
F
Okay, so this should look familiar from a couple of weeks ago. What I did was: there have been a couple of discussions around maturity models, or stages of growth, and the reaction to that has been mediocre, in the sense that not every open source program office has to do everything to demonstrate, say, maturity or growth; you don't have to do all the things. I get that, no problem there. So there are four things across the top, and at this point I'm calling them OSPO activities.
F
Another phrase or word could be used, no problem. Across the top is adoption, education, engagement, and leadership, and I pulled those from... Sean, could you pull up that PDF from down below?
B
Yes. So I can't pull it up, because it downloads, but there's this PDF down here at the bottom, in the notes. That link is shared in our notes, and I can also share a link to these slides in the chat so people can download it. But my version of Zoom, or my version of the browser, does not let me show it in the browser; it just downloads the PDF. Okay.
F
So the PDF, as you've probably all seen, was a publication from the Linux Foundation. It talked about the stages of growth of an OSPO, and they provided a maturity model in that document; it's not too deep into the report, maybe page six or seven. And I thought: why would I try to, you know, reinvent some of those headings when, in fact, whoever wrote that document (I think it was Chris Aniszczyk) has kind of already done this?
F
For each OSPO activity, down below are particular practices that may or may not be representative of adoption, may or may not be representative of education, and so on and so forth. I tried to fill those out based on the conversation we had a couple of weeks ago. And then, Sean, if you could go to, say, slide four within there: under education, legal education would be three objectives, again.
F
So it's really meant to frame the conversation that often occurs in this group, so that we could understand where metrics and metrics models might be useful against a particular objective, against a particular practice, in that domain coming from that PDF. It might help us do two things. One is: it might help us think about metrics and metrics models that achieve particular things inside of an OSPO; and two...
A
Oh yeah, this is something that Anna's been driving within the TODO Group, and it is a pretty interesting way of looking at it. That's a really good point.
H
On roles: because you were talking about activities and some of the things that happen, it might even be governance or project management, and somewhere in there should be compliance. But it'll help us. So maybe map it on the roles, because I think you're talking about roles as well, yeah.
A
What I'm thinking about this (and this is probably why you posted it) is: I think this might help us figure out what the gaps are in the activities.
B
Yeah, no, there's obviously a lot here.
A
Cool. If somebody has some time, there are some really good comments in the chat; if people could paste some of those into the doc, that would be awesome. Like, Remy's seen the pedal model, which might be an interesting way of describing this. But in the interest of time, I am going to go ahead and move on.
A
We have several agenda items left. We have kind of an open call, and we talked about this, I think, in the last meeting: how we can talk more publicly about the metrics and metrics models we're using. What do people think, should we try to cover that today? We also have some more on the viability metrics models, and a couple of miscellaneous items at the end that people have added.
A
Party emoji to push it to next week, thumbs up to talk about it now.
A
Thanks; that was a ridiculous way of doing this, but done. So, Gary: viability metrics models.
C
Hi, I wanted to bring this up with this group first. This has a heavy overlap with two other working groups, the metrics models working group and the risk working group, but it also has some stake in OSPOs, because I see the people who will likely use viability being people in OSPOs who are considering whether or not certain open source projects are viable for their organization.
C
I've put together four components of viability based on the conversations that we had about viability before. The conversations we did have, where I walked through what the viability model looked like, were very productive and extremely helpful to me in shaping what these look like. So I wanted to put it out there that, with these categories here, I would really appreciate anybody who has any time to donate, or any interest in shaping what viability should look like.
E
Yeah, so I don't want to go way deep into this, but I'm sure that risk fits into some part of this viability thing, which is sort of more forward-looking.
C
Yeah, and those kinds of things I want to be able to capture in some way in the model. Hopefully not, like, I don't want to have a single on-a-scale metric where it's like "I think that this shouldn't be used," because that's not helpful. But I am definitely open to anything that you think is easy to categorize, or is categorizable in that way. Just off the top of my head...
C
I think Elephant Factor might be good for that: if Red Hat or whoever is maintaining a large stake in a project that we're using, it may go from being viable to not viable based on other contexts. Those are the kinds of things, and the kinds of thoughts and comments, that I really want, to make this model the best that it can be.
C
Yeah, and also, in contrast to what I just said, I don't want it to be so cold that you can't put any context in, or can't have any opinion that you would need to elaborate on as you look at viability. I'm trying to reduce it and find a good balance.
A
Hey, thanks Gary. So we all have the action item to have a look at the docs above and provide some feedback, if you have feedback for Gary. Okay, so the next one is that our OSS EU panel has been accepted, talking about determining OSPO value; that's myself, Chan, David, Harsh, and Matt. So that's super exciting, and we're excited to be doing that in Bilbao.
A
Cool, so that's pretty exciting. I'm gonna guess that there are probably some other related talks. I have a talk about how to grow your contributor base, so it'll have a metrics section as well. Anybody else have talks at OSS EU that you want to mention?
A
Nice, yeah, Remy's got his hand up.
L
Yep, so for the OSPOlogy monthly meeting that the TODO Group does, we had a meeting a couple of months ago, maybe it was March, talking about OSPOs in highly regulated environments. It was well received, and we proposed a sort of part-two follow-up for that, and that has been accepted for the EU Summit. I'll be attending remotely, but our friends from, like, Amsterdam and a few other OSPOs in the government space and energy space will be attending in person.
H
Oh sorry, I have a talk. It's around experiences in the OSPO around GitHub management, and there's gonna be an element in there around audit and the metrics that we use; that's also been accepted. It might tie into Eric's next agenda item as well.
C
It's not my talk, but I'll plug it: my friend Natalie Vladko did a Berlin Buzzwords talk about building on-ramps for non-code contributors in open source, and that is also going to be at the EU event.
A
Nice. Sophia? I'm also talking on metrics; I phrased it as "metrics office hours," and basically it was mostly around applying and communicating metrics inside of companies. But I don't know if I can attend yet, so we'll see. Maybe.
A
Fair enough. Christine, do you still have your hand up, or is that just an artifact? Okay. Eric.
K
Hello, yeah, I tagged this on to the end, and it's okay not to take your time here; feel free to ping me offline, as I said. But basically, my team here at GitHub has been working on, as a few of you have seen, the organization metrics dashboard and the project dashboard. We're actually gonna sunset those, due to data back-end changes that we are unable to keep up with, but we wanted to parlay that work into a more general-purpose...
K
...API-focused solution for folks, which will include something like a reference implementation on some common open source tooling. Alongside that, we wanted to see if there are things that folks are doing with their metrics today. If you're graphing or building time-series data off of the GitHub API, and that data either has chunks missing that you have to fill in, or is computationally expensive, we could spend some time improving that on the back end to make your lives easier and to make the tools better. We have some engineering time and capacity to do that, so I wanted to put a call out for feedback. And again, I'm on the CHAOSS Slack, so feel free to ping me there. But if there are things that you're doing with the GitHub API that you wish were better...
K
Basically, let me know about that, and we will endeavor to fix that stuff. I'm also really curious if people have existing existence proofs of building community health or project health dashboards using CHAOSS metrics and tooling that you could share. I'd love to see it, and basically see if we can turn that into a reference implementation for people who are just getting off the ground and want to get started with metrics.
K
What would be the sort of beginner dashboard that they'd be interested in looking at, and how can we make that a templatized, reusable thing that we could share with people that are just getting off the ground?
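The time-series graphing Eric describes often starts as something as small as bucketing commit timestamps by week. As a minimal sketch: in practice the timestamps would come from paginated calls to the GitHub REST API (the "List commits" endpoint), but here they are passed in directly so the logic stands alone; the function and its name are illustrative, not part of any GitHub tooling.

```python
from collections import Counter
from datetime import datetime, timedelta

def weekly_commit_series(timestamps):
    """Bucket ISO-8601 commit timestamps into counts per week,
    keyed by each week's Monday, filling empty weeks with zero so
    the series has no gaps when graphed."""
    mondays = []
    for ts in timestamps:
        day = datetime.fromisoformat(ts.replace("Z", "+00:00")).date()
        # Snap each commit date back to the Monday of its week.
        mondays.append(day - timedelta(days=day.weekday()))
    counts = Counter(mondays)
    series = {}
    week = min(mondays)
    while week <= max(mondays):
        series[week.isoformat()] = counts.get(week, 0)
        week += timedelta(weeks=1)
    return series
```

The explicit zero-filling is the part that matters for dashboards: a week with no commits should plot as zero activity, not silently disappear from the chart.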
A
I have one. What's missing, that I would like to see in the GitHub API, is dependents. Dependencies are in there, and if you go to the Insights tab there's dependencies and then there's dependents, and the dependents are not in either API.
A
Say we don't think we want to work on something anymore, and we're like, no, there are way too many people who rely on this, and you need to have a plan for that if that's what you're going to do, and it's going to take a year or two. But we have no way of doing that in an automated fashion; I mean, I've used Criticality Score.
K
Well, no promises, but that's a great example. No promises, obviously, but that's closely related to an area that we're working on, so I'd love to talk more about that and see if we can help out there. Yeah.
A
That'd be great. Christine, you have your hand up.
H
I have it; do I need to find it from the Insights dashboard? So I'll give you more feedback, but that was one that came off the top of my head.
K
Here's, and I'll call out Cauldron here, an open source tool that you could use; here's a way to get started with it, and here's the kind of first five things that you should look at that would help you understand the health of your projects. Gotcha.
A
Yeah, okay, cool. One more thing for the agenda: should we skip the meeting on July 14th for FOSSY? Because it does look like, in chat, a lot of us are going to be at FOSSY.
A
Okay, so let's plan to do that, and we'll just plan to cancel the one on the 14th. Elizabeth, is that something you can fix on the calendar? Okay, awesome, so we'll do that. Anything else, quick, that we want to talk about in our remaining... I don't know how many times I've asked this, it just does not stick in my head: do we end at a quarter till or ten till?
A
Okay, sorry. Anything else people want to talk about quickly?
G
I've started to use some of the, like, metrics tooling on my NFI repositories; that way we can, like, map common best practices on policies, etc., or cohorts, to make the OSPO's problem easier.
G
Because, like, we have our team that owns repositories, but they differ in their normal needs. So I was wondering if anyone else has, like, some experience taking these metrics and turning them into cohorts of repositories that are used by an OSPO, as opposed to, like, one repository and the things shown for the repositories themselves.
A
It's a really good question. I mean, it's something that we've done with the metrics, at least within the CNCF, so I don't exactly know how we've done this within Augur or GrimoireLab, but it is something that we look at within CNCF DevStats, because there are projects like, for example, Kubernetes: that's not just multiple repos that you have to look at, it's multiple GitHub orgs that basically form the basis of one project. And I know that at VMware...
A
One of the things we've looked at is: we have a couple of projects within some of the VMware orgs that have, you know, four or five repos, but it's all kind of the same project, and we want to look at it as the same project, and not necessarily as separate repositories. I'm curious, Sean, what you think about that from an Augur perspective, whether you've looked at, like, groups of repositories.
B
We handle it two ways. The first is: if you just put in a GitHub org, we'll get all the repos for you. The second is that now, with Augur, you can log in and create a group that is composed of whatever you want it to be composed of. So if you just want to look at one GitHub org as the group, or if you have a project, or an area of management or interest, that spans multiple orgs and select repositories, you can create your own group for that as well. Yeah.
G
Like, so, to be clear, I'm talking about, like, automatically making groups of repositories, right? So, like, all the repositories that, like, build a package, or all the repositories that have, like, more than 39 commits in the past, like, 60 days; and then, if you, like, combine those two...
G
You get, like, actively developing repositories that build a package, and so you can build up an understanding of groups of repositories at scale that would otherwise require you to, like...
G
Right, yeah. So, speaking to some of that metrics work: you'd define thresholds that then say, like, between cutoff A and B it's a cohort type.
G
And then every repository would have a true or false value for every type of cohort.
B
From an analytical perspective, that's similar to an idea that I'm looking at right now with Augur data across, like, a hundred thousand repositories from different domains, trying to describe how different organizations (corporate, government, scientific, academic) differ in their patterns of engagement. And so the ability to say, okay, maybe my OSPO has 11,000 repos, I'm most interested in the ones that have activity, and then creating a group of those: I think that's what I hear you saying.
G
Yeah, so, like, we've got, like, community cohorts of...
I
Yeah, I was thinking about it, because I actually also struggle with this problem, in that when we're looking at everything, it's hard to apply groupings, because a lot of them are subjective. So I think for select projects, where we have more robust metrics or tracking around them...
I
We've had project leaders say they explicitly want to be included in those metrics programs, which could include things outside of just that central organization. From an organization-management perspective, as an OSPO, we're encouraging practices and better groupings, and we do have explicit policies around organization creation, where a project can have its own organization.
I
Take Golang (and I've presented this publicly a number of times, so I'm not spilling any beans): we have our Golang organization, which has 55 repositories under the Go organization, but then there are other projects and related code that sit under either personal repositories or other affiliate repositories that are actually part of, and integral to, the Golang community. And so we've manually added them in for our metrics program, to look at them. So I think, for things that we are actively looking at, it does require some sort of manual selection.
I
The other way that things can happen is that there is some logical grouping by functionality, say things like the cloud-native space, where there's also the affiliation with the CNCF and the CNCF umbrella, and that does have some related correlation to our organizational structure at Google. So a lot of, say, the people who work on those projects roll up to the same VP, so they tend to be looked at together, or grouped together.
I
Logically, there is an element of organizational structure that's also at play in terms of how we group and think about project spaces. And then, on the other side, on the compliance team, there's a lot of logical grouping happening for things like activity levels: are we working on this, are we not working on this, do we need to put this up as a candidate for archival? And so then there are things applied to our own repositories that are more around compliance checks, like...
I
Is this an archived project, a current project? Do we need to go tap someone; what are the actions we're going to take, because it's in sort of this middle ground? And so then there are also logical groupings happening from sort of a management and hygiene perspective. Absolutely, but I think on an aggregate scale, this is not something that has been addressed well.
I
I think some of the things that we're thinking about are: how do we create a better metadata structure around it? Because this kind of information can't really go on GitHub, necessarily, because it's something that might change, and also, I don't think other people pulling from these repositories on GitHub necessarily care. It's subject to how we think about these repositories, and groupings of them, as a company. And so a project that I'm really hoping can help...
I
...us eventually would be something like GUAC, which is happening inside the OpenSSF community, and is looking at creating a metadata structure around repositories, labeling, and other types of data. Not to say this is anywhere near ready yet, but it's something that I've been following as a way to potentially start to provide more logic around how we can understand where these things fit inside an organization.
A
Well, thanks. I think we are now a little bit over time, but I think that was worth the discussion, because it was a really interesting one, and something a lot of us struggle with. I would encourage you, Justin, to maybe ask that question again in Slack and see if some people who aren't in this meeting have some other perspectives.
A
Okay, so we're gonna skip the next meeting, so we'll see you all in a month. Thanks, everybody.