
From YouTube: CHAOSS Metrics Models Working Group 4/26/22-4/27/22
Description: Links to minutes from this meeting are on https://chaoss.community/participate.
Welcome, everybody, to the Metrics Models Working Group meeting. Today is April 26th/27th. I think we're going to revisit some things today that we had from the Asia-Pacific call last week. I think there are still some discussion points there as well, so I'm going to go ahead and share my screen. Can somebody put the minutes in the chat really fast?

There we go. Thank you. The first really quick thing is largely for Yehui, or folks who are interested in the conversion rate metric. We're done with the Google Summer of Code applicants; all the applications are in, and the same with Outreachy. So at this point we have to decide who we want to select for the mentorship program.

And one of them is building it in GrimoireLab. I've asked George if there's a GrimoireLab person who could co-mentor, just because I don't know how well you've mastered GrimoireLab. I didn't want to automatically throw that all on you, but if you're comfortable with it, I won't worry about it. Usually GrimoireLab folks are super about helping during Google Summer of Code.

...on the call. What we do is this: there are a large number of applicants this year, so usually we just ask the mentors to provide a top four or five that they have an interest in, so that we can get the list down to a manageable group we can talk about, rather than discussing maybe 20 or 25 people. And then from there...

All right, the next thing is the metrics model step-forward proposal, which we talked about on the Asia-Pacific call. These are just notes from that call, and I think there are a few things we still need to sort out. The step-forward proposal was received really, really well on the Asia-Pacific call last week. And so, Jun...

Okay, I can quickly go through it. I just want to show our steps. We have five steps, and the most important one is step two, the metrics step two; in the picture, that's picture three. I just want to show that our current stage is metrics step two and metrics model step one.

Sorry, yeah, I'll show you. You introduced some visualization using the notebook into the metrics model, and we really love this idea, so we want to do some enhancement, meaning we introduce the algorithm definition in the notebook along with the data insight, not just show the visualization.

So the data would be, for example, and Daniel and I have discussed this, the data could be GrimoireLab data or Augur data. Where the data comes from is not relevant, but we could decide on delivering it in a JSON format regardless of the origin. That seems like a smart idea: we wouldn't have dependencies on a particular tool, but we could actually have shared data.

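
[Editor's sketch: a minimal illustration of what such a tool-agnostic JSON payload might look like. The field names, model name, and file name here are assumptions for discussion, not an agreed CHAOSS schema.]

    # Illustrative only: one possible shape for a tool-agnostic metrics
    # payload. Provenance is recorded in metadata, so consumers never
    # need to know whether Augur or GrimoireLab produced the records.
    import json

    payload = {
        "metric_model": "community_activity",   # hypothetical model name
        "source_tool": "augur",                  # or "grimoirelab"; provenance only
        "retrieved_at": "2022-04-26T00:00:00Z",
        "records": [
            {"repo": "chaoss/augur", "week": "2022-04-18", "commits": 42},
        ],
    }

    with open("community_activity.json", "w") as f:
        json.dump(payload, f, indent=2)
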
For either Augur or GrimoireLab, it's pretty easy to say: we called these APIs, pulled this data, and this is exactly what we did to transform it. And the data insight would be the piece that we would then evaluate, because when you think about proving that any algorithm has utility, there's an evaluation like this.

Do you agree? Yup, yup, okay. Yeah, actually, from a how-you-do-this perspective, this makes it significantly clearer than the approach we took of essentially burying all of this in a notebook and a series of API calls, so that we never persist the data, for example, in the work that... and I did. So I like the structure that you produced.

I like the structure much better, and I think it's a model to follow, because I think we can get to that evaluation point much more clearly with the model that you've laid out here, with this other improvement. Are you talking about the insights? Yeah, so the data insight is the thing we can evaluate; the algorithm that produced it is clear and separate, and the data that's used to generate it is persisted and clear.

Yeah, and this leads to a question regardless, an outstanding question of interest that we haven't solved yet: if we use notebooks, it would be really great if there was a way for people to make a comment on a notebook, and I don't know that there's a plugin that does that. But if anyone discovers one, I think that would be exceptionally useful, and then we could host these metrics models somewhere as well.

Yeah, I think Draca has already set up the whole notebook environment on one requirements file, if I remember correctly, in the community awareness metrics model. I think, based on this file, Changi and Android and I could work together to set up guidelines for the whole notebook environment setup.

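
[Editor's sketch: a minimal example of what such a pinned environment file might contain, assuming the notebooks rely on the common scientific Python stack. The packages and versions are illustrative assumptions, not the contents of the actual community-awareness file.]

    # requirements.txt (illustrative; pin whatever the notebooks actually use)
    jupyter==1.0.0
    pandas==1.4.2
    matplotlib==3.5.1
    requests==2.27.1

    # one-time setup, then launch the notebooks:
    #   pip install -r requirements.txt
    #   jupyter notebook
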
That folder makes the way that the metric is calculated transparent. It separates the presentation of the data from the machine learning or data analytics that are used to produce the data that's displayed. Okay, and then the data insight is...? Yeah, the data insight essentially becomes the thing that we show people, so they can look at data about their community and provide us feedback, basically an evaluation of the utility of a metrics model. And I'm sort of intersecting some conversations that Yehui and I had on Slack about this.

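
[Editor's sketch: a minimal illustration of the separation being described, reusing the hypothetical JSON file from the earlier sketch. The file names and the toy metric are assumptions; the point is only that the algorithm writes a persisted insight, and the notebook reads and displays it without any analytics logic.]

    import json

    def compute_insight(records):
        # Toy "algorithm" standing in for the real analytics: average
        # commits per record (illustrative only).
        total = sum(r["commits"] for r in records)
        return {"avg_commits": total / len(records)}

    # Analytics side: run the algorithm and persist its output, so the
    # calculation stays transparent and separately evaluable.
    with open("community_activity.json") as f:
        records = json.load(f)["records"]
    with open("data_insight.json", "w") as f:
        json.dump(compute_insight(records), f, indent=2)

    # Presentation side (this part would live in the notebook): read the
    # persisted insight and show it; no analytics logic here.
    with open("data_insight.json") as f:
        insight = json.load(f)
    print(f"Average commits per repo: {insight['avg_commits']:.1f}")
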
I love the slides that you created; those are excellent slides. In the last meeting we were talking a little bit about what a metric is, what a model is, and what a focus area is, and to be honest with you, my thoughts had also gone to that data, information, knowledge, wisdom triangle, the DIKW triangle, when I was thinking about that.

So when I saw this, it was just like, oh yeah, I was kind of thinking of that as well, so very cool. The comment that I would make on this, though, is that we are becoming very, very implementation-heavy, and I question whether we want to go that far in the metrics models, because really, for me...

I think the discussion that Yehui and I had in Slack really resonated with me: we need to evaluate these different metrics models, because I think that provides some kind of evidence to people that they're useful, and I think we're going to learn that some metrics models we dream up are not useful. So I think we can approve and publish a metrics model at the definition stage, and then there's a separate checkbox that I think is really important, or will become really important.

That's why we set up this metrics model working group together, and we've already stepped into this: we have to prove that a metrics model, these metrics with some logic inside, can provide value for people. So we have to verify that, prove that. That's why we provide some real-world designs and data insights, provided by our group, and that data could come from community managers with their real experience working in the community. And finally, they would find out...

...whether this metrics model could really work for them. And as to your concern, I agree, and I think we can include me and Shane and other people in this meeting; we can provide some implementation support on the metrics models we already have. And if, for example, issue 100 or a similar request needs some metrics model and they don't have enough time, Trenchi and Julia and I could also provide such support on that.

...the working group and the creation of the metrics models. We're almost jumping past the model into implementation, and perhaps we need to slow down and spend a little bit of time on the model, creating that recipe for implementation, before we jump directly into implementation. And then there's a little bit of confusion about what that implementation would be for our working group, because we do have software working groups. Where is the implementation better handled? In Augur?

Sean, one of the things that I think came out of a discussion that Daniel and I had when we visited in Madrid, with some of his team, is that what we'd like to do inside CHAOSS is make the software part a little bit more agnostic and supportive of efforts like this. So we can produce these JSON files with whatever tool somebody knows, whatever makes the most sense, and then it comes to fruition in a notebook.

If that's really successful, of course we're going to roll it into GrimoireLab, and maybe we'll roll it into Augur. But providing tools that let us get to this point where we have data matters, because there's a much broader skill set there: there's a larger number of people who understand how to run a Jupyter notebook against a JSON file than there are people who have the time and inclination to learn all of Augur or all of GrimoireLab. So I think this could help us build the software community inside of CHAOSS by taking away that barrier.

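
[Editor's sketch: what "running a Jupyter notebook against a JSON file" might look like in a single cell, assuming pandas and matplotlib and the illustrative payload shape sketched earlier. The file name and fields are assumptions, not a published CHAOSS notebook.]

    import json
    import pandas as pd
    import matplotlib.pyplot as plt

    # Load the shared, tool-agnostic JSON file rather than calling a live API.
    with open("community_activity.json") as f:
        records = json.load(f)["records"]

    # Turn the records into a DataFrame and plot them.
    df = pd.DataFrame(records)
    df.plot(x="week", y="commits", kind="bar", title="Commits per week")
    plt.tight_layout()
    plt.show()
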
Okay, thank you for this conversation. Part of this is just listening, and part of it is my own thoughts too. One: I do think, and I've thought about this, that in CHAOSS we have to reach towards insight. I think we have a program that does that right now with DEI badging, and I think it's hugely successful.

You know what I mean, that we kind of do one or two at a time, really slow. This is part of the conversation, yeah: we take our time with it, as opposed to just kind of producing them, I see your thumbs up, producing them the way that we produce metrics sometimes. But part of that slowness does include this evaluative component, you know what I mean.

I think my thought would be that we at least start the work here, and if it's clear that we can't support it on an every-other-week cadence, or it ends up occupying the entire meeting, then we'll cross that bridge when we come to it. But until then, I think I'd kind of like to keep it here, because even DEI badging started in the DEI working group until it kind of became its own thing and needed its own...

Maybe we could think of getting to metrics model step one as... oh, Sean, okay. Getting to metrics model step one is creating the recipe for the model, right? So we get to step one, and this is the recipe for the model; then we get into step two, and this is "let's all get together and make dinner," right, the implementation, let's cook it up. But for me, I feel like there should be two distinct steps, where step...

...real-world insight to it. And it's not insignificant that Yahui thinks that openEuler may be able to provide us with some infrastructure where, effectively, we could stand up a version of GrimoireLab and a version of Augur, where people throw us their repos and we could generate these JSON files relatively quickly.

Maybe next meeting we can prepare some examples for the other metrics models, not just the community awareness and connectivity ones. For each working metrics model we can provide a quick demo online to show how easy it is to set up the whole thing, and with that, maybe, we can work together to provide it.

I mean, the people working here have different backgrounds: we are professors, we are community managers, we are OSPO people, but definitely not all of us are data scientists, so we don't understand it all, right? So we have to make the whole data handling simple. That's our purpose.

...that, how to use a notebook, okay, and the JSON file, to provide valuable examples for the other metrics models, to see it very quickly. And also, we already have some instances of GrimoireLab or Augur on our side; actually, it's quite simple and quick to collect and handle that data.

Yeah, I mean, we can follow the working process we have used before. We can set up a Google Doc to give us a proposal that says: okay, I have an idea about a working metrics model, which contains several metrics. I give this metrics model several user stories or use cases, surrounded with some metric definitions, just like the metrics model template.

Once we have that, that's the first step, step one of the metrics model: creating and setting it up. And if we've already done that, we can step into the next step: we can quickly use GrimoireLab or Augur to start collecting the data from some communities, set up those notebooks, and quickly show the final result. And I hope the whole workflow is friendly, so it lets us see the result quickly.

That would be my goal. I don't know if Forgava is with us, but actually most of that notebook depends on JSON that's dumped from the Augur API. So I'm sure you have it all anyway, right? We would just save the JSON files, okay, and we would also have to extract the algorithms used to generate the JSON files. But okay.

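
[Editor's sketch: one way to dump an API response to a JSON file so the notebook no longer depends on a live instance. The host and route below are placeholders, not a real Augur deployment; consult the Augur API documentation for actual endpoints.]

    import json
    import requests

    # Placeholder URL: augur.example.org is a hypothetical host, and the
    # route is illustrative; substitute a real Augur endpoint.
    url = "https://augur.example.org/api/unstable/repos"

    response = requests.get(url, timeout=30)
    response.raise_for_status()

    # Persist the response so the notebook can run from the saved file.
    with open("augur_repos.json", "w") as f:
        json.dump(response.json(), f, indent=2)
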