From YouTube: CHAOSS.Monthly.February.5.2019
A: Why don't we just bring that up right now? There was a discussion in the previous working group — Don's, I'm calling it — about what we call the working group, again.
A: Why I asked for the name again — so that I would stop calling it Don's working group and call it the common metrics working group. So, in the common metrics working group meeting, which was just before this, Tobey raised a question which I think has been in the background of our discussions for some time. We've never gotten concrete about it, which is: at what point?
F: So the thing we see — having long, long release cycles — is how software was done a couple of years ago, and the WHATWG is shipping new versions of the standards every day. The browser vendors have adopted the WHATWG's model, so to me this is a model that tends to work better and gets things done.
F: We always have git hashes as versions if we really need to have something, so I would strongly recommend moving to a solution where there is nothing like a release cycle: every time you make a change that has been accepted, that change is published, and that's the new version of the standard. So, you know, just to get the conversation primed.
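A minimal sketch of the "git hashes as versions" idea raised here: derive the version identifier from the published content itself, the way git hashes a blob, so every accepted change automatically yields a new version and there is no release cycle. The function name is illustrative, not part of any CHAOSS tooling.

```python
import hashlib

def content_version(text: str) -> str:
    """Return a short content-derived version identifier.

    Mimics git's blob hashing (sha1 over "blob <len>\\0<content>"):
    any accepted change to the text yields a new identifier, so the
    'version' is simply the latest published state.
    """
    data = text.encode()
    blob = b"blob %d\0" % len(data) + data
    return hashlib.sha1(blob).hexdigest()[:12]
```

In a real repository one would just use `git rev-parse --short HEAD`; the point of the sketch is only that the identifier falls out of the content, with no manual version bookkeeping.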
G: For people writing tools, it's useful to actually have versions to standardize on, and part of the challenge here is that, for the metrics that we publish, we want tools available that implement them — and something that's continuously being updated, a moving target, will quite frankly disincentivize a lot of tool people. So if you can at least say: this is version 1, this is a baseline, here's a starting point — people can keep up, and so can tool vendors.
A: I can say, from the perspective of Augur, we're using a version of the metrics repository so that we can show you which metrics we've implemented and which we haven't. But the absence of a canonical release makes it difficult, because there have been dramatic changes in the structures of the repositories that we reference, and those have a lot of downstream work. So we insulate ourselves from that by working off of our own fork and absorbing those updates.
D: Another — the procedural and also logistical — half of this is that we could, in theory, set up an organizational page or project, chaoss, hosted by GitHub, that basically becomes the webpage for the Project CHAOSS site, in terms of this kind of repository. To give you an example of what I'm talking about — these are like organization pages, and I'll drop one.
D: So this is the one that I actually am in charge of for Red Hat. It has absolutely nothing to do with metrics, but I'm throwing it in the chat because it kind of defines what we're doing: we're basically just listing all the projects that we have a, you know, stake in, and — for lack of a better term — we just keep that all in a repo for the webpage itself. We could, I would imagine, set something similar up for our organization, and then, when a metric got to a release state that we wanted, we could probably just check off some sort of internal flag and say: this version of the metric is appropriate for, you know, release, so to speak. And it would display on a page like this, however we wanted.
D: Part of why — you know, who the audience is for this — is going to make a big difference in how you display it. Because if you're looking for, you know, software developers who might be building tools, then obviously a lot of what Kate is saying comes to mind. But if you're just looking for something that you want to point clients to — and I agree with that, because I've had a number of internal discussions at Red Hat, and it would be great, because they ask: what kind of metrics are you talking about?
F: Yeah, I mean, we went through the idea of different audiences for a very long time — even with the huge amount of resources that these different standards have, in terms of basically all the large tech companies being very involved with them — and we've never managed to get anything concrete and serious in terms of an audience breakdown and keep them up to date. Because that's the other problem, and that's my biggest concern, right: the minute you start having a number of different versions of stuff — you know, the same stuff for different audiences — obviously some are going to get out of date, and then no one knows what to trust anymore. That's the problem; that's the biggest problem I have with the way that stuff is owned on GitHub right now — it's pretty much out of date every time I look.
H: I mean, so I want to go back to what I was suggesting — I think we've had these discussions in the past. You know, even if it's updated twice a year, right before each CHAOSScon — right before the one in Europe and the one in North America — we start with twice a year: here is sort of a snapshot of where we stand, and here is sort of the implementation related to these metrics.
G: There are other examples along that line that might be worth following. Here's what we're doing with the SPDX license list, where we list the license IDs: we have a master list, and it's versioned, but it's updated roughly quarterly, or as people add to it. In some ways the metrics are sort of like these things, in a sense: everyone sort of agrees that this is a metric we're going to put in and make visible, and it gets added to that list.
G: We've got literally the same website that we've managed to keep going — because people also can machine-scrape it and things like that — but we've got this one site that we've kept up and going, week to week. Maybe create something similar there for CHAOSS, which is, you know, the official metrics, and then we just re-version it every time we formally do it. So you sort of see, on that page, if you scan down, version 3.4 with the date stamp.
G: So there's a GitHub repo behind this license list, and it's automatically generated — we've got scripts to generate it — but it gets generated every quarter, or whenever they decide that there's enough there to change. So that's sort of the master, but there's a version that the tooling can refer to — and, in terms of the metrics here, in terms of the licenses that are acknowledged on the list, then —
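The SPDX-style setup described here — entries living in a repo, a script regenerating a versioned master list with a date stamp each quarter — might be sketched like this. The entry data and function name are illustrative, not the actual SPDX tooling:

```python
from datetime import date

# Hypothetical entries; in the real setup each one would be a file
# in the repo rather than an inline dict.
entries = [
    {"id": "MIT", "name": "MIT License"},
    {"id": "Apache-2.0", "name": "Apache License 2.0"},
]

def render_list(entries, version, stamp=None):
    """Regenerate the whole master list from the repo contents,
    stamping it with a version number and a date, so tooling has
    a fixed artifact to refer to between regenerations."""
    stamp = stamp or date.today().isoformat()
    lines = [f"License List, version {version} ({stamp})", ""]
    for e in sorted(entries, key=lambda e: e["id"]):
        lines.append(f"- {e['id']}: {e['name']}")
    return "\n".join(lines)
```

The design point matches the discussion: the list itself is the generated artifact, and "cutting a version" is just re-running the generator once enough accepted entries have accumulated.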
G: No, I'm sure you probably won't, but — the full name, and then you link to it. We're working on having templated versions of the information, and if you clicked on one of those license full-name links, it would go to a template, effectively, that's stored in GitHub. It's very close to the type of template that you're talking about for the metrics. Yes.
F: That's exactly what I'm advocating for. So if that's what everyone agrees is the simplest thing to do, I agree. I mean, one of the key reasons I'm pushing for sort of a model that links to whatever the latest live thing is, is because it's much easier to set up, right? And if that's the kind of model you're looking at, that would be great for me — that would be fantastic, especially if it's nice and pretty, yeah.
G: The other thing is that this gives us a way of deprecating identifiers on the licenses. And, you know, as a metric gets solidified and changed, you may decide that there's a better way of representing it — we'd want to deprecate metrics the same way, so this gives us a way to classify that. So it's a top level, and for the most part, once the metrics are up there, the same way we really wouldn't change a license definition unless someone found a bug, we wouldn't be changing these metrics. I'm not sure you'll find a bug, but if you did find one, you could update it without, you know, going through a major rev: you could just update it on the GitHub page and then it would be showing up here. I think what we're doing is taking our GitHub repo, processing it, generating web pages, and just putting those up. It's done automatically from the templates that are sitting in our GitHub repo — those web pages.
A: I have one question. A few times in the course of doing this, we've gone through the definition of a metric and identified a case where something needs to be added to the definition — because the question comes up about how we handle whitespace commits, and how that's accounted for in the definition. Is there room in this model for that kind of evolution of an existing metric definition? Does that work? Yeah.
G: You treat them like bugs, effectively. So, you know, you're putting it up here — everyone's agreed that this is one that we're formally solidifying as a CHAOSS metric — and it's got a link here, and it points to the webpage that's been generated from our GitHub repos. The GitHub repos will get updated behind the scenes, and then the next time a new version comes up and the web pages are generated, that information gets encompassed.
G: You should be able to. I think it's a question of — as long as the same template forms are used in the different repos — I think it's a question of where you push it. I'm not familiar enough with your structure right now, but we could probably pull Gary into a discussion and see what makes sense. Yeah.
E: I was just saying, you know — and part of this is not necessarily procedural — the working groups have done a ton of work, and nobody sees it. Nobody wants to work on stuff that nobody sees, and we would like to kind of get these out there — the ones that are done — so that people can actually consume them.
G: Actually, we do have a CHAOSS metrics page; we could repurpose that to actually just be the metrics, now that we're farther along. Yeah.
G: I found out there's a new person at the LF who will be working on the CII, and I've invited her into this meeting too. When she ends up showing up here today, I'll introduce her — Jennifer Wilkinson. So if you see Jennifer's name, she's now working on the CII side. Next month, we should hopefully start to be building up critical mass for those metrics.
A: We are at — so, publication of metrics. Do we want to set a date of some time for the release of this website — let's say before the Linux Foundation Leadership Summit, the 12th-ish of March — and do we want to hand it to the working groups to identify the metrics that they want to try to release by then? And then, Georg?
F: So go ahead — if you want to discuss this offline, and if you want suggestions on how to architect that, I'm more than happy to answer those questions. You know, I've done that kind of stuff multiple times, using GitHub Pages and the like, and it's actually not a lot of work to do. So I'm more than happy to help set something up; if there's interest on your side, just feel free to ping me and I'll find you, you know.
A: So there's a pull request that's active, that I put in to try to organize some of the activity metrics that have been labeled different things in the course of our work as working groups. The growth-maturity-decline working group in particular changed a bunch of metrics names, so I've done a reconciliation with the metrics repository. There was some discussion about how to point people at that correctly, and how to name the ones that aren't being worked on, and I addressed all of those things in the pull request.
E: We did — we had our first meeting an hour ago, yeah. So, kicking off a new working group, we had mostly a discussion about kind of the purpose of the group. The first bit that we're going to try to tackle is organizational affiliation, which is not surprising, since that was the biggest gap we had identified — the one that sort of encouraged us to spawn this new working group.
E: What we realized in the past few of these weekly meetings is that, you know, we have a lot of traction in growth-maturity-decline and a lot of traction in diversity and inclusion, and then we have a whole bunch of stuff that no one's really working on that's still really important — things like metrics around organizational affiliation and organizational diversity. At the same time, we were having discussions around: what do we do with this metrics repository, which has loads of metrics that no one's working on, or that some people are working on but we don't know whether they're in GMD or in D&I? We don't have a good handle on this. So what we decided was that we would spin up a working group, basically to give us some focus around some of these metrics that are important but that no one's working on — whether or not this working group continues indefinitely.
A: So that's taken care of. The next working group is the growth-maturity-decline working group. We're forming, or solidifying, our code-related metrics release, and we have a couple of use cases: one that Jesús, I think, talked with Ray about, that's nearly ready to merge, and another one, around the community manager's use case, that's getting close to being ready to merge.
E: — was our MC, which was fantastic. We had keynotes from Ildikó and Brian Proffitt, among others, which were all really good; I thought the content was spot-on. We had Kevin videotaping it, so you will eventually be able to watch all of this soon — that gets all rendered and hosted, which I imagine will be in a week or two. That's my guess.
E: We ran a couple of tutorials that I thought went pretty well. For diversity and inclusion, we actually broke people up and had them define some of the diversity and inclusion metrics that we hadn't gotten around to yet, and I thought that some of the groups made some good progress. So I was really happy with that. Do Georg and others want to chime in?
H: Yeah — one day. I mean, we got a lot of stuff packed into a day, which is good in that we covered — we tried to cover as much content as we could — but I wish there were... I mean, I don't know how we do this for the next one. I was thinking, you know, maybe in addition to, or even instead of, the lightning-round talks —
H: Maybe we start an hour early — it started at nine o'clock, but I know people in Europe are traveling that morning, all right, so that makes it tough. But I thought — like you said, Don — the workshops went really well, with people participating, so I wish we had more time for that type of format.
E: It's hard, in a one-track event with that many people, to make it interactive, so the only bit that was super interactive was the tutorials. Ours was probably a bit the worst; Joris's was probably a little more interactive. It was a real challenge to manage, I'm not gonna lie — it was pretty difficult logistically, because there were so many people, to get it to work out right.
H: In one room as well, right? I mean, I think if we go back — I guess for FOSDEM we're going back to Eva's next year — we can partition the room, so we can have maybe two tracks in the afternoon. But yeah, having it in just one room was definitely difficult; we had to sort of rearrange chairs and all that.
A: I think if you go look at the Google Docs that were developed, there's actually some work that got done in that case. So I think that was a really good model for engaging people at CHAOSScon, and as the working groups for common metrics and growth-maturity-decline mature, I think that's a good model to follow — because we can actually get some work done while we're at CHAOSScon, instead of, you know, being not as active, not as engaged. I thought it was good.
C: Not in the car, but he will — yes, Kevin and I — I will assist Kevin in doing this. We talked about how to do it and got the files ready; now we need to actually polish them. One of the things that we want to do, because the recordings are focused on the presenter, is to overlay the slides, so there's some amount of editing in the videos to do.
E: The videos sort of trickle out, because someone has to actually review them so they can cut the beginning and the end off — to make sure it's just the bit where the presenter is actually presenting, because they record the whole day. So a human has to actually look at it and approve it — in my case, send it to me to approve — and I think they must have also sent it to the community. Deborah?
A: All right, so that's the conclusion — I think that concludes our meeting today. I have —
D: And so it is theoretically possible; you just have to give them a lot of heads-up and figure out what exactly is needed. But, you know, everything they do is basically script-based — where they cut the beginning and end and then do the review process — and it's all automated. So give it a shot, see what happens. Yeah.
D: The website's the same way. All the CFP submission stuff is in what's called Pentabarf — I think they shortened the name, but it's Pentabarf — and it's awful to use. So a lot of it is a challenge to actually use if you weren't the person who wrote it. So I'm saying your mileage may vary: what they have works really, really well in the context of FOSDEM, with a whole bunch of volunteers that are tightly trained on how to do it, but —
D: They can, because they did do it for Copyleft Conf. Okay — because, trust me, you don't want to use Pentabarf one bit.
A: All right — thanks, everybody, thanks for all the help, and see you next week.