From YouTube: CHAOSS July 10 Monthly Meeting
Description
July 10, 2018 Monthly Meeting
A
Hey, Jesús. We have Amy; I am Amy. Hey, Ben. How are you? Not coming in here for a little bit? All right, well, in the interest of kind of keeping on schedule, I will go ahead and call this. So this is our monthly meeting, right. [inaudible] The agenda, I sent it out.
A
We can help with just kind of waiting until late, late August. Okay, that sounds real good. Cool, all right. And then obviously Open Source Summit North America. Oh, I'm sorry, did anybody else want to add anything for CHAOSScon? Are we good, Shawn? We'll tell you, we're good. We see a very interesting picture.
G
And the other thing I was going to mention: it's pretty informal, but we can also propose unconference sessions. So if there are other topics that we want to lead, I think when we are all there on Saturday morning we can kind of huddle about ideas. Obviously we have a good topic with diversity, and, Shawn, I don't know if you want to have a separate session on growth, maturity, and decline, sort of a pretty informal discussion session.
I
Okay, thank you. So basically there are some keynotes by some people, and then there are the unconference sessions. It happens that this year they decided to have half of those in advance, so people know the kind of things that they are, let's say, facing during the weekend, and then the other half will be decided during the conference. In any case, all of those are going to be unconference sessions.
I
You know, from my understanding, the idea in an unconference session is that basically you present the topic you would like to discuss, and there are eight or nine in parallel; people go to each of those, and then you will have there a group of up to 10 people, 20 people at most, but those are people really interested in that topic, even the people that are kind of leading the proposal. [inaudible]
A
Okay, that sounds good. Okay, sounds like there's a lot kind of happening. So what I'll do in the minutes, when I send this out, is I'll kind of track down all of the talks that are going on at the Community Leadership Summit, fit around the schedule, and also track down, obviously, CHAOSScon, and then the talks that are going on at Open Source Summit North America, and I'll make sure that I get those into the minutes that I can share with people on the list.
J
You know that I like to be devil's advocate, but what is the point of going to all these conferences when the most important goal is to really create something that is useful? Sure. So, also, I think this was in line with the diversity topic too. So what makes diversity so difficult from the point of view of our group? Because it seems that it's a matter of defining metrics that proportionally measure who is diverse, which is relatively straightforward. So what, what is difficult about it?
I
For instance, in our case, in the OpenStack gender report, we were trying to focus on specific metrics from a quantitative point of view. Well, when we started to talk to Amy and to Emma and some others, then we realized that there is a big amount of topics that we didn't cover. So the work that we have so far is that we are trying to learn the common view of what to measure, and we are right there: we are still, let's say, agreeing on what to measure.
J
Why don't we do that more empirically? It seems to me that the real problem is to do the science behind it, so have proper empirical studies that go to the people interested and ask them exactly what they need. Currently it seems that many of the metrics are more like: let's see what we can measure and think about it, and in that line we can go infinitely into that.
I mean, I agree with you. Perhaps at the beginning we were focusing on the things we can measure, not the important things that we should be measuring, and having another point of view was really helpful. And basically that's the reason it's taking, I'd say, long in the diversity and inclusion working group: we are still defining what the important things are, and we are still there, so to say.
I
And, for instance, Amy is working on that kind of surveys, so perhaps you can elaborate a bit more here. The point is that each of us has a different role and somehow different skills. So, for instance, I can produce quantitative data, but I'm probably not the right person to produce a survey, but perhaps Amy and Nicole here can help a lot with that. So, I mean, your point of view is pretty good; it's something to discuss.
H
We're trying to use some of the same questions between the different surveys. I know in OpenStack we're using some of Mozilla's questions, especially the gender one; I really liked how they worked that out. But then we're also going to have, within our community, different diversity-related questions, so that we know how diversity affects our individual community. So you've got the overall ones, which I think will eventually be shared between the different communities, and then more specific ones for the individual communities.
E
I think we are having two different surveys being talked about right now. One is the ones that we produce with the metrics, that we can give people to assess their diversity and inclusion or whatever other open-source community health metrics; and the other survey, that Daniel brought up, is to define the metrics in the first place. And I think that kind of work is where we have the working groups, and so the working groups are responsible for figuring out what method they use to define what metrics.
J
Notice that everything that you have said about this particular metric, the subset of diversity, the diversity and inclusion, the answers are generic to any metric that we would come out with. And so it seems that the role of the different subgroups is essentially the same; it's just that they concern metrics that measure something in particular. So I wonder whether what we need is some sort of guidelines.
J
That said, if you're a working group, and your goal, of course, is to come up with metrics that measure what you are interested in, the question is how to determine those ones. My biggest fear is that we are going to become an anecdotal group that says these are the metrics that are needed, because this is what we think they are, this is what the people we have talked to say, without really scientific backing.
B
What I don't get is what you mean by scientific or by empirical, because if you just go to the street and ask people which metrics are relevant to you, that usually doesn't produce any sensible output. That's different! I mean, that's different from coming to a certain working group, or a certain number of people who are stakeholders or something like that, and asking them whether these specific metrics tell you something.
B
My intent, and my impression, at least in the working groups, is to try to come out with a set of metrics that we think, from our point of view, could be useful. And of course, then you need to go to the real world and ask people and tell them: is this useful for you? But usually what you need is not only to tell them about the metric, but to tell them how the metric applies to certain projects, so that they can really feel what the metric is about.
J
What you are breaking apart, what I'm saying, is stages. The first one is: we think these metrics are good. So that's kind of the first part of the story; this is the hypothesis. So this is where you sit together and you come up with them, and I think for that the groups can actually just essentially discuss, take input from different stakeholders, and come up with metrics. Now you have these metrics.
J
The point is: how do you demonstrate scientifically that those metrics measure what you claim, and that they are essentially the best to use? Because there will be a lot of correlation between different metrics and different ways to measure, and my issue is that I don't think that we should be only in the first stage, where we come up with the hypothesis that these are metrics that we think are good.
J
No, no, I think that's our goal. I think the goal is to come up with metrics that are useful to the stakeholder, mm-hmm, and I think that should actually be our goal for every metric. How do we measure that usefulness? How do you evaluate that a metric is really measuring what it's supposed to be measuring? What are the values and advantages of each metric? And to do it as empirically as possible.
F
The metrics measure what they measure, and I think, if they're objectively described as measuring specific elements of software production, then how someone interprets those is what's subject to debate, and how one applies the metrics, I think, is what's subject to discussion. But I think the metrics themselves are reasonably, you know, just ground out of data.
F
Because I think bringing the metrics to folks' attention, and making them visible in ways that they haven't been before, is quite useful. People have lots and lots of different ways that they count these things right now, but they don't have any consistent way of counting them, or of comparing things with a project they were on before, and the tool that they used, at a new project. So I think providing some kind of consistent foundation, I mean, I think it has value for community managers, if not software engineers, yeah.
J
All right, but interpretation is what I'm talking about in terms of value, and so that part, I agree, is scientific: how do you present them, etc. And I think that many of us have different interests. My interest is scientific. I think that the engineering part is relatively simple, straightforward; the really difficult issues are the ones in terms of: why is this better than the other?
B
But I don't think that framing this as "this is better than the other" is the interesting way of framing it. From my point of view, it's: this is more useful than this one for this specific purpose. That means that, first of all, you need to know the persona, for instance; the intentions and the needs of a community manager are way different from a product manager's. Yeah.
B
What they need may be different. That's why, if you intend to validate it by going and asking people what value they find in this metric, I don't think that's very scientific, in the sense that people are going to tell you what they think, and that's it. I mean, what you are going to know is how interesting people perceive these metrics to be, but not how useful these metrics are, which is a different step.
F
I mean, I think the use cases are different, and I think the engineering use cases, which a lot of us are anchored in, are more definitive than the human-centered design, or human-centered look at this, that maybe a community manager would have. So while "subject to interpretation" kind of bristles with our engineering sensibilities, I think one thing to keep in mind is that people that are managing communities don't have a window
F
that's consistent across the projects they work on, across organizations, necessarily. And, you know, we are giving them something that they can and will interpret in different ways, and I think studying their interpretations is also part of the scientific process; it's just way messier than the actual metrics. So the presentation and interpretation, and providing something useful, I think is one of the really hard problems. Yeah.
J
Yeah, it's a really hard problem, but let me even say that we already argue that this is what they might require, this is what they are using, these are the different ones, and so different managers require different metrics. We know that anecdotally; we don't even have a proper survey of them as managers.
A
Let me come in here; I'll try to just reiterate what I'm hearing. I don't think that Daniel is arguing, and Daniel, correct me if I'm wrong, but I don't think you're arguing for empirical truth, that there is one true thing here. I don't think there's truth, absolutely. I think you're arguing for being empirically sound in the first place.
J
But I think you're doing exactly that; I think you're phrasing it very well. I think that what I would like to see is more guidelines in terms of this process that needs to happen for things to be able to be proposed. Why is it that they're proposed? Because they come out of the brain of somebody, or because there is a survey that says: we talked to them, this is what they need, this is what 80% of them want, this is what 30% of them wanted?
B
My point of view is: I don't know how to design such a survey in a way which is useful. But if you, Daniel, or somebody else, wants to start drafting something that could be the basis for that, then I agree with all of you: even a small step would be very good. Even maybe, as a first step, just saying how much you are exposed to metrics, trying to measure that, for instance; I don't know, something which is simple enough to start with, and maybe leading to a paper, as you say. Other than that, I don't have anything.
E
To make one point: we're talking about the process, and the diversity and inclusion working group dealt with exactly this goal of how to have a good process. What we decided in the diversity and inclusion workgroup is that we document our process through pull requests, through issues, and through emails, and that we include references to other people's work that provide evidence that these metrics are useful and helpful, within our metrics detail pages.
K
But I think what it's going to take is combining all these component metrics that we've come up with into something that actually creates some sort of narrative around it. And, yeah, we're calling anything like that a derived metric; that seems like probably the best name, or meta metrics, or whatever they are.
K
So, like, the one that I came up with is development velocity, and it's pretty simple, because it really only takes into account two metrics right now; we could potentially add more. Because I think one of the issues with tracking commits as a metric for how quickly development happens is that, if you start saying more commits means more development, then suddenly your developers might just stop rebasing their commits, and when they have a pull request, instead of having one commit, they might have
K
20 commits that each have one tiny change in them, you know. And so I think there is some sort of inherent risk in making sure that the metrics don't influence behavior. But I still think there's value in creating these somehow, so, yeah, I'd be interested to see what other people have to say on this.
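The derived-metric idea described here can be sketched in a few lines. The meeting doesn't name the two component metrics behind development velocity, so the pair used below (merged pull requests and active authors per period) and the weights are hypothetical; the point is only that a derived metric is a function over component metrics, and that it can avoid the commit-count gaming mentioned above.

```python
def development_velocity(merged_prs, active_authors, w_prs=0.7, w_authors=0.3):
    """Hypothetical derived metric: a weighted combination of two
    per-period component metrics. It avoids raw commit counts, which
    developers could game by not rebasing (as noted in the meeting)."""
    return w_prs * merged_prs + w_authors * active_authors

# One velocity value per quarter, from two component metric series.
prs_per_quarter = [40, 55, 30]
authors_per_quarter = [10, 12, 8]
velocity = [development_velocity(p, a)
            for p, a in zip(prs_per_quarter, authors_per_quarter)]
```

Any real implementation would need to justify the chosen components and weights against stakeholder needs, which is exactly the validation question debated earlier in the call.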
K
Yeah, yeah, I guess it does kind of sound like it's going in that direction. But I think a lot of the value is not going to be in the individual metrics themselves. Some of them you might be able to create value statements out of just based on a single metric, but I think a lot of it is going to take combining multiple metrics into some sort of comprehensive, or more inclusive, set of data that actually has more meaning than just, you know, one single number.
B
I second the idea of not only having meta metrics, but groups of meta metrics that can tell you the story about what's happening from a certain point of view. So, for instance, when you look at just, say, a number of commits or something, that is really telling you nothing, whereas when you're looking at things like:
B
how is this behaving over time, which is basically what the panels do, for instance, and you combine that with the composition of your community, for instance, you can find ways of characterizing the project and saying: this is a period which is accelerating, with a growing community, based on two, three, maybe four metrics at the same time. And that may allow you to characterize the period, and to know, let's say, which phase you are dealing with, and then to track that, for instance, in some cases a deceleration of development.
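The combination described above can be sketched as follows. The two chosen metrics (commits for activity, distinct authors for community composition) and the labels are assumptions for illustration, not the speaker's actual panels.

```python
def characterize_periods(quarters):
    """Label each quarter by combining two metrics at once:
    activity trend (commits) and community trend (distinct authors)."""
    labels = []
    for prev, cur in zip(quarters, quarters[1:]):
        activity = "accelerating" if cur["commits"] > prev["commits"] else "decelerating"
        community = "growing" if cur["authors"] > prev["authors"] else "shrinking"
        labels.append(f"{activity} with a {community} community")
    return labels

# Hypothetical per-quarter data for one project.
history = [
    {"commits": 120, "authors": 10},
    {"commits": 180, "authors": 14},
    {"commits": 150, "authors": 12},
]
```

Each label reads off two metrics at the same time, which is the point being made: neither number alone characterizes the phase of the project.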
G
Yeah, I think another point, which he just kind of alluded to, is that I think it's important to look at trends over time, and I'm somewhat reluctant to use composite metrics, because I feel like, I mean, this is my personal bias, it introduces another level of indirection, which can hide details. So one thing is, I think it's important to look at trends over time, and the other thing is, it depends on where you are in terms of your community.
G
It's important to, you know, educate senior managers on how people should view the data, because, I mean, a lot of them are not really close to developer communities, so they tend to want to look at things very simplistically, a couple of sets of numbers, which is always kind of a dangerous thing to do.
G
It's important to use different sets of metrics as a tool to show, you know, the health of the community, or however you want to measure it. But I think you need to look at trends over a longer period, rather than focusing on a snapshot, and also, depending on where you are in the community, maybe you need to look at different sets of metrics, even if it's the same community. So, just, I mean, my two cents.
K
What metrics often do is give you the point to look at, for where you need to do your qualitative analysis. If suddenly you're getting zero commits after two years of getting hundreds of commits, something happened; that's the point that you want to investigate, right? So the point is that you create some sort of statement that actually has meaning.
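A small sketch of that flagging idea: compare each period against a trailing average and mark sharp drops as places to start qualitative investigation. The window size and drop threshold below are arbitrary assumptions, not values from the meeting.

```python
def flag_activity_drops(counts, window=4, drop_ratio=0.2):
    """Return indices of periods whose activity fell below drop_ratio
    times the average of the preceding window; these are the points
    worth investigating qualitatively, as described in the meeting."""
    flagged = []
    for i in range(window, len(counts)):
        trailing_avg = sum(counts[i - window:i]) / window
        if trailing_avg > 0 and counts[i] < drop_ratio * trailing_avg:
            flagged.append(i)
    return flagged
```

The metric doesn't explain the drop; it only tells you where to go and ask people what happened.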
J
Some comments. So I think that we need to be more precise in our nomenclature. So, for example, in the way that you're using velocity, you really are arguing about acceleration, the change of velocity over...
J
But the change of velocity becomes acceleration. Anyway, that's not what I wanted to mention. So I think that what really happens is that every metric that we have measures the traces of the development that we have; it's almost as if we are archaeologists looking at the remnants of development. And so the real challenge is that there is a gap between the metrics and the interpretation of them, and I think that every metric should come with some guidelines for interpretation, some traits of validity.
A
I hope so; I agree. I've been taking copious notes, trying to capture this. Yeah, this is good. So, okay, software people, you're up next, and you're on a tight schedule, to talk about any updates you want to provide for people on software that you're working on. This would include Jesús and Sean again.
B
Thank you. So, from the GrimoireLab point of view, we are moving to Kibana on ElasticSearch; we're having new panels. We are releasing the new community version of everything pretty likely this week, so have a look at it, because we are having new stuff that maybe you can be interested in. Related to that, the Google Summer of Code project is going smoothly, I would say, and this week apprentices and mentors start actually writing the first code to produce reports, studying different approaches and so on.
I
So at least there are a couple of those that are important ones. So, basically, we keep covering more data sources, such as Mattermost and some others, but the two new ones are the community structure analysis. This is based on a kind of old paper that basically says what we have, and this is kind of our analysis: so we have the core, regular, and casual developers.
I
We have the community structure of these across time, so we know, for basically each of the quarters, how many core developers there are, those doing 80% of the activity, or regular developers, doing up to 95 percent, or casual developers, doing the remaining 5%, and how they are evolving over time. [inaudible] Thank you, Matt. So then the idea is that you have all of that information over time, per quarter, and then we have this information, the same community structure analysis, at the level of the organization, so you know the core, regular, and casual developers for each of the companies. And then you have the same analysis at the level of project, so you know the core, regular, and casual developers for each of the projects, and then you can have things like:
I
okay, this developer is core here and regular there, and this developer is basically casual in this project, and then you can have the over-time analysis for that one. And then the other one is the areas of code. The idea is that you can work and play with data at the level of file path, so you can look for, the idea is to look for, areas of knowledge for each of the developers or organizations.
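The core/regular/casual split described above (80%, up to 95%, and the remaining 5% of activity) can be sketched like this. The author names and counts are made up, and "activity" is simplified to commit counts.

```python
def classify_developers(commit_counts):
    """Onion-model split from the meeting: the smallest set of authors
    covering 80% of activity is 'core', the next up to 95% is
    'regular', and the remaining 5% is 'casual'."""
    total = sum(commit_counts.values())
    tiers, cumulative = {}, 0
    # Walk authors from most to least active, tracking cumulative share.
    for author, n in sorted(commit_counts.items(), key=lambda kv: -kv[1]):
        if cumulative < 0.80 * total:
            tiers[author] = "core"
        elif cumulative < 0.95 * total:
            tiers[author] = "regular"
        else:
            tiers[author] = "casual"
        cumulative += n
    return tiers
```

Running the same classification once per quarter, and per organization or per project, gives the over-time and per-group views the speaker describes.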
H
Basically, we're getting ready to release the OpenStack survey, and Mark, when we were first coming up with the new draft, was asking how CHAOSS can be involved. So we had a little bit of an email thread yesterday, with a lot of good points on how you guys can be involved. The only thing I do need to mention to you is that there will be an NDA involved for participating and helping us with the statistics. So as long as CHAOSS is good on that end, I think we're good to go ahead and set up the survey.
H
I'll have to get that from the OpenStack Foundation, but basically it just covers, you know, privacy for the individuals, if there is individual-type information. The responses, for the most part, we're hoping will all be select-a-choice, but we will be giving everyone the opportunity, on most questions, to fill in an "other" text block. And, you know, we do this also with the user survey that we send out, and there might be, well, I doubt it in this particular one.
A
All right, cool, sounds great. Let's see, we have two minutes. I had one item that I would like to put out there: we have a wiki. Does anybody know this? We started with a wiki, I think. Yeah, I thought we moved away from it. No? And we shut it down altogether? So I thought, okay, we said we don't have a wiki. Well, it's still there. Oh, it's alive, or dormant, or, I don't know. I think you can run everything through the webpage.