From YouTube: OpenStack DefCore Criteria Sub Meeting 2 on 2013 Dec 30
Description
Meeting #2 of the OpenStack DefCore Criteria Subcommittee
Notes on https://etherpad.openstack.org/p/DefCoreTestCriteria
Okay, so if you are not on the list of attendees that we've identified on the etherpad — we are definitely going to be using the etherpad heavily for this call, as we usually do — please add your attendee names to the list. The etherpad is in the chat session, and it's also the DefCore test criteria pad if you need it.
So, on the agenda: things get hard because of the holidays and things like that, so if momentum stops, we'll stop. We're definitely not running past two hours, I guarantee that, but if we can, we'll go as long as it's meaningful. For the agenda, we have a discussion of the weighting approach that we sort of got to the conclusion we needed in the last meeting, and then we'll reopen the review discussion of the existing criteria.
Meaning, like, what is there so the criteria work? Well, I think by weight we really mean, you know, it's like a must-have versus optional. Optional means you get some minor points for having that, but it's optional — so we really have a must-have, or mandatory, level.
And I actually think that's where it's going to get really interesting, because there's going to be a small set of tests that are in the gray zone, and where we put that limit is going to determine a lot. But then again — especially in this pass — we're going to have to tweak the weightings to get it right as well.
Correct me if I'm wrong, but I think what we're trying to do is over all of the services, or — as Aleksei said — for a given service. OK, for Nova, right, there will be, let's say, 20 tests, just for argument's sake, and each test will be marked — in even the simplest case — mandatory or optional. You have to pass a certain score for Nova, or for the compute service, and that would be how many of the mandatory tests you passed.
If we were to do it simply that way, right — I'm not sure.
I think you're crossing the core definition with the criteria. The core definition is that Nova as a whole — you have to pass the core tests, right, for Nova, to be OpenStack. But what we're trying to figure out right now is which ones are those tests, the core tests, you know. And then each — right, so that comes down to the criteria.
Yeah, the reason it's confusing is because the criteria are tests for the tests, right. So when I say criteria, I mean the way we evaluate whether a test is considered must-pass or not. The tests are the actual things that OpenStack runs. I know it's really confusing, but I'm trying to be careful in my language to always use the word criteria as "this is the way we select whether or not a test is must-pass" — which makes it confusing, oh well.
As people and organizations start using the system and start running through the tests, we're going to create a whole bunch of data that we're going to want to mine — not only for interest's sake, but also to improve the testing system and to find possible bugs in the testing before we start getting, you know, five hundred thousand complaints. So we should be thinking about how we're going to map this out with some kind of cohesion, yeah.
Do we go through all of this, or — because this sounds like we then have to decide what the scoring is going to be for each one of these criteria before we can actually... At some point we have to get from the criteria to, really, the hard part. What you're saying is this thing, yeah.
We could spend months just getting to that process. I think, with the criteria we have here, if we just want to rank-order them right now, we should do that, and then go through the next step of trying to get sort of a test run-through of the intended process before we totally refine this down.
I'm with you — here's the loop that I'm trying to avoid. I don't want to fight right now about what weight each criterion should get or not. What I wanted to do is spend enough time talking this through so that we know we're going to apply a weighting to the criteria, so that we can then start the process — because what was starting to happen in our last conversation is that people were starting to try to make each of these an acid test.
It became pretty clear that, with the criteria we've got, some are more important than others, and we're going to end up having to deal with that. What I expect is this: we're going to get all that stuff on a spreadsheet or some comparable equivalent, and then we're going to tweak the weighting to see which of these criteria are actually the most important, and use some judgment, and then the argument —
What I'm really hoping the argument gets to be is how much weight each criterion needs, and then we're going to experiment: well, if this one is that high, then this test comes in; if this one is this high, then this other test comes in. And then we can actually start to discuss the relative importance of these criteria based on their actual impact — which tests get in or out as a consequence.
But the logical conclusion of this is that we're going to say each grouping of core projects — each project that's involved in the testing — has a certain amount of relative value compared to the other ones, and you need a sum above a certain amount to be able to say that you passed. If we add another ten projects, then the ten that existed previously are now proportionally less important. Does that make sense?
I mean, that's actually important, and that kind of goes along the lines of what we've been talking about. If you want to keep things less complicated, we want to add the least amount of whatever we're doing — the least amount of change possible — and adding a hundred projects, I think, would be very bad. Each new project, just like each new gold member, we should take very seriously, because there's a certain amount of — not inflation, but relative loss of value of the previous projects, or whatever projects.
I would agree with you about criteria: we don't want to bring in six new criteria every cycle. If we brought in six new criteria, then we're saying, well, okay, we have to make some other criteria less important. But for tests, I'm not as worried about that. If we have ten times the number of tests next cycle, that would be awesome — then, you know, it might be perfectly reasonable to have a matching growth in the number of core tests.
Right — a certain percentage of our population of tests will likely be core tests. But I wouldn't weight it that way; if we're going to look at that at all, I would rather look at it as a ratio. Then again, somebody could skew that just by dumping in a whole bunch of tests — I don't want to see that either.
I expect that we would do that either in the next meeting, or after we've gotten some data to actually apply the criteria to. I think the weighting — I think we're going to be playing with it. I think we're just going to set them all to 1 over n, the number of criteria, then individually go play with our spreadsheets and come back with some suggested weights. That's what I'm expecting: we'll have a meeting where we set the weights.
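The "set them all to 1 over n" starting point described above can be sketched in a few lines. This is a minimal illustration, assuming placeholder criterion names — not the actual DefCore criteria list.

```python
# Hedged sketch of the equal-weight starting point: every criterion begins
# at 1/n before individual experimentation adjusts the weights.
# The criterion names below are illustrative assumptions.
criteria = ["stable", "widely_used", "documented", "discoverable"]

# "Set them all to 1 over n, the number of criteria."
default_weight = 1.0 / len(criteria)
weights = {name: default_weight for name in criteria}

# With four criteria, each weight is 0.25 and the weights sum to 1.
print(weights)
```

From here, each participant could tweak individual entries in their own spreadsheet copy and compare the resulting test rankings.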
So we would go through a cycle of testing this — of looking at this process through the view of when we have actual tests — to see whether... Yes, OK: we end up with these 50 tests, let's say, and we say that's defining the capabilities we really think are important, and now we can assign the weights for the criteria such that that result would come out. Right.
And then the only trick is that we were starting to trip over the "well, if we are going to weight things, then I would phrase it this way, and if we're not, then I would phrase it that way." Since that was becoming a repetitive conversation in the last meeting, I felt like we could just run that to ground right now and say: okay, we will weight them.
...loop in the legal affairs side of this, which was considering the programs as core and defining the providers' core programs — and the projects are really members of a program, so Nova is compute. This is my reasoning; this is not my suggestion yet, although I need to write it up, yeah.
You switched to tests — the tests aren't weighted; it's the criteria that are weighted. So what would happen is that, of all of the body of tests — it'll be easier once we actually have the spreadsheet, which we're supposed to get along with the tests — I'm envisioning a spreadsheet that has each test in it, and the tests are basically going to be grouped by capability, so there might be 10 tests per capability.
If that test meets — not passes — if that test meets six of the ten criteria, then we're going to take the weights for those criteria — assuming 10 each right now — and that test would get a score of 60; that one test would be a 60 in the relative ranking. So in that process we'd be able to score every test based on whether it meets the criteria or not, and then the weighting gives us a ranking.
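The per-test scoring just described can be sketched directly: a test's score is the sum of the weights of the criteria it meets. The flat weight of 10 per criterion mirrors the worked example in the discussion; the criterion names are illustrative assumptions.

```python
# Hedged sketch of per-test scoring: sum the weights of every criterion
# this test satisfies. Names and the flat weight of 10 are assumptions
# matching the worked example, not the real DefCore values.
def score_test(criteria_met, weights):
    """Return the sum of the weights of the criteria the test meets."""
    return sum(weights[name] for name in criteria_met)

# Ten criteria, each weighted 10 for now.
weights = {"criterion_%d" % i: 10 for i in range(10)}

# A test meeting six of the ten criteria scores 60, as in the example.
met = ["criterion_%d" % i for i in range(6)]
print(score_test(met, weights))  # 60
```

Re-running this over every row of the envisioned spreadsheet would produce the relative ranking the speaker mentions.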
That addresses what was happening with each individual criterion in the last meeting, right: we'd say, oh, we have a criterion, and people would go, "well, I think it should be in there, but it's not as important as something else," and we were getting very caught up in the binary nature of the criteria. So you might have a criterion — say, a test that is not very stable.
It hasn't been stable for a whole bunch of releases, and so it doesn't score there, but it meets a whole bunch of other criteria, and people think it should be core even though the test isn't stable. And so that gives us the flexibility to look at how all this stuff works.
Well, I think the bigger danger is — if it's a must-pass test, it can always be dropped down to, you know, some type of weighted average. What would be, I think, challenging is if somebody has passed all the criteria, and now we add another must-pass test which they don't meet — what happens then?
...whether it's version by version or some other way of dealing with it. That's why — you know, I'm neutral on that. I just want to make sure that we don't have surprises for people, or create a situation where the marketplace — the user base or the supplier base — gets surprised by this in a bad way, shall we say.
That's one of the things that I like about doing the weighted criteria: we should see very clearly that this test is near matching the cutoff, and if it had these changes, then it would become acceptable. This should be very quantitative, right? You can say: all right, this test is below the required threshold because it doesn't meet this criterion, and if it did, it would get above the threshold.
We should be able to see, release by release, very clearly which tests are candidates for must-pass status. Part of my goal for this is that we're trying to take the debate out of a lot of this, so that it's not a subjective argument.
It's not "I think this is a required API" versus "I don't" — instead, this test matches these criteria. This is why the weighting is going to be really interesting.
So — it may have been our intention, but I think it makes a lot of sense, as we go forward, to tie changing the criteria and the weighting to new releases, and maybe we actually create a new group — or maybe just the Rally group, or whoever's doing the derivative of the Tempest group — to actually work on this throughout the year. It's going to be...
It wouldn't take much for somebody to take this the wrong way — if there was a mistake made, or an intentional change made, and it dropped some people out or included some people that, you know, others didn't agree with. So it's going to become very political, even though that's not our intention; trademark use becomes very political.
Whatever we do, I think we need to have a transition period, and maybe we implement it, you know, when the next version comes out. What we don't want to do is pull the rug out from underneath anybody, and I think we have to be very clear about what these are and when they're going to become effective, right.
And Mark, we've been doing that in discussion — we need to communicate it to the community. But, you know, the timelines for this: I believe we've been very specific, right? At the Juno summit we're going to have the Havana criteria, which will basically be preliminary, and then that will allow us to collect feedback around Icehouse and beyond.
You know, we basically start this process again on Icehouse, which will actually be materially important, because the Icehouse criteria are really the ones that are going to be official. So there'll be a whole cycle, and then the expectation is that for the Juno release we would be able to have the Juno material — sorry.
The hallways are challenged. Yes, I'm planning to start — I'm coordinating with Lauren about this, so Jonathan and I will have some components and videos and materials about how this goes. We're just going to have to take the tests and show people what the criteria are, and that's when people are going to pay attention, all right? These individual criteria aren't that interesting until you have a test that isn't considered must-pass because it doesn't reflect a future technical direction.
Okay, so from an agenda perspective, the weighting approach had 15 minutes and we're way over that — that's fine. I was hoping we would now go over the criteria that we've got. I would actually suggest that we hold the contentious items until we have a bigger quorum, and spend the next time looking at additional criteria.
I definitely have at least 11 more to add. As a review, what we have is: criteria that are consensus — everybody thinks they're right; ones that we said weren't right; and then ones that we know we're going to need a bigger discussion on. So the first one we have is: a test is stable for at least two — the test is required to be stable for at least two releases.
Yeah — because if we make these criteria very terse, we have to have other people, who haven't been on these long calls, you know, understand what we're talking about. My canonical example is the fourth one, the criterion about candidates being widely used capabilities. We're trying to get that wording right — something like: the capability is included, or is expected to be included, in widely used cloud computing platforms and toolsets, and relied upon by application developers, you know.
...a thing in a cloud — create instance, you know. If we don't have a create instance, I don't know that we have a cloud, right? So capabilities seem to be a good way to say that, and then we have tests: tests are the things that you can actually run to verify that capability.
I'm just wondering, thinking this through to the implementation: say I have six tests — five of them that meet partial criteria, excuse me, partial criteria, and one that meets all of them. If I passed the five that meet partial criteria, such that they add up — say there's ten criteria, and the sum of all of them means I covered all ten — but I can't pass the one that has all ten in one test...
I would think that if a test meets eight of the ten criteria, all right, then it's going to get a weight — it's going to be, say, 75 — and then we're going to have a separate threshold that says any test that has a score of 75 or above is considered must-pass.
Okay, all right.
No, there's no weighting to the tests themselves — you don't end up getting a score where you're seventy-five percent core or sixty-five percent. You either pass the must-pass tests or you don't. The criteria-and-weighting process is only to help us determine, out of all the tests available to us, which ones make the must-pass qualification. So for a test — as I described, if you add up all the criteria and the weighting pieces and come up and say, well, that's 75, then yeah, that's likely to be a must-pass test.
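The distinction just made — the weighted criteria never grade a cloud, they only select which tests qualify as must-pass — can be sketched as a simple cutoff. Test names, scores, and the threshold of 75 here are illustrative assumptions matching the discussion, not real DefCore values.

```python
# Hedged sketch of the must-pass cutoff: scoring applies to tests, not to
# clouds. A cloud then either passes the selected must-pass tests or not.
def must_pass_tests(test_scores, threshold=75):
    """Return the tests whose criteria score meets or exceeds the cutoff."""
    return sorted(name for name, score in test_scores.items()
                  if score >= threshold)

# Illustrative criteria scores for three hypothetical tests.
scores = {"create_instance": 90, "attach_volume": 75, "exotic_extension": 40}
print(must_pass_tests(scores))  # ['attach_volume', 'create_instance']
```

Raising or lowering the threshold is exactly the "tweak the weighting" experiment described earlier: it moves borderline tests in or out of the must-pass set.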
I think so. I think it's confusing now because it's very abstract type of stuff. We just need enough of this done so that we can then actually do it with a test run — like building the code — and then, once we've done that, it'll be much more obvious what we need to tweak. But yeah.
I actually would like to get the criteria really simple, straightforward, easily understood. So here's my straw man — is anybody able to see it? It's in purple on the etherpad: "blue, simple criteria."
So the first one tries to cover the whole issue — all of the issues around "it's widely used in many cloud platforms, kind of expected to be there, relied upon by application developers." You know: create instance, store this block, use a key value — those kinds of things. The second has to do with quality: the capability is well documented, it's stable across multiple releases, and it's consistent with the future technical direction. And then the third...
We were just passing the time — I was actually trying to move us along. I think, like you're saying, we want to sort of test whether this stuff actually makes any sense, a little bit, great. So I tried to just put a straw man here, about half a page down from where I think the end-user criteria are, in blue: simple criteria. We get to three, maybe four, but I'm trying to group a lot of the things I've read in the other ones so that we can move.
Okay, although there are gaps: for Icehouse we're going to have a whole bunch of new tests that we want to be core, but it could be that they won't make it because of that rule about the code having an extension framework. So this might be something that we could do more broadly; I was thinking about this.
On the second part of that: there should be parity — I guessed this before, but I still wasn't that clear — there should be parity in capability. If we're saying it's desirable that — because if we're talking about the test of the capability, or that this capability should be built on an extendable framework, then I would agree with Ram, that's...
The whole reason why — at least in my understanding — we have extensions is because there isn't necessarily agreement that everybody should have this, right. And so we expected a normal process: at some point we expect floating IPs to become part of the core API, and then it has to have the same semantics and everything, no matter what. But at...
On extensions: there is a requirement that every extension framework have an OpenStack variant that is considered core, right. The capabilities expressed by extensions could be core capabilities — as a matter of fact, they probably are. Going back to our Nova example: you couldn't have a Nova create-instance test without hitting an extension framework, because the virtualization layer is an extension — it's part of the extension framework.
...the right thing to make sure we don't confuse. Well, in the core principles model they're actually intentionally confused — I hate to say it, I'd hate to put those two words together — but what we did was say that we called those "designated areas" instead of extensions. So what we could do is use that phrase: "designated area of extension," or something like that. Okay.
And so this is the balance, right — this is where we're trying to create these balances. You could say, "I want to down-weight something that's an API extension" — perfectly cool, it will fail this criterion, although I'm not sure that's necessary. And then you could say, "well, but we want something that's in common use," which is down in number four, and therefore...
On something: if we start calling certain extensions part of core and critical, and the TC does not — I'm not sure that's where we want to be. I totally agree with putting heavy weight on things, and weighting different extensions — API extensions, or however we want to define it — differently. But does that make any sense?
I think you guys are jumping ahead — I didn't feel like it, I just — okay, well, let's keep going, and let me explain why I make that comment. There are different things that govern these criteria that make sure the TC isn't overruled, or that the TC has a view; but part of what we're trying to do with this is to make the ruling — the judgment — on some of these balanced, and give users a right to say, "hey, this is really important to me."
The criteria are there to even out the influence of different groups, and there are specific principles here that do allow the TC to throw flags. But we very specifically said we will not have one group that can throw a veto flag on a criterion or a test — that's actually in the rejected criteria. We don't want the TC to be able to say, "I think this shouldn't be in core," over the users' objections. And so it could be that they're out of sync, and we want —
The purpose, to me, of this number two item was to say — now we're back to the individual criterion's perspective — that a test that does not have an extensible backing, or isn't part of an extensible capability, is not as preferred. And this actually limits us to say: hey, look, we want to make sure that the code that's tested can have alternate implementations. That's all we're saying.
As stated there, I like that. What you're talking about is where we have plugins for different hypervisors and things like that, whereas I was interpreting this as API extensions, which I think are actually excluded in this. But you're talking about alternate implementations, which I do think is important: we want to have alternative implementations, we think that's healthy.
Monty didn't tweak this wording, but I think people understand it: candidates are widely used capabilities. And then we subdivided this into "used in public clouds and products," "supported by common tools," and "part of common libraries." So those might actually end up being individual pieces.
That would be part of what I'm thinking — I don't know where the wording on this lands; we're going to have to figure out how to do number four over some time, but I think it's an important criterion. Obviously: tests capabilities that are required by other must-pass tests. This one, I think, we ended up adding because of the variants of the Shuttleworth test.
But once again, it's possible that we could actually say "has extensive documentation," "has minimal documentation" — we could do degrees on that. Let's see. And then what we added this time was: a test that is a must-pass test should stay a must-pass test. This one — Mark, I really like this idea; I don't think this is exactly what you intended.
This is the Mark Shuttleworth item that's going to need some more discussion. And this, to me, becomes one of the things that could come in as a criterion with weighting. Let me read it for people who are looking at the screen: "candidates favor capabilities that users cannot implement..."
"...given the presence of other capabilities." Which means that if I have a capability that I could implement using other must-pass capabilities, then we would score it down. So a capability like Keystone, where there is no alternate way to implement it — you have to have Keystone, there's no substitute — would be scored higher than, say, something like Heat, which you could implement on top of an OpenStack cloud without needing a fundamental capability.
A
Yes,
and
and
Lou,
that's
exactly
why
people
like
this
criteria,
because
what
it
does
is
it
keeps
score
very
small
and,
and
so
the
the
argument
would
be
unless
there's
a
lot
of
cake
of
need.
This
capability
is,
if
it's
widely
used,
bring
it
in,
but
if
it,
if
it
is
not
widely
used
and
could
be
implemented
on
top
of
an
existing
openstack
cloud,
then
we
wouldn't
that
would
down
weighed
it
and
waiting
changes.
This
whole
conversation,
because
all
of
a
sudden
we
can
say
you
know
what
this
this
line.
Okay, so I'm going to actually make this the actual text, and then we'll do it like that. Okay, that's very good, and I think that, with the weighting change — I don't want to do it in this meeting, but I think that's correct — this could actually become a must-pass, a consensus criterion at that point.
This is actually — this is why it's non-consensus. One of the changes I've been driving on RefStack is that, instead of RefStack being a third-party test — meaning that it's run against an existing cloud somewhere by an external party — we're altering RefStack to allow you to run your own version of the tests against your own cloud and upload the result. Let's use Rackspace as our case in point — if you were Rackspace, or HP if you want, but let's stick with Rackspace.
So if you're Rackspace: Rackspace could run RefStack, run the tests, upload their results, and say, "this is the official Rackspace result for the test suite." They have administrative access, so they can do that. They might have a hundred customers who run the same battery of tests but can't pass all of them, because they don't have administrative access; but then Rackspace runs it against their own cloud, where they do have administrative access, and they pass a broader set of tests.
Yeah, I mean, we're either choosing to include in the definition of OpenStack a certain amount of administrative functionality — functionality for administrators — or not. If we're simply saying "no" at this point: at least let's not worry about the administrative functions; let's get the application-developer, the user, capabilities defined, right. You know.
And so the idea here is that that's going to end up being a weighted criterion, and it could be that we say: you know what, this test looks awesome in every other way except that it requires administrative access, but we still want it. And so this would let us — that test might then score 90 out of 100, and that would be an indication that we probably want that test to be included.
Well, with must-pass and not-must-pass there are already two classes of tests. This is just one of the criteria that we would consider — which is what it's coming back to, right? We have a preference for tests that don't require administrative rights, but it's not a disqualification. That's the way I would look at it.