From YouTube: DefCore OpenStack Meetup SFO 2014-04-08
Description
Hosted by Sean Roberts and remotely attended by Rob Hirschfeld and Troy Toman.
Notes & Links on https://etherpad.openstack.org/p/DefCore.E-CommunicationPlan
A
So what we're getting very close to now is actually which capabilities of OpenStack we test against, and how those tests are scored. Rob Hirschfeld and Troy Toman are also involved in this work, and they're joining us remotely tonight. I did a little bit of the work in the background and they'll be doing a lot more, so I'll let these two do most of the talking.
D
Everybody, thank you for participating. I'm going to happily make sure you guys know that you're guinea pigs, because this is a concept that we've been spending a lot of time on at the board level and with the Technical Committee. But it's one that not very many people follow closely, and we want to make sure that we can explain what's important, so we're asking for your help to sort of help sift through.
D
You know, everything that we're presenting is tons of information, so we want to be able to explain this to the community in a way that they understand why it's important and how they can contribute. So with that, and I don't have a sense for how much people know or don't, I can give you the five-minute overview.
D
Whenever we've tried to answer too many big questions at once, we've run aground, and so what we did very intentionally was step backwards to sort of first principles and smaller bites. A lot of what we're doing is working out, stepwise, what we can agree to and what the community is ready for, and then moving it forwards. It's one of those classic situations: the faster we push the process, the longer it takes. So we're very careful about the pace, and we're actually just hitting really critical milestones.
D
So what we started with was these ten principles, which are really a flowchart that Sean has up in front of you. It starts at the top with these general statements about what core is and what we're trying to achieve. What we're really trying to do is come up with a definition of OpenStack core that applies to commercial use. So when somebody is selling a product that says "I am OpenStack," we need to be able to tell them:
D
Yes, you can say that. We're not trying to control whether or not somebody says "I'm OpenStack Nova" in casual use of the stock code; people can do that. It's when it becomes a commercial prospect that it gets much stickier and the OpenStack brand becomes much more important. And so what we wanted to do initially was to say we want a use of the OpenStack brand that applies to every use. We decided not to take on the idea that there might be a private cloud mark or a public cloud mark.
D
Or an animals-use-only mark, I don't know. And so we're trying to create a definition of OpenStack that is universal. The first couple of principles apply to that, and there's text that backs this document: there's a whole page on the OpenStack wiki that describes these principles in detail, and sometimes we go way down into the weeds and talk about specific items very productively. And then the next section of this talks about the code itself and what code we can include.
D
What code is included or not. There's been a lot of talk about this section; what's in there is called designated sections. So the idea here is that OpenStack is Apache licensed: you can do pretty much whatever you want with that code. But if you copy it, you can't necessarily say it's OpenStack, and one of the things that the community felt was very important was that if you're going to say you're using OpenStack code, or call it OpenStack, you must be using certain parts of the code.
D
And so that's what those sections say. But because we want to be vendor and ecosystem friendly, it doesn't make sense to require all of the code. There are places that we know are for plugins or extensions or changes, or places where somebody has an enhancement, and we want to say there are parts that you can change and parts that you can't. A big part of that is because we want to encourage upstreaming as a core value in OpenStack.
A
Can I take any questions? So that's this section; it's the one people have had the most discussion about: why we're doing it this way. Do you guys have any questions before we move on? Anybody? Like, why we're designating which code needs to be in there and why we're not just supporting it? Can you guys hear it yet? So why wasn't it [inaudible]? Okay, thank you. Part of the reason is that we don't want OpenStack shell implementations; [remainder of answer inaudible].
D
Anything to add to that, Rob? It's actually one of the topic areas where, if we need to, we dig into it. It's one of those things that we have on the list as a complex topic, and it's one where the TC is very actively engaged, because we actually have a request: the TC actually defines what the designated sections are. And actually there's a question, there's a topic on the etherpad, which I don't think is up.
D
We've pulled it up, okay, yep. So this is one of the areas that we want help figuring out: how much time we should spend on it or not. This is part of the guinea-pig aspect of this meeting. If people feel like designated sections are hard to understand and we need to spend more time explaining it, then we will. So let's keep going, and I want to do enough background that people can say "this doesn't make any sense" and we'll spend more time there.
D
Okay. But without a doubt, this is a place where there's very active participation with the TC. I feel like lately we've been making a lot of progress, and I put a link up to the designated-sections future path. So once you understand that we're talking about code, that some of it is designated and some of it is vendor-replaceable, the question becomes: well, how do we know that it's actually working code?
D
How do we know that it interoperates with other OpenStack clouds? We made the decision in this process not to just pick parts of the project and say these are core and these are not. What we did was go back to the tests. This next section of the principles goes through the idea that we're going to use tests to define the code, to define what is core. So we have a lot of tests. We actually have a lot of tests.
D
I think people will find, as we work through it, that we don't have enough tests; I don't think we'll ever have enough tests. But we're going to take the tests that we have, and we're going to say some of them are required, must-pass, and if you pass those must-pass tests, then you can say that you are OpenStack. And we're not planning to create a police state with a whole bunch of certification bodies; it's a self-certifying activity.
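The must-pass rule described here is simple enough to sketch in a few lines. This is a hypothetical illustration of the idea, not the actual DefCore tooling; the test names, the `MUST_PASS` set, and the function are all made up for the example.

```python
# Hypothetical sketch of the self-certification rule described above:
# a cloud may claim the mark only if every must-pass test succeeds.
# Test names and results here are illustrative, not real DefCore data.

MUST_PASS = {"compute.servers.create", "compute.servers.list", "identity.tokens.issue"}

def may_claim_mark(results: dict) -> bool:
    """results maps test name -> True/False (pass/fail). A missing test counts as a fail."""
    return all(results.get(test, False) for test in MUST_PASS)

my_cloud = {
    "compute.servers.create": True,
    "compute.servers.list": True,
    "identity.tokens.issue": True,
    "volume.snapshots.create": False,  # not must-pass, so failing it is fine
}
print(may_claim_mark(my_cloud))  # True
```

Note that the non-required test failing does not affect the result, which matches the self-certification idea: only the must-pass set gates the claim.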
D
If you pass the Tempest tests that we've said are required, then you can say that you are an OpenStack cloud, implementation, product, whatever. That's the gist of the core principles, and that was really what we did in the last cycle leading up to the Hong Kong summit. Since the Hong Kong summit, we've been incredibly busy actually putting those principles into place. So Troy did a really good...
D
...first pass on these 1200+ Tempest tests, turning them into the subgroups we call capabilities. Then we took those capabilities and actually started scoring them, because it's too hard to do this across all the tests individually. So we trimmed it down to about 70 capabilities, and there's a capabilities matrix, which is one of the things that we want to get to talking about. From those capabilities, we need to score them, and we made the decision to use a 100-point scale that we may or may not weight.
D
It adds up to a 100-point scale, so at some to-be-determined cutoff line, we will say these capabilities are must-have, must-pass capabilities and these ones aren't. So instead of breaking it down by project, and this is usually one of the big things people need to understand, we actually have it by capability. So Nova is not all in; it's partially in, right?
D
There are parts of Nova that we consider must-pass capabilities, that are core, and there are some features in Nova that are still new enough, or not widely adopted enough, or experimental enough, that they aren't required. So, do you have to have cells to be an OpenStack cloud? That's a very real question, right? Nova, everybody agrees, is the core project; it's been defined that way. But yet not every capability in Nova is a required capability.
D
So we found that when the community gets into a conversation about, you know, two people with a view about something, they will stand in the room and argue about it all day long, which is expected behavior. So what we want to be able to do is give ourselves a yardstick, and we spent actually a fair bit of time working through selection criteria for each capability. We came up with 13 of these criteria.
D
Now you can see why there's a lot of information. I have a blog post about what the actual capabilities are; I think it's linked off of this document. And if you hover over the capabilities, you'll see the definitions. Every one of these criteria is given a yes-or-no weight; sometimes, if we're indecisive, we give it a half. And then we add up those 13 criteria.
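The scoring just described, 13 criteria each voted yes (1), no (0), or undecided (0.5), then summed per capability, can be sketched as follows. The criterion and capability names here are illustrative placeholders, not the real DefCore matrix columns, and only four of the 13 criteria are shown.

```python
# Hypothetical sketch of the capability scoring described above: each selection
# criterion gets a yes (1), no (0), or undecided (0.5) vote, and the votes are
# summed to rank the capability. Names are made up for illustration.

CRITERIA = ["widely_deployed", "used_by_tools", "stable_api", "tc_future_direction"]  # 13 in the real matrix

def score_capability(votes: dict) -> float:
    """votes maps criterion name -> 1, 0, or 0.5; missing criteria count as 0."""
    return sum(votes.get(c, 0) for c in CRITERIA)

compute_boot = {"widely_deployed": 1, "used_by_tools": 1, "stable_api": 1, "tc_future_direction": 0.5}
print(score_capability(compute_boot))  # 3.5
```

With all 13 criteria in play, the maximum raw score would be 13, which could then be normalized onto the 100-point scale mentioned earlier.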
D
That's an excellent question, and there's a couple of caveats with that. Right now we don't have any cutoffs. What we're doing is this score distribution matrix, and if you click over on the score distribution matrix side page, you'll see that there's actually a pretty clear yes, no, and maybe set of categories. So for the cutoff, just looking at the histogram is pretty obvious.
F
And I think right now, I mean, we've kept all the weighting consistent, mostly because we're still trying to get a view of what the data tells us and whether it's useful or not. And so I think what you're seeing is still very much a work in progress. It sets up the framework for the decision-making process, but, you know, playing with those weights and understanding them, and making sure we have, quite honestly, a clear understanding of the unintended consequences as much as the headlines, matters when you start playing around with this data.
F
So we've kept it very simple up front, where everything's weighted the same, and we're using some fairly straightforward cutoff mechanisms. I think that's going to do a good job of sort of weeding out the obvious things that don't make the cut and the things that are obviously above the cut, and then I think we're going to have to spend our time really looking at those middle-ground cases and understanding, you know, how to make the matrix work for those.
F
I think the other out that we've left ourselves as a board is that we're not going to be beholden to the spreadsheet exclusively, meaning that something could conceivably come up that, you know, hits the score mark, but everybody who looks at it says it doesn't pass the sniff test as the right thing for core. We, as a board, have the right to not choose that.
F
So there's a lot of work to do between now and when we get there, and I think that's where we really want to start getting more people inputting to this. And, Rob will cover this, there's a whole set of pieces where people can begin to run these tests and solicit feedback and, you know, start to voice where this is working and where it's not, so that it's not just a handful of us, you know, on a Google Doc working all this out.
D
That's the crowdsourcing aspect, an important part, and I do want to get to that in a minute. The other thing that's important is we're going to iterate. So we're doing Havana first, because it's sort of old news, and then we're going to get reactions from that; we're going to do Icehouse, hopefully 90 days after the summit release, and hopefully at the Juno summit...
D
Sorry, with the Juno release: in Paris we'll actually have the Juno scores simultaneous with the release, so we're giving people time to sort of go through and figure this out. We also want to err on the side of not requiring tests, right? We think it's going to be much harder to remove a must-pass test than it is to add one. A lot of times you'll hear us describe this as "a smaller core is preferable."
F
I think a lot of what we want to try to do is make sure that core is stable and dependable over a period of time, meaning we don't want to put something in saying, okay, you can depend on this to be there in OpenStack for the next six months, and then it's all going to change six months from now. The goal is to avoid:
F
"You know, it was core one version ago, but it's not core this version." It becomes more of something that we build on over time and have more consistency around. As a principle, I think we all agree on that. I think how that gets applied, and some of the selection, will be an interesting thing for us to look at in the next few months.
F
Good question. It's a yes/no answer in most cases; the 0.5s are kind of "further discussion needed" or unclear. But again, that was our pass to get through and sort of get the easy ones out: the zeros and ones that we knew either yes or no to. And I think eventually we'd like to have most of these be zeros and ones, so that the 0.5s are really ones where, for instance, on the "TC future direction" criterion, we really want the TC to weigh in on the strategic importance of those.
F
We didn't always know that we had the answer there, so those were kind of left as holding places for us to come back to. But it's pretty simple: there's a lot of information in the graph, but there's not a lot of complex computation behind it right now. Every column is weighted equally (I guess there's a 17 up there versus all 8s, but it's basically weighted equally across), and it's either a yes or a no in each box, and that's just added up to give us the ranking.
D
One of the things, if you scroll down to the bottom of this chart: I actually threw in a percentage. These percentages might not make sense on the surface. What they are is the percentage of ones and zeros, so it's an indication of how effectively we're scoring these things. I wanted to try and see: is there a column where we always give a 1, or always give a 0?
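That "percentage of ones and zeros" check, how many votes in a criterion column are decisive rather than an undecided 0.5, can be sketched like this. The data is made up for illustration; it is not taken from the real scoring spreadsheet.

```python
# Hypothetical sketch of the "percentage of ones and zeros" indicator described
# above: for each criterion column, the fraction of scores that are decisive
# (exactly 0 or 1) rather than an undecided 0.5. Sample data is illustrative.

def decisive_fraction(column: list) -> float:
    """Fraction of entries in a criterion column that are exactly 0 or 1."""
    decisive = sum(1 for v in column if v in (0, 1))
    return decisive / len(column)

doc_scores = [1, 1, 0.5, 0, 1, 0.5]  # two undecided entries out of six
print(round(decisive_fraction(doc_scores), 2))  # 0.67
```

A column stuck near 100% ones (or zeros) would be exactly the "we always give a 1" case the speaker wants to spot, since such a column adds no discrimination between capabilities.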
D
So ideally, every column would be fifty percent, and we'd be saying yes to half and no to half. For the most part, these groupings show that we're actually having a lot of diversity in scoring, but you can sort of see it visually when you look at the grid. And the challenge with changing the weights is that for anything we make more important, we have to make something else less important. We might need to do that, but, you know, somebody suggested, well...
D
Maybe we don't need docs to be as important. But if we score it down, then it's going to discourage community behavior. This is one of the unintended consequences: there are components in these criteria that drive community behavior, like documentation or discoverable APIs or things like that. So we want to use these criteria to encourage behavior; it's part of the whole goal.
D
Actually, that acted as part of our overall principles: to help drive what we think are sustainable behaviors for the community, right? Upstreaming code, because it's part of the designated sections, or writing more tests, right? We hope that this whole process encourages the ecosystem, vendors and customers and people who want to rely on OpenStack, to write tests to protect their interests.
D
To move back to what I just outlined: I'd love to cover it if there are more questions, but what I'm thinking would make sense is we cover all of the core process pieces. There are some more implementation details in collecting the data and posting the data and things like that, but I would rather sort of walk through the process questions before we get to that.
F
Did I hear the point correctly? It was looking particularly at some of the Nova APIs, where you've got proxies for services that maybe used to be in Nova and are gradually being broken out, and sort of how those get treated with the idea of a long-lasting core. And, you know, I think that's a great point where we've had a lot of discussion, certainly as we've looked at these and talked through them. That's been one of the areas; in fact, it's one of the drivers around the "TC future direction" criterion.
F
One aspect in particular was, you know, things that exist now that may be widely used, but are clearly, directionally, going to be replaced by other services. I was sitting down trying to remember; I know in the last discussion we had a really interesting example where we were going through, for instance, some of the images capabilities. Well, I'll even talk about networks in particular, where we look at something like floating IPs, where you say: okay, well, floating IPs clearly has a longer-term direction as being part of the Neutron API.
F
But these other things really shouldn't be part of the core, and so I don't know that we have answers to all of those yet. I think networks in particular, where Neutron has been a little slower to mature than we had all hoped, is going to be a tougher one, because there is so much use of that. But I tend to believe that we need to stick to this idea that core needs to be stable. Those network...
F
...APIs that we really don't think are going to last more than six months to a year inside of the Nova context probably shouldn't be core, even if that leaves core a little lighter than we'd like it to be. I tend to favor that, but, you know, I don't know if Sean or Rob have opinions. As for other board members, I know there's a variety of opinions on that around the group, and we'd love to get input from outside of our own group.
D
Yeah, right, it's an interesting dilemma, because I think our goal is interoperability over time; we're trying to drive towards interoperability. And I think Troy has been doing an amazing job with the capabilities being very much a proxy for use, which is one of the hardest things for us to determine. And it's really important that we don't start creating expectations that an OpenStack cloud has a capability that we're expecting to be phased out. The consequence is that that slows interoperability, because at that point there's fewer things...
A
...they feel like they have to support. Well, so I'm not sure if you guys were able to hear that. Marc was thinking through that if we include too many things in core, then it's going to make it harder on distributions, having to support those for longer periods, and it's going to draw out adding features to replace and deprecate those old features that are already in core. Totally agree with that.
D
One of the things to remember is that this pass is supposed to find the minimum set, the smallest part of the Venn diagram, and there are likely to be specialized marks for different ecosystems or different uses of OpenStack. So, like, to ensure public cloud interoperability, there might be a broader set of tests that get added on to the core list, or for private cloud there might be things...
A
And there hasn't been a decision on exactly where these tests would end up, you know, which mark these tests would actually be attached to, or what it would be called. We're sort of leaving that up to the foundation employees; that's really their responsibility. We're trying to come up with what we think is a good baseline to start from, and then, for how we branch off from that, they'll come up with, like, the process of how to tweak the brand.
D
I do expect it will help answer the question of what parts of Nova I have to show. Right now it's "a hundred percent of some Nova," and it's still not clear what that means. I'm hoping this will give a lot more clarity, as we'll say: all right, if you pass these tests, then you know this is what you're expecting. Right? Well.
A
And the point can't be made too often that this is only attached to the brand, to being able to carry the mark, the OpenStack mark that this gets attached to. It doesn't mean that you can't actually still deploy and use OpenStack however you want. It just means, if you don't pass the tests, you won't be able to use the mark, right?
D
There's a "Powered By" mark; there's a couple of different marks, and they actually are in pretty constant flux. That's maybe overstating it, but it feels like it, because there are times when we have, like, hardware marks. Since people are inventing new ways to use OpenStack, the foundation responds to that, sure.
A
For appliance vendors, somebody's appliance would interoperate with that; it could be used for other things as well. So it's very likely, I can't guarantee it, but it's very likely, that we'll have something like that in the future. It would just be tests behind this. The smart part of it is: saying "I use Nova" really doesn't mean much to anybody unless you say what Nova is defined as, but this will be getting to the point where we start defining.
A
So the idea of RefStack would be that, and it's evolved to the point where we're talking about using Docker agents that would be self-contained and that you could run privately or publicly, and the results would be validated, sort of certified as being good, and the information from running the agent would be pushed up to RefStack. And that would be the place where everyone can go to say: hey, you know, you put the flag up, or two checks, and you subscribe.
D
And this is an important part of the quantitative results. What we really want to do is have people, community people who actually are using the cloud, identify capabilities that they rely on and say "I rely on this," and the tests are a proxy to that. We're not assuming that somebody who doesn't use Heat will necessarily install it and try to pass a test, although they might very well install it and pass the tests. So we have both a: hey...
D
Just like the user committee does, we have to respect people's privacy about the results. We want people to be able to share their results or not; it's up to them. And there are companies that are going to want to be able to have certified results or not, right? Because you could run these tests against Rackspace's cloud or HP's cloud or DreamHost's cloud, post the results, and say "this is my result for Rackspace," and they might disagree with those results. So that's part of what we have to do.
A
There would be a certain amount of weight of support needed to make that work long-term for any company. I mean, you could probably do that for a release or two without it causing too many problems, but it becomes very difficult to keep a fork like that up to date, passing these tests, and to keep the tooling current. It's going to be very hard, so I think that's not the way to try and do something like that.
D
There are active discussions around, like, what happens if somebody rewrites OpenStack in Java, say, and passes all the tests, but it's not upstream OpenStack. Those are very real issues that we're going to have to face, and in some ways we're enabling, we're facilitating, those conversations in the future by the actions that we're taking.
F
I think the other piece of that is, you know, the thing we need to get right first is to make sure that people who are using OpenStack at least are interoperable and have some consistency around it. And, you know, while it's easy to sort of think about all the worst-case scenarios and things that can happen around the edges, I think we can...
F
...stay focused, right? Sure. We got the question from Craig that came up on the chat around our requirements being tied to releases: it seems like you have to, and what does that mean for someone who is Icehouse-interoperable and not Juno? Is there a grace period? So, Rob, that probably gets at it; we haven't talked a lot about the process, I guess, outside of how we got to this first pass. But clearly the goal is to have an identified core associated with each release, each integrated release of the product stack. I mean, as Rob said, we started with Havana.
F
We have to actually catch up to Icehouse, at about 90 days after the summit, and then hopefully, by the time Juno comes out and we hit the summit in Paris, we'll have caught up and have a defined core that's applicable to Juno. And from that point, then, we're staying consistent, where there's a six-month update to core coincident with the six-month integrated release process. I think, obviously, we recognize there's a grace period involved in this process.
F
Doing enforcement the first day, where we've broken the process, is really not the goal that we have. So I think, at a minimum, we're kind of looking at having some enforcement beginning towards the end of the year. Exactly how that lines up with the data, whether it will line up directly with the Havana version, I don't know, just because I think Havana is our first draft and it's likely to be full of holes. There may be some modification, I mean, you know.
F
And I doubt we will come out and say, okay, everything that's new in Juno, that just released, we're also enforcing on day one, either. I don't know; Rob, you've probably talked even more about this than I have in the discussions, and I'd love for you to add your thoughts around that. But a grace period, definitely; exactly how long and how binding, I think those are some things we have to sort out.
D
So I look at our Havana list as this sort of first test, where people are going to be able to benchmark against it, and we're going to get a new level of dialogue around this whole process after people start measuring their product or their service against this list. And so I expect there to be a lot of pushback. We expect there to be... I don't know about pushback...
D
I think there's going to be a lot more discussion, or an elevated sense of what's going to happen, which is why we've been so careful to be quantitative in our analysis. Because it becomes very hard if a vendor walks up and says, "this one feature is required; I need it to be in business," and we need to be able to go back and point to how we made that decision.
A
A shot across the bow, if you will. And one of the points that we can't make too often is that just because one feature is really important to a vendor and it's not part of core doesn't mean that they can't carry the brand. It just means that if their entire product only uses that feature, then likely they can't carry the brand; but if they carry the required core features, plus that feature on the side, they can still carry the brand.
D
The TCUP and RefStack piece: we haven't talked about TCUP yet, but the idea of crowdsourcing test results and having them reported means, let me use the Hyper-V example. So let's say the Hyper-V team writes a set of tests that only pass if you use Hyper-V. They're never going to pass the gate, right? But they're valuable tests for the community of users who are invested in Hyper-V capability, and by including them in the crowdsourced data collection...
D
...we could actually start getting real data on how many clouds out there are passing the Hyper-V test suite, and that would allow us to make much more informed decisions about the community in general. So we're driving this process to get the must-pass data, but it can have much more profound impacts on data collection, on what capabilities, down to specific choices of plugins, are important in the community. And to me, that's actually one of the huge benefits to doing this. So, one more thing before we move on...
A
As we start developing marketplaces for public providers and service providers, and that starts expanding into other areas of OpenStack companies and affiliated companies, then, I'm not building it myself, but I think it's highly likely that that will link back to the RefStack capabilities of that public cloud or that service provider. So then the companies in the marketplace very likely will talk about the capabilities and, in a broad sense, regions, size of clusters.
D
Yeah, so what I heard was: you were talking about the benefits of having a broader data collection, even down to covering some of the things the user committee does. And very much: we've been talking to the user committee a bit, and we're hoping that some of the data we start collecting automatically with RefStack can lessen the burden on the user committee for what they're trying to test, so they can actually switch focus, because they only have a limited amount of attention for what they collect.
D
So if they spend a lot of their time collecting "what did you install," then there are other quantitative and qualitative things they don't get to cover, and so we're hoping to be able to lessen their data-collection burden and free them up from that a little bit. There was some talk a while ago about actually embedding data collection into OpenStack, which most people didn't like very much, and we're actually accomplishing that, but in a much more voluntary way. And that brings up the Rally project and RefStack a little bit.
D
I wanted to add something to what Sean was saying about RefStack. The idea with RefStack is, it's like talking to ten companies that have all implemented the same thing: RefStack is really a place for you to collect your Tempest results and show them in a UI so that you can compare.
D
We have sort of installation-to-installation comparison of how you're doing. A lot of people have implemented this in a lot of different ways, and we're actually making a community portal. There'll be a community portal version that's maintained by OpenStack, but you could run the same code internally if you just wanted to QA your own OpenStack cloud, collect the results yourself, and then post the data back up to RefStack and share it.
D
If you're tracking the Rally project, it has a relatively similar goal; it's using Tempest as a way to do continuous delivery and performance benchmarks. So we're working with them to have some synergy around those things; they've got slightly different objectives. And then there's this other concept called TCUP, which is Tempest running in a Docker container. And so the idea is, we want to lessen or eliminate the burden of setting up Tempest to do a test of your own cloud.
D
So the idea with TCUP is that anybody can test their own cloud and then upload and share the results, even if they've never set up Tempest, even if they're not the site admin. You know, they can just create the container, point it at their cloud using their own credentials, and then, if they want to, share the data back to RefStack. So we're really trying to cast an incredibly broad net to collect this data back into RefStack.
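The TCUP idea, pointing a prebuilt Tempest container at your cloud with nothing but your credentials, can be sketched like this. The image name is hypothetical; only the OS_* variables follow the standard OpenStack credential conventions.

```python
# Sketch of the TCUP concept: package Tempest in a container so a user
# only supplies cloud credentials. The "tcup/tempest" image name is
# hypothetical; OS_* are the usual OpenStack credential variables.

def tcup_command(auth_url, username, password, image="tcup/tempest"):
    """Build the `docker run` invocation for testing a cloud."""
    creds = {
        "OS_AUTH_URL": auth_url,
        "OS_USERNAME": username,
        "OS_PASSWORD": password,
    }
    cmd = ["docker", "run", "--rm"]
    for key, value in creds.items():
        cmd += ["-e", f"{key}={value}"]  # pass credentials as env vars
    return cmd + [image]

print(" ".join(tcup_command("https://keystone.example.com:5000", "demo", "secret")))
```

The appeal is that all Tempest setup lives inside the image, so the user's only job is supplying the endpoint and credentials.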
D
D
The board is still looking at making sure that we don't become a certification body in doing this. That's not our goal yet, and so we're not trying to expose people over whether or not they pass Tempest one hundred percent. I think we're going to get to a point with some of these specialized tests where you can't pass at one hundred percent, which is fine. That's reasonable, yeah.
D
I was hoping to throw it back and ask: what do we focus on when we explain this? Right, we're not going to have an hour of most people's time to explain what's important. After hearing it for an hour, what do people think we should focus on? What was the important point?
A
B
Here because you folks look at that start, anything capabilities, next settle and just having this really excited. So the capabilities, yeah capabilities are kind of dudes, big conferences like this I say it was pee and maybe it pleasantly I'm. You get a better presentation there. Some timelines that small with us with the small center time been banging started compra soon. Then you immediately suppose going to take.
B
B
Steps for fatigued obviously doesn't is you know something for today? Ok, write, anything. A lot of us knows how can save us now and it makes the competition want. Focus is used to hit it up on polar cut, becomes of attention. Now it doesn't make some designated sections she directed back. It was the first classes more about 80 degree with Ivan.
B
D
B
D
B
A
General agreement on idol of to Aaron like to see it within functional groups, score too high, too low, boom, I.
A
D
C
I need more like fun, explaining it just for my application, for you might not even aware, only three seconds they're like wow. You an example, have application, I'd say. For instance, you want to have very large star, apparently he's in silence, so that's something that a lot of people want to us for things like as long as I when it's beings by nothing like maybe an agent, like what you choose today actually rely on. What their own applications could be, for our exact point, actual inability to the east in seconds.
C
A
A
There you go. I didn't quote it exactly like that, but yeah, I think we're going to find out, once we go through the first couple of months of this, that we'll be surprised at some of the results, because the surveys have been very abstract. They haven't been precise, for good reason: it's hard to get somebody to take a survey. Well, now we'll have basically a real survey that will be extracted, and so, matching that up with customers and public providers, we'll start to figure it out.
C
A
A
D
This is where the fun gets in. It has to do with what's foundational for others. One of our favorites: we call this the Mark Shuttleworth test, actually, in his honor. One of the things that Mark really asked when we started this cycle was: what are the things that you can build on top of an OpenStack cloud?
D
What are the things that you could build if you already had an OpenStack cloud, and what are the things that you just can't do unless you have an OpenStack cloud, that you have to have as the sort of fundamental? And so we use that as a test: if there's a way to compose it out of other OpenStack parts, then we score it zero. And so the idea was that storing key pairs is something you could do without it being a native capability.
D
And then cluster is an interesting one, where there's a positive benefit to being part of a set of tests or a feature set. And so we score things that are integrated with other capabilities with a cluster score, so that we don't have partial functional groups implemented. And this one didn't pass either of those tests in the discussion.
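The two criteria just described, scoring zero when a capability can be composed from other OpenStack parts and adding a cluster bonus for capabilities integrated with a feature set, can be sketched as follows. The point values are illustrative only, not the actual DefCore scoring matrix.

```python
# Illustrative capability scoring along the lines discussed.
# Weights and points are invented here, not the real DefCore values.

def score_capability(composable_from_other_parts, cluster_members,
                     other_criteria_points=0):
    """Score one capability.

    composable_from_other_parts: if True, the Shuttleworth test says
        the capability is not fundamental, so it scores zero outright.
    cluster_members: number of related capabilities in its feature set;
        capabilities integrated with others earn a cluster bonus, so
        partial functional groups are discouraged.
    """
    if composable_from_other_parts:
        return 0
    cluster_bonus = 1 if cluster_members > 1 else 0
    return other_criteria_points + cluster_bonus

# Storing key pairs: composable out of other OpenStack parts, so zero.
print(score_capability(True, cluster_members=3, other_criteria_points=5))   # 0
print(score_capability(False, cluster_members=3, other_criteria_points=5))  # 6
```

The zero short-circuit mirrors the Shuttleworth test, and the bonus rewards whole functional groups over scattered individual capabilities.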
D
D
E
D
F
We've had an interesting discussion in general around the admin functions. There's obviously a set of admin functions that it doesn't even really make sense to test if you don't have admin access to the cloud. You know, we may use those extensions inside of Rackspace to run the public cloud, but nobody outside of Rackspace would ever be able to validate whether they're there. But at the same time, if you're really trying to get commonality around environments where you're managing, maybe, a private cloud.
F
Those admin features may be very important. And so we kind of go back and forth on whether admin functions should even be considered as part of core, or whether there should be a separate section depending on whether you've got access to administrative privileges or not. And this is one of those areas where we'd love to get feedback. You know, do we do it and allow the vendors to publish their full results with admin access, or do we rely solely on what you can test externally?
F
All of those have been topics that I think we've kind of gone back and forth on as we've thought about it over time. That one in particular, around the admin functions, I think, is still one where, even when they score high in the matrix, we're not sure what we really do with that conceptually.
F
C
A
That would be a good question. Take Yahoo: we filled out the survey, and we're genuinely extremely paranoid about giving out exact details of how our capabilities map to specific regions. We generally talk in broad integers, and I'm not exactly sure if I'll be able to get approval to run it, you know, run it and, obviously, be able to publish it.
A
F
A
AWS, which is baffling to me, but you know, they're into it. There was a level of participation, but when they got to the point where they talked about it, they ran a job, verified it, and came out with a number: they said that across multiple regions they could run the equivalent of number 13 on the Top 500.
A
Everyone got excited, it started people talking, and it got all kinds of feedback. There's no way to verify any of it; it was just a number, and it's just, you know, bragging rights. But that element of gamification got a lot of people really excited: holy crap, AWS is real. So that kind of stuff gets a lot out there, even though logic leads you to go look at it and say, well, it could have been anything.
A
F
You know, I kind of guessed that the tools we're looking at setting up with the RefStack piece would run the test and collect the results. Like, you don't actually get to just go to the RefStack site, fill it in, and say "my results for X." The idea is you run the tool, the tool generates results, and you choose whether those get uploaded or not. But I don't know.
F
D
Actually, that's something we've discussed a little bit, and this is the fun thing about TCUP: anybody can use it. So if a vendor fakes their results and uploads them, and their customers use TCUP to validate their results against the same thing, they're going to report discrepancies. This is the amazing thing about open source and crowdsourcing.
D
It is that if somebody is being dishonest with it, they're going to be outliers, and we'll find that. You know, there's no practical way to force everybody who tries to use the results to produce false results. So there is a risk; I expect somebody will, because, you know, what John was saying about gamification: the stakes could be high, and there will certainly be people who try to fudge it. That'll be an interesting part of this whole OpenStack experiment, in some ways, right? Just for doing new stuff.