From YouTube: OpenStack Austin Meetup June 2013
A: So, for those of you who are new, this is a thing that Dell hosts every month with a different partner each month. So thank Zenoss for sponsoring today; we'll have them come up and talk a little bit about what they're doing in this space momentarily. Please participate: this is an interactive type of forum, and if we're all crickets it really doesn't accomplish anything. There will be an opportunity to kind of throw out some ideas or questions that you might have to start some discussions, and we do have some canned idea items as well if you want to talk about those. If you are so inclined, please leverage Twitter; we have a hashtag, osatx. That's osatx, not oscpx. So don't call out those guys, but I heard John.
B: Actually, "leadership position": somebody tweet that.
A: But, you know, if there are interesting comments or questions or things like that, I do know that the OpenStack Foundation and others actually monitor these kinds of hashtags, so it'd be good to get Austin represented there. So, without further ado: are you our guy? Yeah.
C: We won't do slides or anything like that; this is a meetup, an open-mic meetup. So, I'm from Zenoss, and over here is Eric Edgar, also from Zenoss. Who here is familiar with Zenoss, what it is? Okay, so I'm not gonna go into it. It's a monitoring system, and we have ZenPacks out there, adapters, specifically for OpenStack.
B: Right now there's one for monitoring Nova kind of as a consumer of it. Like, if you're running in Rackspace, and you're running workloads in other people's OpenStacks or your own, you can monitor them, kind of hook them up, and see how they're related to the...
C: ...operating systems and stuff running inside them. We also have one for Swift, which is much more for people who are operating a Swift cluster: checking on performance and health and things like that. So check those out if you haven't already. The main thing I want to say is...
B: So that's pretty much it from us. My main interest here, if you want to catch up with me throughout this meetup, is Ceilometer. I don't know much about it right now, but it's really relevant to how we want to keep going forward in monitoring OpenStack. So I'd be interested in, well, like right now: who is running any Ceilometer at all in their deployment?
B: Well, so for us, we're using MySQL; we've already got horizontally scaled-out MySQL clusters, so we were like, well, why should we use MongoDB and add another thing to the system? Let's use the MySQL stuff. And in Folsom it's a debacle. It basically selects the entire database and joins everything together, and it's like: oh, okay, that works really well, yeah.
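As a rough illustration of that complaint, here is a minimal sketch using Python's built-in sqlite3 (standing in for MySQL; the table and column names are invented for illustration and are not Ceilometer's actual schema) of why an unfiltered join over whole tables blows up while a filtered query stays small:

```python
import sqlite3

# Hypothetical schema standing in for a metering store; illustrative only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE resource (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE sample (id INTEGER PRIMARY KEY,"
           " resource_id INTEGER, value REAL)")

for r in range(100):
    db.execute("INSERT INTO resource (id, name) VALUES (?, ?)", (r, f"res-{r}"))
    for _ in range(50):
        db.execute("INSERT INTO sample (resource_id, value) VALUES (?, ?)",
                   (r, 1.0))

# Joining "the entire database" with no predicate: every row pairs with
# every row, so the intermediate result is the product of the table sizes.
cross = db.execute("SELECT COUNT(*) FROM resource, sample").fetchone()[0]

# Filtering down to the one resource you actually asked about stays small.
one = db.execute(
    "SELECT COUNT(*) FROM sample JOIN resource"
    " ON sample.resource_id = resource.id WHERE resource.id = ?",
    (7,)).fetchone()[0]

print(cross)  # 100 resources x 5000 samples = 500000 intermediate rows
print(one)    # 50
```

The point is only the shape of the problem: an unconstrained join scales with the product of the table sizes, regardless of which database sits underneath.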
B: And even on a, you know, zone without a lot of utilization, you still get tons of data.
B: If you're gonna, the first time you speak, for introductions: take a minute and introduce yourself, so that people have, you know, some background on who you are, and whether you're speaking from a position of authority. Or, well, I know Josh; I don't know if I've got any authority.
B: So, yeah, but we're hoping it's better by Christmas. Okay, all right! Me too. I had a question for you on the ZenPacks. Doesn't Zenoss have two different flavors, like commercial and open source? So, you know, give us some advice on that: how would we use it with OpenStack?
B
Yeah,
so
I
mean
we
do
right:
we
have
an
open
source
version
that
we
call
things
core.
You
can
find
it
easily
enough
right.
You
want
to
use
the
internet
and
we
have
a
commercial
version
called
service
dynamics
all
this
stuff
generally,
as
a
rule
of
thumb.
The
way
it
works
is
all
of
the
features
that
we
add
to
monitor,
open
source
technology
or
integrate
the
core,
the
open
source
version.
So
all.
B: So that's the basic story: all the OpenStack stuff, and all the surrounding pieces you might want to leverage along with it. That's an interesting question: what about Microsoft Hyper-V? Because Hyper-V works with OpenStack compute now, they're working on that, so can I use Zenoss to monitor it, like we do a whole Microsoft environment?
B: Things you have interest in: lightweight containers? All right, let's talk about that too. Yeah, that's interesting to me too, especially with bare metal as a service, because I freak out about just giving somebody an entire box. Like, oh, you've eaten this super-powerful server to put, like, peanuts on it. You could actually use containers to kind of give somebody bare metal, but it's still containerized; it's potentially more powerful.
C: Right, so you can support those, but for the most part, you know, our workloads are Linux, so...
B: You know, QA systems that are short-lived, build systems, test systems and stuff like that; they're short-lived. So lately I've been working on moving our build and test infrastructure over to Docker. The best use case is, you know, I have sort of a matrix. I develop ZenPacks, right, so I'm kind of responsible for all this stuff, and what I want to do is, for every ZenPack...
B
I
want
to
build
and
test
it
for
a
matrix
of
xeno
stations
right,
the
xenos
core,
the
z-most
commercial
and
different
versions,
thereby
and
the
permutations
end
up
being
like
9
or
12
of-
and
you
know,
rather
than
you
know,
having
nine
or
ten
different
images
or
configuration
management
roles
or
whatever
for
all
these
things
and
spinning
up
these
heavy
vms.
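A build matrix like the one described can be generated mechanically, for example with itertools.product; the axes and version numbers below are invented placeholders, not Zenoss's actual release list:

```python
from itertools import product

# Hypothetical axes for the ZenPack build matrix described above.
editions = ["core", "commercial"]
zenoss_versions = ["4.1", "4.2", "4.2.4"]
pythons = ["2.6", "2.7"]

matrix = list(product(editions, zenoss_versions, pythons))
print(len(matrix))  # 2 * 3 * 2 = 12 permutations

for edition, zv, py in matrix:
    # Each tag would map to one lightweight container rather than a full VM.
    tag = f"zenpack-test:{edition}-{zv}-py{py}"
    print(tag)
```

The appeal of containers here is that each of the 12 permutations becomes a cheap tag on a shared base image instead of its own heavyweight VM image.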
C: I've been using it a little bit for chef-zero.
B: LXC directly, it's, you know...
C: I had a lot of trouble running a transient upstart script, and so I couldn't get it running, because it has a lot of stuff that comes up through upstart. There's something in it.
B: At a stage that's perhaps even earlier, there's a Docker back end for Nova, and this is really cool, because once you start to read about Docker, one of its sort of killer features is an image registry, right. They're trying to make the image thing social, so that anybody can very easily take an image that they've created and push it up into the registry.
B: So, you know, when you use OpenStack, normally you say: I want to use this image, right, to provision my new instance. You take kind of the same approach with Docker, but instead of getting that image from the Glance store, Glance is just sort of pointing you to where the Docker image comes from. So the interesting thing about this is, in Glance, you can, as you're provisioning with your Docker back end in OpenStack...
C: Oh, well, those are just two different things, right. So I wasn't sure if you were saying they show up as available images in Glance, and thereby they would show up in the dashboard, if that's what you were saying, versus it sounded like you were saying they were actually just spun-up instances already. No, no, all right! So they're available to be spun up; you spin them up and they look like instances.
C
It's
like
they're,
not
sure
that
lxc
is
going
to
be
what
they
use
permanently.
They
are
talking
about
just
going
directly.
B
And
using
c
groups
directly
without
really
changing
the
functionality,
but
right
now
it's
a
wrapper
around
lxc
and
hey
you.
B: ...everyone using Docker has created their images from the base, you know, Precise image, right. So when you go out to get one of theirs, you're essentially just getting the diff between the base Precise image and theirs in the registry, the Docker registry.
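The idea of pulling just the diff can be sketched as a set difference over file manifests. This is a toy model for illustration, not Docker's actual layer format:

```python
# Toy model: an image is a dict of path -> content hash.
base = {"/bin/sh": "aaa", "/etc/hosts": "bbb", "/usr/lib/libc.so": "ccc"}
derived = {
    "/bin/sh": "aaa",             # unchanged, shared with the base
    "/etc/hosts": "ddd",          # modified in the derived image
    "/usr/lib/libc.so": "ccc",    # unchanged, shared with the base
    "/opt/app/run.py": "eee",     # added in the derived image
}

# The registry only needs to store the files that differ from the base
# layer; everything else is fetched once as the shared base.
diff = {p: h for p, h in derived.items() if base.get(p) != h}
print(sorted(diff))  # ['/etc/hosts', '/opt/app/run.py']
```

So the transfer cost of a derived image is proportional to what changed, not to the full image size, which is what makes the social push/pull registry practical.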
B: They don't have any support in the Docker back end for ephemeral storage at all.
B: And, you know, somebody could always get on their image and touch every file, and suddenly they're using the full size. I think that's what we're doing right now: it's going to look like you spun up this instance, and it's got this big root allocation, this big ephemeral disk, and that's what you can charge for.
B: That would make sense; it'd be really cool, yeah. That's a really neat technology, and the way it kind of works, at least if I understand it correctly, is kind of like Vagrant: I'm writing a code or configuration definition of what my image looks like, right, and that's what gets installed, versus, oh, I installed all these packages.
B
It's
really
I
mean
the
lxc
stuff
really
seemed
interesting
to
me
from
we
were
looking
at
solaris
zones
a
couple
years
back,
and
I
was
talking
about
something
that
I
was
calling
dark
cycles.
So
the
idea
that,
if
you're,
if
your
workloads
are
daytime
cyclical
right,
you
don't
want
to
have
a
vm.
That's
spun
up
to
do
background
work
because
it's
so
hard
to
throttle
it
and
control
it
and
grab
all
the
resources
and
oversubscribe
the
ram.
B
But
if
you're
using
containers,
then
you
could
literally
monitor
the
system
and
have
a
low
priority
pass.
It
would
start
off
like
hadoop
job.
It
would
start
at
night
when
cycles
were
low
and
available.
Wouldn't
you
know,
could
grab
all
the
available
resources
on
the
system
as
a
dark
cycle
and
then
finish
it's
when
your
daytime
mode
came
back
in
and
you
wouldn't
have
to
swap
things
in
and
out
right.
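The dark-cycles policy sketched here boils down to a time- and load-based weight for the background container. A minimal sketch follows; the thresholds and share values are invented, and a real implementation would feed a number like this into something such as a cgroup CPU weight rather than compute it in isolation:

```python
def background_cpu_share(hour, daytime_load):
    """Return a relative CPU weight for the low-priority container.

    hour: 0-23 local time; daytime_load: 0.0-1.0 utilization of the
    daytime workload. All values here are illustrative, not tuned.
    """
    if 8 <= hour < 20 and daytime_load > 0.5:
        return 2      # daytime and busy: background is nearly starved
    if daytime_load > 0.8:
        return 2      # still busy, even outside daytime hours
    return 1024       # cycles are dark: let the batch job take the slack

print(background_cpu_share(14, 0.9))  # 2
print(background_cpu_share(3, 0.1))   # 1024
```

The key property, matching the discussion above, is that nothing is ever shut down or swapped out: only the relative weight changes, and the kernel scheduler reclaims the idle cycles.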
B: We get a lot of people talking about using Hadoop on VMs; it's a dangerous proposition if it's really heavily I/O-bound. But swapping a complete compute node from Hadoop to compute over a day cycle is a bit of a daunting challenge too, whereas a containerized version of it has been a really interesting strategy.
B
I
wonder
if
you
run
your
hadoop
hypervisor
I've
had
that
discussion
before
it.
You
could
you
like
kvm
and
lxc
on
the
same
system.
You
could
you
could
do
that,
but
you're
still
you're
you're
kvm
the
trick
is
your
kvm
system
is
still
allocating
the
ram,
so
you
don't
want
to
shut
so
the
goal
is
you're.
Never
shutting
down
your
daytime
workload,
you're
just
giving
back
the
rams
compute
cycles
that
it's
not
using
because
that's
the
benefit
of
the
containers
and
so
then
you're
right.
B: The thing that I've seen in discussions about that is, at some point hardware is not that expensive and complexity is. And so you get into a conversation at some point where it's like: how much risk are you willing to take on your compute farm, or lack of elasticity, where it's just like a hard limit?
B
No,
I
agree.
I
know
I
mean,
and
I
I
know
you're
you're,
entirely
right
to
call
me
out
on
it
I'll
own
that,
but
I
have
conversations
with
customers
all
the
time
who
are
trying
to
to
merge
in
yeah
and
and
we
try
to
help
them
merge.
You
know,
because
I
see
people
want
to
do
it.
We
try
to
help
customers.
B
You
know
merge,
use
cases
on
hardware
together
and
at
some
point,
you're
like
this.
Is
the
server
cost
10
grand?
How
much?
How
many
manpower
is
it
going
to
take
for
you
to
actually
make
this
work
and
be
sustained
and
be
sustained?
And
then
so
I
mean
one
strategy,
though
you
know
it's
so
docker
comes
from
a
company
called
dot
cloud.
I
don't
know
if
anybody
who's
friends
yeah,
but
you
know
the
funny
thing
about
cloud,
is
they
run
on
amazon?
B
Okay,
so
you
know
you
could
kind
of
take
that
and
say.
Well,
you
know
what
I
could
run
an
open
stack
on
my
k.
I
could
run
a
docker
or.
B: This is right, right. People who are serious about doing a lot of computation with Hadoop, from what I've seen, in my opinion: one, they buy a lot of servers to do it. They want to distribute the load, they want those jobs to finish, and they want them to be highly performant. And so, well, we do have... there is a...
B: There is a set of Hadoop use cases where people don't use HDFS, which is the file system underneath Hadoop that's used to improve I/O performance by putting jobs where the data is, when they just want MapReduce and its algorithms. They'll have Swift back-end Hadoop: pull the data in over Swift, run MapReduce on the jobs, and then shut down the machines that did the work. I've seen that use case a bit for people who are very invested in MapReduce, and that's a great use case.
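For the use case described, MapReduce without HDFS locality over data pulled from an object store, the computation itself is just the map, shuffle, reduce pattern. A minimal in-process sketch with invented records:

```python
from collections import defaultdict

# Records as they might arrive after being pulled in from an object
# store such as Swift; the contents are invented for illustration.
records = ["alpha beta", "beta gamma beta", "alpha"]

# Map: emit (key, 1) pairs for each word.
mapped = [(word, 1) for rec in records for word in rec.split()]

# Shuffle: group the emitted pairs by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: aggregate each group (here, a simple sum).
counts = {key: sum(values) for key, values in groups.items()}
print(counts["beta"])  # 3
```

Nothing in this pattern depends on where the input bytes physically live, which is why the "pull over Swift, compute, tear down" approach works when raw I/O locality isn't the bottleneck.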
B
That's
true,
but
you're,
not
because
the
way
those
jobs
are
structured,
you're,
not
parallelizing
the
work
you're,
just
reducing
it,
you're
just
you're,
you're,
parallelizing
it
but
you're
not
trying
to
improve
performance
parallelizing,
it
you've
got,
say
a
huge
amount
of
archival
data
that
a
customer
wants
access
to
and
they
need
specific
things
out
of
the
archive.
They
do
a
very
specific
type
of
analysis
and
they
want
the
results.
B
I
it's
I've
seen
the
like
25
percent
of
the
use
cases
that
we've
I've
seen
for
hadoop
drive
that
direction
not
as
much
it
used
to
be
more
prevalent.
It's
faded
a
little
bit.
C
Back
to
your
point
of
you
know,
basically
layering
or
nesting
workloads
like
a
full
disclosure.
I
also
work
at
dell,
but
the
one.
B: You know, the partnership we have with Cloudera: the reason people pay money for Cloudera when they do this is that Cloudera helps people optimize their Hadoop workloads, and they have a lot of great analytics and tools, because people, surprisingly, care about the performance and the speed at which these jobs run. Because if you do them wrong, they take a lot longer to run, and so, yeah, this is sort of the funny thing.
B
I
think
savannah's
a
really
interesting
idea,
especially
for
people
who
want
to
learn
hadoop
or
have
a
certain
spiky
demand
or
elastic
hadoop
demands,
but
a
lot
of
the
customers.
We
talk
to
get
past
that
and
into
all
right.
I'm
going
to
run
this
all
the
time
I'm
going
to
ingest
data
all
the
time.
I
don't
want
to
have
elastic
hadoop
as
much
as
I
want
to
use
it
as
a
database
right,
be
like
having
an
elastic.
B
But
now
now
we're
back
to
the
container,
because
the
container
could
give
you
I
mean
this
is
literally
four
years
ago.
We
were
trying
this
out
and
it
was
really
cool,
but
you
have
to
figure
out
how
you're
going
to
do
it
right,
I'm
not
sure,
even
in
savannah,
it
would
you'd
be
able
to
set
up
a
background
workload
because
you
have
to
place
the
workload
and
you
actually
need.
Hadoop
actually
cares
about
workload.
Placement.
C: And so you could use that single shared file system to mount virtual machines, to distribute your HDFS, or whatever file system you're going to run your MapReduce against, and then, you know, all the other lovely things and stuff it supports. So you have a single...
B: ...disk. So, along the lines of the commingling we were talking about, one of the interesting use cases: I'd be interested to see if people want to discuss this a little bit. For as long as I've been doing cloud...
B: ...one of the first conversations I ever had around OpenStack, the NASA team was talking about commingling Swift storage data and compute data on the same nodes. So first: does anybody know somebody who's done that successfully, any type of storage and compute commingled?
B: And therefore you have a hybrid environment. The problem I've always seen with that is you end up with this really poor I/O pattern, where you've got a compute node that has the I/O burden of sending storage to another node, and your noisy-neighbor I/O paths literally become really problematic. It only works, like, with Hadoop, where you actually get the storage for the VMs on that same machine, or with Ceph, where it's actually distributed across a whole bunch of machines, so you don't have to...
C: As the hypervisor, just dedicate a certain set of disks to Swift, and then you would use the Swift driver on those, and I would imagine it would all play nicely together, in theory. I think the block that Hadoop job is requesting might not be on that node, thus generating an absolutely exponential amount of I/O while it goes and fetches that block, returns it, and then replicates it, you know.
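A quick back-of-the-envelope for that remote-fetch problem: with r replicas spread over n data nodes, the chance that a randomly placed block has a copy on the node asking for it is roughly r/n, so most reads go over the network as the cluster grows. A sketch, assuming uniform placement with no two copies on the same node (which real placement schedulers improve on):

```python
def local_read_fraction(replicas, nodes):
    # Probability that at least one of `replicas` uniformly placed
    # copies lands on the requesting node, given no node holds two
    # copies of the same block.
    return min(1.0, replicas / nodes)

print(local_read_fraction(3, 10))   # 0.3
print(local_read_fraction(3, 100))  # 0.03 -> ~97% of reads are remote
```

This is exactly why Hadoop's scheduler tries to put the job where the data is, and why a layout that ignores locality pays for it in cross-node I/O.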
D: Because it's just one large file system, distributed. And then I have seen, you know, things in VMware where they have the VSA, which is basically, instead of having it in the hypervisor, an earlier version where every node has a virtual appliance that owns all of the disks, still distributed across all of them. You get a really annoying noisy-neighbor problem.
B: If it's homogeneous throughout the whole thing, you might realize: oh, you know, I have twice as much CPU as I need, or I still need more. And in practice, you're right: what we found is we're constantly adjusting our compute to figure out, like, what's the right size of this thing. Altogether, storage is, like, so much easier; just...
B: The underlying controllers that are running the disks are pretty small, and they can run a lot of disks. So trying to parallelize that storage like this seems like a good idea when you're thinking about it, but the reality is I don't think it's efficient or that it really works. Also, be very careful in terms of the number of copies; and then there were papers...
D: ...by Google and Microsoft and a couple of others, probably five years ago by now, about triple replicas: all of them, when they hit enough corruption, found cases where all three of the copies are basically bad and you cannot recover. That's the reason why you use erasure coding; erasure coding is much more palatable. I agree with you, it's much better, but not too many people use it, usually.
B
Want
to
try
a
new
topic
since
we've
gone
pretty
well,
and
I
have
one
question
about
dude
how's,
the
multi-tenancy
and
kind
of
like
being
able
to
localize
your
block
off
your
data
from
other
groups,
because
that's
that's
the
one
thing
that
I
think
running
it
on
openstack.
I
can
that's
the
nsa.
They
had
a
great.
C: ...a multi-tenant secure file system. And so I know there's effort from Cloudera, Intel, and a number of other players in that area to provide, you know, encryption down at the hardware layer, to help secure that data as it exists on HDFS and prevent tenant access, because Hadoop...
C: ...done, you know, you're wide open. Versus now, rather than just using Unix-like permissions on HDFS, it's truly secure; it's down to the encrypted-packet level, accelerated by the socket. Pretty crazy, but it's a recent trend. You know, it's something...
B: Yeah, I mean, I know our contention is going to be a lot smaller. See if you can meet with Snowden while you're there.
B: I mean, a couple of weeks ago Steve Spector tweeted, or posted, an article suggesting separating the design summit and the conference, right. Would other people be in favor of separating the design summit and the conference? I'm asking; actually, the board is surveying this. Oh, that's what we did originally: they were co-located.
B
And
then
we
had
two
days
which
sort
of
sucked
too,
but
I
mean
I
steve
steve's
thought
was
to
actually
have
them
significantly
separate
yeah.
Well,
I
think
you
and
I
talked
about
that
report
significantly
separated.
I
know
that
when
I
attend
you
know,
I've
been
to
portland
in
san
diego
ones,
and
I've
never
been
there's
so
many
overlapping
sessions
between
the
design
and
the
commercial
summit
that
it
was
yeah.
B
There
was
a
lot
of
good
stuff
that
I
missed
out
on
simply
because
there
was
something
else
that
needed
to
be
listened
to
as
well
and
they're.
Also
just
it's
getting
so
big
that
there's
probably
a
ton
of
people
that
aren't
going
to
have
any
input
in
the
design
sessions,
and
you
have
a
ton
of
people
that
should
be
in
design
stations
and
can't
get
it.
B: What makes sense to me is that, in practice, in the design cycles that we're in, once we cross release milestone three and start getting towards feature-complete and the release candidates...
B: ...if your feature didn't make it, or you're not on the sustaining side of it, you start working on design. And so the thing I've seen, especially on some of these release cycles, is that the middle of the release is actually when a lot of the design is done. And, yeah, OpenStack likes to have implemented projects, so people show up at the design conference with an implementation of what they want to go into the next release, and that means we didn't talk about the design.
B: Ideally, would you offset it by three months? That was my suggestion, yeah: to put the design summit in the middle of the release, because that's when you've made the decision that this feature will not make it, right, and then you can talk about: all right, so now we're in the...
B: I mean, what, three or four days? It's not even that much, two or three days, but you would do it. I would say most people could deal with that twice a year. Well, I can see budgetary people saying that you only get X number of shows per year, regardless of how long they are. How much overlap is there?
B: Whenever I've done product design work, I never finish the release and then start the design of the next release; I always have design work going on in the middle of the release I'm on, right. That's been my development practice: you start at that point because you have to. What happens is, when the release is done, the majority of the engineering team is available, and they need what's next.
B
We
don't
talk
about
the
designs
as
much
until
somebody's
sitting
there
and
framing
the
code,
and
it's
much
harder
to
argue
with
an
engineer
who
has
written
code
that
they
should
change
their
design,
and
so
my
goal
was
to
actually
get
get
the
conversations
before
the
implementations
are
done.
How
much
longer.
B: Nobody's testing their deployments backwards from the previous release; nobody does that. I guess you have Grenade now, but, like, the developers are like, "we're working on that." But everybody's working on... I mean, I've got Diablo zones; those are always gonna be down on Diablo, yeah, no way they're going anywhere, you know. The only thing you've got to hope for is a Folsom upgrade, nothing else, that's right. And even that, there was no real design around it, right, right. So, I mean, that's...
B
I
totally
agree
with
that.
Just
so
openstack
keeps
kind
of
statistics.
You
know
runners
on
the
download
of
the
open
stack
as
an
adoption
versus
the
time
of
the
release.
B: There was a really good presentation about statistics around OpenStack that I highly recommend if you haven't seen it. Tim Bell, who's chairman of the user committee, did a really good job with this. The thing that was interesting is there was an almost even distribution between the different releases. We were on the eve of Grizzly being released, and something like 30 percent of OpenStack survey respondents were already deploying Grizzly, and I think 50 percent were on Folsom.
B: Then the rest were on Essex, with a couple of Diablos in there. So uptake was actually surprisingly fast.
B: That's in the statistics too; it was more pilot, not as much production, yeah, which you would absolutely expect. Yeah, the statistics were really well thought out; there was a good sample size. Tim Bell: it's linked, there's a link off my blog, and somebody was posting that there's actually a YouTube video.
B: Nobody's going to fix the bug, you know; you're going to backport it yourself, if there's a fix for it, and sometimes you can't even backport it. You run into that so many times: it's like, there's a fix, upgrade, yeah. So what I found is, without doing continuous deployment, you lose a lot of the benefits of using an open source product; they're just gone.
B
This
this
comes
back
to
the
the
whole
idea
with
open
source
projects
where
all
the
bugs
are
shallow.
If
you
have
enough
users
and
so
right,
if
you're,
if
that's
truly
the
case,
if
you
are
on
your
own
and
not
taking
the
latest
stuff,
then
you
lose
the
benefit
of
all
the
eyeballs
working
on
the
way
this
stuff.
So
right,
that's
how
linux,
mature.
C
You
might
see
such
a
heavier
weight
towards
people
being
on
newer
releases,
just
because
you
know,
as
a
very
immature
people
are
kind
of
in
a
hurry
to
get
off
of
diablo
and
essex
right.
So
as
project
mature
people
are
going
to
lag
behind
farther
farther.
B: Upgrades: there's actually a really interesting topic that the board is discussing. This is in itself a whole meetup, which might be sort of fun to talk about, but the board is actively trying to answer the question of what is core, what is OpenStack core, because a ton of things hang off that, from trademarks, to how projects are incubated, to how, you know, we do training.
B
Even
we
had
a
very
heated
discussion
about
training
the
that
it
sort
of
all
comes
back
to
what
is
core,
and
it's
actually
it's
a
surprisingly
tricky
question
to
answer,
but
the
continuous
integration.
Also
it's
like
what?
What
do
we
integrate?
What
do
we
have
to
upgrade?
What
kind
of
things
come
in?
Well,
that's
an
interesting
one
to
me
too,
because,
like
when
you
start
talking
about
deployment
and
continuous
deployment,
does
the
openstack
community
embrace
a
certain
deployment
mechanism?
B: Yeah, that's... I can tell you pretty confidently: except for the potential of Salt, which is a Python flavor, so to speak, you will not see official OpenStack Chef or Puppet, because they're in Ruby, and the trend of OpenStack towards Python is incredibly, incredibly strong, and very well defended on the board and on the technical committee too.
B: Those are really tricky questions, seriously. And, you know, actually, I'll pause it there; there's some housekeeping I want to do before we end the meeting. I don't want to stop this, I want to keep going, but, you know, I know almost everybody here, I think: are people cool with us throwing around the code names and a lot of buzzwords and things like that?
B: I would ask: if you don't know what something is, just ask in the back channel, or raise your hand, and we'll define terms. It's actually really convenient that we're not doing a newbies conversation; we're doing these conversations. But we should track some of these topics. Like, I think the upstreaming stuff would be a really good topic; we can maybe get your Matt in, and...
B: And I'm happy to do this "what is core" conversation for the board; it's surprisingly rich. You'd think a lot of our conversations are not that interesting to watch. Actually, we have very mundane conversations about very significant topics that have a lot of commercial value, but they seem very much like watching paint dry, because everybody has opinions on some of the topics. So I'll do one on what is core, and there's also a really interesting... actually...
A: ...the 11th, for the meetup. Yeah, it's July 11th; Nebula will be our sponsor that day, and actually Chris Kemp and Vish, the former PTL, for...
B: It'll be open bar until I decide, so yeah, that should be a lot of fun, and we'll probably do Dave &amp; Buster's. It's a good central location, yeah; we'll figure it out.
B: What we do in the South Bay is we have an Etherpad for our meetup, so people put job listings and things on it. Do you guys have anything like that here? We don't. It would be a really easy thing to just link off the... can we agree on one, maybe tagged like osatx or something? Yeah. No, that would be awesome, right; actually, just do it off the openstack...
B: Reasonable question, and I gave it the wrong setup: would people want, or be interested in, rotating the nights a little bit more, like having, you know, Wednesday nights sometimes and Thursday nights? Would that make it easier to attend?
C: To throw it back to the OSATX discussion: I do want to mention, on the aufs, I've used that in the past, and you have to be a little careful on I/O, because you are layering file systems, and it has to do calculations to get to the final file as you're layering. So you don't want to do I/O-heavy jobs with aufs.
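The layering cost mentioned here can be pictured as a lookup that walks an ordered stack of branches until it finds the path. This is a toy union-mount model for illustration, not aufs's actual implementation:

```python
# Toy union mount: the topmost layer wins, and a lookup may have to
# probe every layer before it finds (or misses) the path.
layers = [
    {"/app/local.conf": "override"},                   # writable top layer
    {"/app/local.conf": "image", "/app/run": "script"},  # image layer
    {"/etc/hosts": "base", "/app/run": "old"},           # base layer
]

def lookup(path):
    probes = 0
    for layer in layers:
        probes += 1
        if path in layer:
            return layer[path], probes
    return None, probes

print(lookup("/app/local.conf"))  # ('override', 1)
print(lookup("/etc/hosts"))       # ('base', 3): deepest paths cost the most
```

Every access pays for the layers above the one that actually holds the file, which is why I/O-heavy workloads are better pointed at a plain volume than at a deep union stack.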
B: It's a direct block device mount, right, so your I/O... actually, your I/O is improved over the abstraction. No, it's not a direct mount the way they have it implemented now; it's more like a loopback, but that's much later.
B: Cool. We should talk about next week: there's HostingCon, and OpenStack's gonna have a booth there. I don't know if anybody else is gonna be there, but I've got some booth duty. Rob, you're gonna be there? Yeah.
B: That's one thing; well, we're not going to make it July, that's the Chris Kemp show. I think we have a board meeting; again, we have a board meeting at OSCON, and then there's August, so August would be a good time to do it. It might be an interesting... I'll...
A: Just to add to his last comment: there's a lot of vendors in the room, obviously. If you're interested in sponsoring one of these months, like I said, we've got a nice little roadmap of vendors that are co-sponsoring with Dell, and Dell takes care of the relationship side. If you're interested in sponsoring, like Zenoss did today, it's just food: it's 800 bucks for food and drinks and stuff like that.
B: There are topics that we didn't answer. I think we could actually talk an hour on the ones that I added back into the backlog: OpenStack Networking, okay, Quantum, and different...
B: That discussion is going to end up being a board discussion, but the board is very actively discussing how to do OpenStack-certified training. We have several different companies that are offering OpenStack training, nobody's coordinated it yet, and we're trying to figure out how to have it be certified, and there's trademark issues, and it's...
B: Well, if you take OpenStack training, you'd expect that you actually got trained on OpenStack, right. So if you get OpenStack training from Piston, are you trained on Piston, or were you trained on OpenStack? It's a reasonable question, because you expect the answer to be yes to both questions if you took it, right. I know StackOps has training also. Rackspace has training; they have a certificate. Mirantis has training, right; everybody has training, and actually, I know the vendors, and it's good training.