From YouTube: OpenStack Austin Meetup 8/8/2013
Description
Cody Herriges, @cody_ah, presents about Puppet, Razor and OpenStack Ops from StackForge.
A: A bare metal provisioning system running on my laptop; this is going to take a while. So before I explain a lot of stuff, before we go through what the technology looks like and what's actually happening, I'm going to start the demo, come back to it later, and then kick off the next part of it.
A: There is an order of operations here, since I'm setting up an entire OpenStack environment from scratch: RPC, databases, all that stuff needs to be up before the other portions of OpenStack come up, so sadly I can't bring it all up synchronously. If the controller and databases were already up, I could bring up a hundred-odd hypervisors all at one time, but that's going to take about 20 years of laptop technology improving before I can do that. Demo.
B: [inaudible]
A
It's
a
bit,
let
me
know
if
you
can't
actually
read
what's
coming
out
on
the
screen.
I
noticed
that
when
I
do
my
razor
commands,
yeah
they're
kind
of
not
formatting,
and
so
it's
resulting
in
a
lot
of
text
wrapping.
So
we
may
be
switching
back
and
forth
between
sizes
so
that
people
can
actually
get
an
understanding
of
what
the
commands
look
like.
But
if
I
just
razer
has
a
whole
bunch
of
sub
commands
I'll
go
more
through
razer,
as
we
kind
of
continue
talking.
A
Razer
has
the
basically
works
on
the
premise
of
a
small
ipixie,
booted
microkernel,
which
will
register
back
to
the
home
server
and
then,
after
you
go
through
the
identification
which
that
the
microcurrent
has
factor
built
in
which
factor
is
the
the
data,
the
the
system
in
public
that
produces
data
based
on
the
state
of
the
machine,
and
so
it
brings
all
that
data
together
and
then
ships
it
back
to
the
puppet
masters
in
order
to
produce
you
a
catalog
artifact.
In
order
to
do
the
actual
configuration
on
inside
the
on
the
agent
side.
A
So
I
have
a
set
of
policies
here
which,
like
I
said
it's
wrapping
and
we've
tagged
anything
that
has
two
cpus
and,
in
the
case
of
this
demo,
two
gigabytes
of
ram
that
becomes
a
openstack,
compute
node
and
anything
with
one
cpu
and
one
gig
of
ram
becomes
my
controller
node.
So
the
place
horizon
lives
glance
apis
stuff
of
that
nature.
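The hardware-profile matching described here can be sketched in a few lines of Python. This is only an illustration of the idea, not Razor's actual implementation; the fact names mimic Facter's `processorcount` and `memorysize_mb` style, and the policy labels are the demo's.

```python
# Illustrative sketch of Razor-style tag matching: facts reported by the
# microkernel are compared against tag rules, and the matching rule decides
# which policy (and therefore which role) a node receives.
# This is NOT Razor's real code; rule and fact names are hypothetical.

def classify(facts):
    """Map a node's facts to a policy label, mimicking the demo's rules."""
    cpus = facts.get("processorcount", 0)
    ram_mb = facts.get("memorysize_mb", 0)
    if cpus == 2 and ram_mb >= 2048:
        return "openstack_compute_policy"
    if cpus == 1 and ram_mb >= 1024:
        return "openstack_controller_policy"
    return None  # unmatched nodes stay in the microkernel, awaiting a policy

compute_node = {"processorcount": 2, "memorysize_mb": 2048}
controller_node = {"processorcount": 1, "memorysize_mb": 1024}
print(classify(compute_node))     # openstack_compute_policy
print(classify(controller_node))  # openstack_controller_policy
```

The point of the pattern is that nothing identifies a machine except what it looks like, so the hardware shape itself selects the role.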
A
So
this
would
this
would
map
into
just
very
much
bigger
sizes,
your
control
node.
At
one
point,
it
will
eventually
probably
be
split
out
into
its
different
api
components,
but
it's
all
around
like
standardizing
hardware,.
A
So
I'm
just
going
to
kick
this
off,
since
these
things
are
just
virtual
machines.
I
really
I'm
not
going
to
be
able
to
ask
the
station
to
them
right
now,
but
you'll
notice
right
off
that
it
starts
pixie
booty
and
then
we
go
through
this
high
pixie
process.
Razer
assumes
that
every
time
you
boot
a
bare
metal
machine,
it
will
likely
be
pixie
boots,
even
if
it
does
have
an
operating
system
on
it.
A
So
if
you
actually
need
to
go
and
remotely
reload
a
machine,
all
you
have
to
do
is
go
to
the
razer
server
unfind
its
actual
state
and
go
reboot
the
machine,
and
it
will
just
go
and
reboot
and
go
through
this
entire
boot
process.
Again
it
also
makes
it
so
that
you
have
up
to
you,
have
an
up
to
minute
inventory
or
not
up
to
minutes.
Every
time
the
machine
is
rebooted.
If
anything
changed
about
that
machine,
it'll
check
back
into
the
razer
server
and
update
its
metadata
as
well.
A
So
it
should
it'll
have
a
recent
collection
of
information
about
that
machine.
Last
time
it
was
booted.
You
could
also
manually,
as
the
machine
was
up
just
register
it
back
to
the
razer
server.
It's
just
a
rest
api
endpoint
that
you
post
some
data
too.
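The manual registration described above is just an HTTP POST. Here is a hedged sketch of what building such a request might look like; the endpoint path, port, and payload fields are hypothetical, so check your Razor version's API documentation for the real contract.

```python
# Sketch of manually registering a node with a Razor server's REST API.
# The URL path and payload fields below are hypothetical, for illustration.
import json
import urllib.request

def build_registration(server, hw_id, facts):
    """Build (but do not send) a POST request registering a node."""
    payload = json.dumps({"hw_id": hw_id, "facts": facts}).encode("utf-8")
    return urllib.request.Request(
        url=f"http://{server}:8026/razor/api/node/register",  # hypothetical
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_registration("razor.example.com", "00:11:22:33:44:55",
                         {"processorcount": 2, "memorysize_mb": 2048})
print(req.get_method())  # POST
# urllib.request.urlopen(req) would submit it to a live server.
```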
A: To actually identify which nodes are bound to which policy, we can use the razor active_model command, and it will just tell us. So we have a label of the openstack controller policy; it's in the init state, so it's just now coming up; and then there's the node UUID. It's attached to a broker. Brokers are the things that, after the whole operating system is set up, Razor hands off to in order to do customized post-installation steps.
A
You
can
have
a
ssh
into
the
node
and
do
runs.
You
can
also
proxy
from
the
razer
server,
but
the
razer
server
is
able
to
api
calls
so
like
in
the
case
of
public.
We
can
have
the
razer
server
authorized
to
actually
sign
ssl
certificates
or
to
classify
the
node.
If
we
want
to
today,
I
have
manual
sign
turned
on.
We
can
look
at
that
later,
but
so
you
can
do
this
proxy
state.
A
You
can
also
log
into
the
machine
which
razer
knows
the
initial
password
that
you
gave
the
machine
and
its
initial
ip
address.
So
it's
going
to
be
able
to
log
into
that
machine
by
ssh
do
some
work
and
then
puppet
would
go
along
during
its
configuration
steps
and
change
the
root
passwords
to
something
stronger
or
something
different,
all
right.
So
this
is
on
its
way.
A
Okay,
so
the
the
quick
obligatory
who
am
I
cody
harris
I've
been
with
put
labs
over
three
years
now,
I'm
I'm
kind
of
an
old
guy
in
the
the
star
of
space.
I
think
I
am
fifth
oldest
at
puppet
labs
now,
so
the
joke
is,
if
you
make
them
number
one
you
get
ceo.
I
doubt
it,
but
I
can't
placeholder
engineer
right
now.
I've
been
doing
a
lot
of
jumping
around
during
my
career
public
labs
right
now.
A
I'm
kind
of
in
this
flux
between
what
we
call
an
integration
engineer,
working
with
business
development
and
we're
changing
around
our
business
development
strategy,
which
I
am
going
back
to
operations.
So
in
a
couple
weeks,
I'll
be
in
operations
again
previous
dependent
labs.
A
I
worked
in
what
I
like
to
call
a
dungeon
people
that
have
worked
in
dungeons
know.
This
is
kind
of
very,
very
traditionally
the
place
they
put
I.t
folks
in
educational
institutions.
So
I
started
out
my
career
as
lead
unix
for
an
engineering
college
and
so
kind
of
where
I
learned
my
initial
sysadmin
shops.
A
Former
world
traveler
so
like
another
puppet
labs
employee
here
in
the
room
nathan
back
there
in
the
corner,
I
started
out
as
a
professional
services
engineer,
so
I
traveled
around
the
world
teaching
evangelizing
puppet
and
setting
up
infrastructure
on
the
backs
of
puppets.
A
I
slapped
a
big
relapse
on
there,
because
one
of
the
reasons
I
became
part
of
bd
again
is-
I
convinced,
my
manager
at
the
time
in
operations
to
let
me
go
on
a
two
and
a
half
week
trip
to
london
and
europe
to
do
three
conferences
back
to
back
realized.
I
really
actually
like
being
on
the
road
occasionally,
and
so
that's
kind
of
one
of
the
reasons
I'm
here
today
too,
but
inevitably
I
mean
the
job
that
we
do
in
this
world.
While
we
come
sometimes
think
it's
mundane.
A
Mundane
people,
like
my
father-in-law,
who
runs
a
cherry
orchard
in
rural
oregon,
has
no
idea
what
actually
runs
our
entire
planet.
So
what
did
he
really?
I
think
this
is
what.
B: [inaudible]
A
Traveling
the
world
and
he
had
no
idea
what
I
actually
did
day-to-day.
So
obviously
I
am
a
spy,
so
we
all
know,
that's
not
true
what
I
normally
do.
A
What
I've
always
done?
I've
changed
just
so
many
different
places
inside
public
labs.
You
know
jobs
previous
to
this,
I'm
a
hacker
I'll
sit
at
my
laptop.
You
know
before
work
after
work.
You
know
put
my
son
on
my
lap
and
just
go
out
of
the
keyboard
and
that's
kind
of
that's
even
all
the
different
kind
of
titles.
I've
had
business
engineer
whatever
it's
all
the
same.
A
So
what
am
I
here
really
to
talk
about
today?
It
would
be
puppet
openstack
and
razer.
So
the
tile
you
know
profile
driven,
that's
mostly
because
razer
is
a
a
profile,
a
hardware
profile
based
bare
metal
provisioning
system,
and
you
can
inevitably
also
use
it
for
virtual
machine
deployments
as
well,
so
I'll
get
back
I'll,
get
kind
of
get
to
the
ups
and
downs
of
those.
A
So
that,
like,
like,
I
said
my
slides
fairly
informal,
I
decided
that
I
don't
like
a
lot
of
bolts,
so
they're
all
big
words,
bright,
very
simplistic,
I'll,
basically
try
to
sit
up
here
and
talk
for
half
an
hour.
You
guys
probably
get
sick
of
what
I'm
saying,
but
so
kind
of
the
things
about
the
public
openstack
project.
A
Well,
oh
a
month
before
the
last
openstack
summit,
we
kind
of
took
all
these
modules
that
we
had
collected
from
internal
to
public
labs
externally
from
other
sources,
and
we
pulled
them
all
in
and
said:
hey,
let's
build
a
community,
that's
really
kind
of
more
centered
inside
what
the
openstack
community
is.
So
we
took
all
those
modules
and
we
just
kind
of
we're
like
here,
you
go
community
and
we
put
them
up
on
stack,
forge.
I
think
we
were
one
of
the.
A
We
were
one
of
the
first
configuration
management
systems
that
were
actually
up
there
on
stack,
forge
which,
if
you,
if
you
know
what's,
if
you
don't
know
what
stack
forge,
is
it's
the
it's,
the
infrastructure,
the
openstack
infrastructure's
effort
to
do
code
review
as
a
service?
So
if
you
have
a
project,
that's
openstack
related,
you
can
just
get
in
touch
with
them
and
they'll
set
you
up
a
garrett
code
review
system.
A
Garrett
will
push
all
your
stuff
to
stack,
forge
for
you.
So
it's
all
under
one
jet
organization
stack
four.
So
if
anyone
wants
to
interact
with
any
of
the
projects
that
may
be
in
core
or
or
not,
none
of
the
core
stuff's
actually
on
stack
forge,
so
it's
usually
all
the
stuff
that
kind
of
is
bolted
on
or
helps
you
work
with
openstack.
Those
are
you'd
find
just
by
going
to
github.com.
A
So
we're
just
up
until
recently,
public
labs
didn't
even
have
an
official,
reviewer
and
committer
on
those
modules.
I
got
it
handed
off
from
dan
beauty
who
now
works
for
cisco,
and
then
we
hired
an
openstack
engineer
as
well.
So
these
are
largely
most
of
the
code
doesn't
even
come
from
us
which,
when
you,
when
you
actually
build
a
successful
kind
of
organization
around
a
set
of
puppet
modules,
that's
kind
of
what
you
eventually
hope
for
that
they
take
a
life
of
their
own.
B: [inaudible]
A: A couple of reasons for that: Puppet Labs doesn't have the resources to write configuration code for every piece of software that exists in the world, and the other one is that if you can get a community around it, you know that you're doing something right in your application, that people actually do like it.
A
So
the
other
one
kind
of
key
word
up
here
is,
I
have
prioritized,
and
so
the
modules
have
been
productized,
not
by
public
labs,
but
versions
of
these
modules
are
shipped
by
mirantis
to
customers.
A
They
are
the
actually
red
hat's,
a
rdo
project
which
actually
becomes
red
hat
openstack
in
the
future,
predominantly
100
ups
downstream
to
stat
forge
they
only
have
like
one
or
two
modules
in
that
openstack
distribution
that
they
had
to
fork
because
our
modules
all
support,
debian,
ubuntu
and
all
of
yell,
so
all
enterprise
linux.
So
getting
all
the
models
to
work
equally
across
all
those
platform
forms
it
can
be
a
bit
different
difficult
at
times.
So
a
couple
of
those
modules
where
we
hadn't
actually
gotten
to
that
state
where
we're
like.
A
Yes,
that
works
really
well
across
all
those
platforms
and
we're
ready
to
publish
it.
They
went
ahead
and
just
forked
that
module
and
made
it
work
in
their
environment
and
then
shipped
it
when
I
talked
to
them
last,
the
only
one
was
quantum
and
they
know
all
of
them,
just
vanilla,
so
they're,
probably
I'd,
say
like
a
lot
of
linux
dish.
Linux
projects
out
there
red
hat's,
probably
one
of
our
biggest
contributors
to
these
modules,
they've
been
a
really
good
partner.
A
I
set
my
computer
up
to
mirror
and
every
time
I
start
powerpoint,
which
serves
me
right,
I
should
have.
I
actually
used
a
textile
based
presentation
tool.
A
But
I
do
this
is
a
new,
a
new
name
for
me,
united
states,
but
iweb,
there's
cisco
in
there
in
aubameyang,
which
is
fair,
is
also
fairly
active
in
the
openstack
set
community.
So
I've
been
talking
to
them
recently,
which
I
do
have
it
also
an
ulterior
motive
to
be
here
once
after
I
do
my
presentation.
I
hope
I
have
enough
time
before
you
all
get
up
and
leave
the
room
that
I
can
actually
talk.
A
Ask
some
infrastructure
questions
of
you
guys
about
what
you've
done
with
openstack,
because
one
of
the
things
that
I'm
doing
with
this
work
that
I've
been
doing
with
openstack
lately
is
evaluating
it
to
be
the
kind
of
the
next
generation
iis
for
for
public
labs,
we're
evaluating
a
couple
different
ones,
but
I'm
kind
of
doing
this
all
in
a
repository,
that's
public
and
you
can
kind
of
watch
all
my
commits
as
I
build
this
infrastructure
app
public
labs,
and
so
I'd
like
to
get
some
ideas
about
storage
technologies,
networking
architectures
that
some
people
that
people
may
have
implemented
but
yeah,
like
I
said,
cisco's
up
there,
red
hat
kind
of
all
those.
A
So
it's
been,
apparently
it's
a
fairly
healthy
community,
we're
pretty
happy
about
how
healthy
it
is,
and
you
know
it's
still
growing
and
I
realized
I
put
my
phone
up
there,
which
was
actually
my
clock
to
know
how
long
I've
been
talking
for
so
yeah
pretty
excited
about
it.
It's
it's
moving
so
fast,
though,
that
it's
hard
for
me
to
keep
up
with
it,
trying
to
build
infrastructure
and
manage
a
community.
At
the
same
time,
it
has
been
difficult.
A
So,
like
my,
I
think
I
only
have
two
slides
of
bullets
on
it.
This
one
has
a
lot
of
links,
so
this
is
every
stack
forged
project
that
is
puppet
related.
So
the
two
up
here
that
are
three
up
here:
tempest
doesn't
go
to
the
four
jet
heat
and
salometer
don't
go
to
the
boards,
yet
quantum
went
up
recently
so
and
then
to
actually
go
ahead
and
you
want
to
get
involved
the
link
down
the
bottom
to
the
wiki.openstack.org
public
openstack
project.
A
This
is
where
you'll
find
your
contributing
doc
the
contributor
documentation
and
where
to
get
the
modules.
I
guess
I
actually
forgot
the
forge
link,
but
finding
forge
is
very
easy:
it's
forge
that's
forge.puppetlabs.com,
search,
openstack
you'll
find
all
the
modules.
A
If
going
before
drought,
all
the
dependencies
are
pulled
down
for
you
gaining
stuff,
like
gravity
mcu,
cupid,
postgres
mysql,
whichever
back-ends
that
you
want
to
actually
use
okay,
so
that's
kind
of
an
overview
of
puppet
openstack.
We
internally
are
ramping
up.
Yes,
I.
D
Have
a
question
about
so:
do
you
do
like
stable
branches.
C: [inaudible]
A
We
just
started
that
as
of
the
grizzly
releases,
since
all
these
modules
were
pulled
in
from
many
other
sources,
all
of
them,
when
they
got
mirrored
out
to
stack,
forge,
have
random
kind
of
openstack
codename
based
branches,
but
they
don't
follow
the
same
openstack
development
workflow
that
we've
kind
of
adopted.
A
Now
now
we
do,
we
did
cut
stable
grizzly
for
everything
we're
trying
to
get
better
on
actually
going
through
the
good
back
porch
process
when,
when
you,
when
you're,
when
your
ecosystem
lives
so
heavily
inside
the
github
environment,
and
then
you
ask
someone
to
go
through
the
very
stringent
jarrett
workflows.
A
It's
been
hard
actually
educating
like
the
the
people
that
want
to
commit
and
the
and
the
people,
even
inside
the
company
that
don't
want
to
be
involved
as
well,
but
we're
getting
closer
to
it
and
as
we
get
closer,
it
gets
more
and
more
closer
to
the
actual
openstack
process.
So
right
now
the
assumption
is
that
master
follows
havana,
which
is
why
quantum
is
being
named,
neutron
and
master
right
now
he
said.
Try
and
it
doesn't
happen
all
the
time.
A
So
what
is
razer
razer
is
kind
of
this
collaboration
between
puppet
labs
and
emc
last
year,
so
emc
wasn't
happy
with
the
way
that
they
were
going
to
do
kind
of
the
day
zero
day.
One
deployed
data
center
from
nothing
type
workflow.
So
they're
like
we
wanted
to
do
something
different.
We
wanted
to
be
able
to
actually
just
have
someone
take
up,
take
a
whole
rack
of
machines
and
plug
them
in
and
based
on
what
they
look
like
and
the
characteristics
of
those
machines.
A: It's profile-based provisioning. It's not Cobbler; do not use it like Cobbler. That was my first mistake in using it. When EMC first presented it to us at Puppet Labs, I was on the operations team at that time, and we got really excited: wow, this is really nice, it's easy to understand. So we tried to use it for our use case at the time, and we were just bringing up single machines.
A: A lot of our scale testing was all done in EC2, so we really didn't need this for large-scale installation of things, and we tried to start using it as a very traditional Cobbler-like provisioning tool. It got to the point that it was so frustrating to use it that way that I stopped using it; I switched back to pure PXE and pure templated kickstart and preseed files.
A
Time
goes
on.
Time
goes
on.
I
have
to
deploy
a
vmware
cluster
and
then
I
move
on
from
that
and
now
I'm
kind
of
living,
openstack
and
all
of
a
sudden
I
understand
now
I
I've
kind
of
I
figured
out
what
I
was
really
supposed
to
use
it
for,
and
it
take
it's
two
components:
it's
you
as
an
organization
will
have
to
come
to
an
agreement
on
the
hardware
profile
that
your
clusters
are
made
out
of.
If
you
want
to.
A
If
you
want
another
cluster,
then
it's
on
like
another
hardware
profile,
because
the
only
characteristics
you
have
that
are
identifiable
on
a
machine
when
you
get
it
is
its
hardware
profile
and
it's
mac
addresses
so
hardware
profile.
You
know
all
that
stuff.
In
the
beginning,
it's
really
easy
to
identify
mac
addresses
you
generally
have
to
discover
and
and
then
you
have
to
you
end
up
having
to
obtain
those
mac
addresses
and
tie
them
to
profiles.
A
That's
what
I
originally
did.
I
mean
I
grabbed
a
mac
address.
I
would
tie
it
to
this
is
squeeze.
This
is
wheezy.
This
is
centos
6.
that
didn't
end
up
working.
If
you
have
dhcp
environments
structured
in
a
way
where
a
different
cluster
is
a
different
subnet,
then
you
could
also
classify
things
via
ip
address
and
razer
two.
So
you
could
do
by
network
segmentation
as
well,
which
would.
A: You mean for Crowbar-type stuff? I have not used barclamps; I know Crowbar and how it works, but, okay, I'm going to be kind of honest here: it's written in Chef, so I don't get farther than that. We have a couple of Chef users inside the company that have since been converted; one of them is in the room as well, and he comes from doing operations and infrastructure consulting. Actually, you guys have both used Chef...
A
Haven't
you
yeah
nathan,
a
consultant
was
a
consultant
that
did
not
both
and
joe
who's.
Also
back.
There
was
a
happy
chef
user.
So
there's
this
weird
where
you
know
you
learn
a
tool
and
someone
tries
to
tell
you
you
know
this.
Other
tool
is
really
cool.
You
should
learn
it
and
then
you
go
and
you
start
like
struggling
with
it.
You're,
like
you
know.
I
can
probably
finish
this
two
days
ago
on
the
other
tool:
yeah,
that's
the
place
where
I'm
at
I've
lived
puppet
for
so
long.
A: It's the same with any other tool, so yeah, it can be frustrating, especially as an operations guy wanting to do research on other companies' tools, to get to the point where you can say "all right, done." I mean, I probably built this entire Puppet environment, in order to do this demo, in a day and a half, and I would never have accomplished that trying to do it with anything else.
A
All
right,
let's
see
where
this
thing
got
all
right,
so
it
looks
like
it
we'll
check
in
on
it
more
thoroughly
in
the
next
one.
A
So
we're
live
so
we're
at
a
console.
If
I
log
in
we
can
go
ahead
and
watch
the
actually
did
I
get
a
message.
Oh
I
did.
I
set
up
alerting
inside
the
razer
broker
that
sends
me
a
pushover
notification.
When
the
certificate's
ready
to
be
signed,
I
didn't
notice
it.
C: [inaudible]
A
Nathan,
you
jinx
me
yeah.
This
is
like
before
we
started
this.
He
was
bragging
about
how
successful
the
environment
had
been.
I'm
like.
A
So
if
we
wanted
to
sit
here
and
wait,
he
eventually
would
resubmit
the
certificate,
but
I'm
pretty
sure
I've
already
been
talking
for
over
20
minutes
and
we
only
have
one
note
up.
So
I'm
going
to
just
kill
things
really
quick.
A: Okay, so that's going to go through its Puppet run. That's not really what I was going to walk through anyway, so that will finish; we'll at least get to see the OpenStack Horizon console by the end. We may run out of time to actually see a hypervisor come up. When all is said and done, it takes about 40 minutes to bring everything up.
A
Yes,
so
we
can
go
ahead
and
go
through
that
stuff.
So
as
far
as
environment
stuff
goes
so
since
this
is
inside
vmware,
I
just
created
three
virtual
devices.
One
of
the
two
of
them
are
dhcp.
One
of
them
has
no
actual
addresses
on
it
whatsoever,
so
one
of
them
is
public.
The
other
one
is
access.
No
one
without
ip
address
is
going
to
be
private,
so
that's
where
the
fixed
range
ips
actually
go.
A
So
as
far
as
actual
puppet
configuration
goes,
it's
a
stop.
It's
a
fairly
stock
puppet
enterprise,
installation
that
I'm
using.
I
did
repackage
one
package
because
there
was
a
bug.
I
have
a
horrible
habit
or
addiction.
Nowadays
I
can
only
use
puppet
with
the
future
parser
turned
on
which,
if
anyone
follows
probably
you'll
know
that
we
acquired
a
company
called
cloudsmith,
which
they
had
written
their
own
expression-based
parser
in
ruby
for
a
puppet-based
id
or
in
java
for
a
puppet-based,
ide
called
gepetto.
A
So
now
they're
part
of
the
company
within
a
couple
weeks
of
them
joining
their
cto
had
finished
the
rewrite
of
puppet's
parser,
and
so
now
I
can't
I
started
using
it
and
gotten
to
the
point
where
I
can't
actually
write
public
code
without
it
I'll
notice
that
I
go
and
write
a
whole
bunch
of
public
code
go
run
it
on
something
and
it
doesn't
work.
I
get
really
really
confused
and
I
go
all
right
future
parser.
The
two
things
that
generally
hang
me
up
now
is
the
future.
A
Parser
has
lambdas,
so
you
can
do
loops
inside
the
dsl
itself,
and
you
can
also
do
things
like
you
used
to
have
to
when
you
were
running
a
function
to
generate
some
data,
you
had
to
apply
it
to
a
variable
and
then
use
that
variable
later
and
now
you
can
use
functions
directly
in
expressions.
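The two future-parser features just described, iteration with lambdas and functions inside expressions, can be sketched like this. This is an illustrative fragment, not the speaker's code; it assumes `parser = future` in puppet.conf (Puppet 3.2+ / PE 3) and the `size()` function from puppetlabs-stdlib.

```puppet
# Iteration with a lambda, right in the DSL (future parser only):
$services = ['nova-api', 'nova-scheduler', 'nova-conductor']
$services.each |$svc| {
  service { $svc:
    ensure => running,
    enable => true,
  }
}

# A function call used directly inside an expression, instead of first
# assigning its result to a variable (size() comes from puppetlabs-stdlib):
if size($services) > 2 {
  notify { 'controller has a full nova service set': }
}
```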
A
So
I
I
embed
those
everywhere
now
and
use
lambdas
all
over
the
place,
and
so
none
of
my
code
works
so
I'll
admit
that
that's
a
bit
different,
but
you
can
accomplish
all
that
without
this.
It's
just
the
way
I
code
now.
The
future
parser
is
in
public
enterprise.
Three,
it
was,
it's
actually
part
of
the
puppet
open
source
3.2
series,
which
is
part
of
pe3.
So
if
you
have
a
puppet
open
source
3.2,
then
you
also
should
have
a
future
parser.
A
So
as
far
as
setting
up
the
stuff
for
the
actual
openstack
components
go
over
here
to
my
puppet
master.
A
So
I
nowadays,
when
I
write
public
code,
I
follow
this.
The
the
pattern
of
roles
and
profiles,
so
I
have
some
very
basic
building
blocks
things
like
rabbitmq,
mysql,
nova,
cinder,
those
very
low
level
pieces
that
build
up
an
infrastructure.
Then
I
wrap
those
in
profiles
once
I
have
all
those
profiles
together,
I
then
can
take
those
and
compose
them
into
roles.
A
So
what
makes
up
my
openstack
environment
is
an
openstack
compute
rule
for
computing
profile
for
compute
nodes,
and
I
put
a
whole
bunch
of
profiles
together
with
that
to
make
a
role,
which
is
the
role
openstack
compute.
So
those
generally
are
things
like
other
roles
or
like
because
our
other
profiles
are
configuring.
Ssh
root
users
stuff
like
that.
So
that's
how
I
compose
those
together,
so
the
demo
I
have
today
is
based
on
is
basically
a
branch
of
the
environment.
I
use
to
build
my
openstack
clusters
and
the
environment.
A
Yes,
it's
readable
nice,
so
I've
set
up
a
decent
amount
of
data.
Some
high
level
defaults
in
this
situation.
B: [inaudible]
A
I
know
what
my
hardware
profiles,
you
know,
look
like
then
there's
things
like
fixed
range
floating
range,
and
this
is
basic
puppet,
manifest
code,
managing
network
devices
network
interfaces.
So
I
know
what
the
stack
ip
addresses
of
things
are
and
then
the
meat
of
it
comes
at
the
openstack
controller
class.
A
So
if
you
just
really
want
to
kind
of
get
up
and
going
with
openstack
fairly
quickly,
you
don't
quite
yet
understand
all
the
components
that
make
up
openstack
you
can
go,
get
the
openstack
module
which
will
pull
down
the
dependencies
and
then
give
you
a
set
of
things
to
fill
out,
which
I
have
an
example
up
on
a
gist
that
I
tweet
now
now
and
again,
that
should
get
you
going
and
that
set
of
data
is
kind
of
a
recommendation
on
how
to
get
openstack
going
quickly.
A
It's
generally
everything
goes
on
control
node,
except
for
hypervisors,
which
is
at
the
end
of
the
day,
not
all
that
scalable.
But
it's
going
to
do
you
probably
for
a
couple.
You
know
several
hundred
vms,
at
least.
A
So
the
there's
a
lot
of
variables
here
and
that's
because
that
one
controller
is
wrapping
every
component
of
openstack,
because
your
controller
node
has
a
piece
of
everything
of
openstack
on
it.
Because
it's
going
to
have
nova
it's
going
to
have
pieces
of
nova
on
it,
but
not
necessarily
the
piece
that
does
compute.
A
It
depends
on,
I
mean,
there's
a
couple
different
ways.
I
would.
I
would
end
up
going
if
I
had
multiple
topologies
in
my
environment,
I
probably
after
understanding
the
openstack
module.
Well
enough,
I
would
probably
explode
it
and
I
would
take
all
the
classes
that
make
up
it
internally
and
rebuild
my
own
composite
classes
from
it,
so
that
I
can
put
them
where
I
want
to
and
build
a
profile,
a
profile
set
that
specifically
addresses
that
topology.
A
Then
I
would
then
augment
that
by
using
hera,
which
is
puppet's,
hierarchical
data
store
and
it
has
ways
to
plug
in
different
backends.
So
if
you
have
a
way
to
get
data
out
of
something
that
you
can
define
a
topology
in,
then
hera
could
extract
that
data
and
based
on
you
region,
ip
space,
even
similar
to
the
profiles
from
razer.
You
could
then
have
a
hierarchy
of
what
kind
of
data
goes
where
and
when
you
get
it,
and
so
that
would
augment
the
the
topology
as
well.
A
So
you
could
do
you,
you
could
even
hera
basically
does
its
hierarchy
based
on
data
that
comes
from
factor,
so
you
can,
at
provisioning
time
have
razer
seed.
A
piece
of
data
for
a
specific
cluster
of
machines,
have
factor
read
that
in
and
then
make
a
decision
on
the
topology
based
on
that,
and
you
can
even
do
that
just
directly
inside
puppet
with
some
kind
of
conditional.
B: [inaudible]
A
I
will
eventually
explode
the
openstack
module
and
do
that
exact
thing
I'll
move
components
around,
because
if
we're
really
serious
about
building
an
internal
cloud
and
then
going
the
next
generation.
On
top
of
that,
with
you
know,
past
docker
based
service
catalogs
stuff,
like
that,
I'm
going
to
be
able
to
need
to
be
able
to
split
these
components
out
and
that's
really
going
to
be
hard
with
stainless
side
inside
the
openstack
module
itself.
A: The manifest that makes up my compute nodes is very similar, but you'll notice this right here: I didn't want to have to define all this data twice when it was going to be the same across everything. Puppet automatically looks up namespaced variables based on their class name and variable name. So if you need the cinder variable, Puppet goes: all right, I will look inside the namespace profile::openstack::compute for cinder, and looks that up.
A
If
I
then
want
to
for
some
special
situation
overwrite
that
password,
then
all
I
have
to
do
is
put
that
variable
inside
here
in
the
right,
namespace,
location
and
puppet
will
just
pull
it
in,
and
so
I
don't
actually
have
to
define
any
data
for
the
the
compute
session
section.
A: I have the Cinder stuff disabled right now. There are more networking config things, and then here's the OpenStack compute stuff; since less runs on compute nodes, there are fewer variables. I also think I was able to use more defaults in this environment as well.
A
Basically,
those
two
classes,
those
two
files
make
up:
what
make
up
openstack
if
you
can
take
those
and
put
them
inside
your
environment
that
will
work
very.
Similarly,
all
the
other
stuff
in
this
repository
is
just
support.
Things.
A: Hiera has a lot of potential backends; by default it ships with YAML and JSON file backends. You can use any number of other database-based ones out there as well, but I'm just using the YAML one, so that's all I have.
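For reference, the stock YAML backend he mentions is selected in hiera.yaml, Hiera 1's configuration file. The paths and hierarchy levels below are illustrative, not taken from the demo.

```yaml
# hiera.yaml: Hiera 1 configuration using the built-in YAML backend.
# datadir and hierarchy entries here are illustrative.
:backends:
  - yaml
:yaml:
  :datadir: /etc/puppetlabs/puppet/hieradata
:hierarchy:
  - "node/%{::clientcert}"   # per-node overrides win first
  - common                   # shared defaults
```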
A
It's
right
here
to
bring
up
openstack
a
couple
of
arbitrary
passwords.
If
you
wanted
to
you
modify
this
to
make
sure
that
all
this
stuff
is
the
same
password
if
you
wanted
to
and
then
so
that
this
thing
would
actually
work.
A
I
said
a
whole
bunch
of
proxy
stuff
for
my
packages,
and
that
was
it
so
that
was
it
so
public
interface
ip
address,
that's
just
a
stack
ip
address
from
my
controller,
the
only
thing
the
only
stack
I
p
address
I
use
in
my
installations
are
on
the
controller.
Everything
else
is
dhcp
based
passwords
tokens.
A: Actually, it's in all of mine, because it's set to true by default, so if you had just deleted that line it would have just worked, yeah. So I guess, coming from hardware to here, I'm still trying to figure out what makes up a hardware profile; things like, I see where you can define zero, one...
A
More
from
the
razer
point
of
view
and
the
public
point
of
view,
yes,
okay,
gotcha.
A
So,
by
default,
when
it
first
does
that
boot
process,
when
we
were
watching
it
boot
and
then
it
paused
for
a
while
before
I
switched
back.
This
is
all
it
was
trying
to
do
so.
It
checked
in
and
registered
itself
razer
has
this
set
of
default
tags
associates
with
things
which
is
how
much
memory
it
has
if
it's
a
virtual
machine
and
what
kind
of
virtual
mac
how
many
cpus.
A
Once
you
know
the
tags,
what
you
can
do
is
you
can
create
the
policy,
takes
a
description
and
then
that
list
of
tags-
let's
see
if
I
can
illustrate
this
a
bit
better.
A
So
we
have
traditional
kickstart
and
precede
files.
A
These
are
tied
together
with
how
you're
going
to
deploy
this.
So,
if
we're
just
going
to
use
standard
pixie,
then
what
razer
has
is
this
template
called
linux
deploy,
and
so
you
create
a
model.
A
Which
ties
to
this
these
kickstart
flashes,
and
so
it
defines
what
kind
of
operating
system
that
the
model
is.
So
if
I
do
razer
model
you'll
notice,
I
have
two
models
here.
I
have
one
for
like
for
openstack
controller
nodes
and
one
for
openstack
compute
nodes.
This
template
linux
deploys
the
thing
that
says:
use
pixi
in
order
to
deliver
the
installer
to
these
machines.
A: If you do have package mirrors in your environment and you want to load from those mirrors, you can go grab the small ISO, which is generally the mini.iso in the Debian world or a netboot ISO in the EL world, and just load that into Razor; it will deliver the kernel and the initrd image and then actually start loading from your mirror. So you're not stuck with just ISOs.
B
Yeah,
so
you
were
able
to
do
firmware,
baselining
and
bios
updates
and
setting
up
of
all
of
the
right
hardware.
A
Not
yet
so
I'll
talk
a
little
bit
after
and
kind
of
the
last
thing.
So
after
we
have
the,
and
so
the
model
is,
is
really
all
these
kickstart
and
precede
templates
and
an
association
with
the
disk
image.
And
so
then
we
have
the
policy
which
is
basically
like
a
list
of
what
is
the
root
password
supposed
to
be?
What's
the
hostname
prefix,
okay
and
then
we
have
the
actual
node.
So
then,
how
does
the
node
actually
get
bound
to
the
policy?
B: [inaudible]
A
Oh
yeah,
there
it
is
razor
tag
matcher,
and
so
we
can
and
matchers
are
the
mapping
between
factor
facts
and
these
arbitrary
identifiers
in
razer.
Okay.
So
if
we
look
at
the
razer
policy
list,
so
cpus
under
score,
one
that's
a
tag.
We
can
also
say
puppets
fact
variable
of
how
many
what's
a
good
one
that
isn't
here,
oh
yeah,
it's
not
in
here
so
factor
is
going
to
be
able
to
discover
the
ip
address
after
the
dhcp.
A
So
we
can
actually
take
that
ip
address
and
then
there's
also
subnet
and
network.
That
factor
can
determine
so.
We
would
actually
take
that
number
and
use
it
as
a
tag
matcher
and
create
an
arc
tag
called
openstack
cluster
one.
So
we'd
have
you
know,
network
1001
24
and
create
a
tag
called
openstack.
One.
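The network-segmentation idea just described, deriving a cluster tag from the subnet a node's DHCP address landed in, can be sketched in Python. This is illustrative only; in Razor itself this is expressed as tag matchers over Facter's network and ipaddress facts, and the subnets and tag names below are hypothetical.

```python
# Sketch: classify a node into a cluster tag by which subnet its
# DHCP-assigned address falls in. Subnets and tag names are hypothetical.
import ipaddress

CLUSTER_TAGS = {
    ipaddress.ip_network("10.0.1.0/24"): "openstack_cluster_one",
    ipaddress.ip_network("10.0.2.0/24"): "openstack_cluster_two",
}

def tag_for(ip):
    """Return the cluster tag whose network contains this address."""
    addr = ipaddress.ip_address(ip)
    for net, tag in CLUSTER_TAGS.items():
        if addr in net:
            return tag
    return None  # address outside every known cluster network

print(tag_for("10.0.1.42"))  # openstack_cluster_one
```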
E: [Could you write] some sort of module so that, once it does this discovery process, it looks for a type of hardware that's unique to this hardware...
A
Profile,
so
you
have
any
of
these,
so
anything
that
factor
ships
with
you
can
create
a
tag
for
so
we
have
biospender
here.
A
There are even release dates, architectures, the types of file systems on the machine that are currently mounted.
A
Various hardware profile stuff, more architecture things; see, there are all the IP addresses, so you're going to end up getting every IP address of every interface. Those are all capable of being used. Razor, still being a pre-1.0 release, isn't yet capable of syncing facts from Puppet. Generally, in a regular Puppet environment, when Puppet starts it'll grab facts and bring them down from the Puppet master, so you can write any number of arbitrary facts in order to do work.
A
Razor doesn't really sync those right now, so you're limited to core facts, the ones that ship with Facter. You can still do all these things, and there's even a unique ID, so if you want to tag machines individually, you could do it by that.
A
It's a VMware VM, so Facter only produces facts for things that are actually present. So yeah, the node is bound to the policy by tags.
B
A
A
Razor's future: these next two links, puppetlabs/razor-server and puppetlabs/razor-el-mk.
A
We decided, after looking at Razor and the direction we wanted to take it, that it was not actually going to be easily maintained with its current code base. So we decided to start over from scratch. Daniel Pittman and David Lutterkort, who, ironically, I didn't even know this was going to happen, did an OpenStack meetup in Poland this morning where he talked about Razor, so it was very coincidental. I guess not, actually.
C
A
So it's now going to be an application based around very standardized components: Sinatra, Ruby, with a couple of twists, because the entire data center is what you need this to be able to serve.
A
You have machines actually booting from the network every time, and they're going to have to check into Razor every time, so it's going to be written in JRuby and TorqueBox, and that's what the Razor server will actually be. But we're keeping almost all the APIs: all the things that I was doing on the command line with Razor are going to stay. The workflow is going to be similar, but the technology stack will be JRuby, TorqueBox, Postgres, and then there'll be some command-line
A
components as well. So that's converting it from a code base which was fronted by Node.js to serve the web requests, with a Ruby backend underneath to actually do all the processing, which used MongoDB.
A
So we're kind of bringing it closer to what the technology stack of Puppet is, which, since we rewrote our database backend from pure Ruby to Clojure, is now backed by Postgres as well. We're always looking at the JVM to see what kind of performance boost we can get out of Puppet by moving its components to the JVM, so it's right along the lines of the same technologies that we've been using inside Puppet.
A
The other thing is that the microkernel is being rewritten to be CentOS-based, so it'll be Enterprise Linux. Anyone can go grab it, make modifications to it, and add the tools needed to actually do BIOS updates and RAID controller configs, because you want all that kind of stuff to happen inside the microkernel.
A
You want the microkernel to be able to do all that work and then register the node, and then you get an operating system. That stuff is going to be able to happen in the microkernel, and that's why the microkernel is going to be very standards-based, something everyone can understand and build themselves, so they can just pop in there and add the tools they need, the workflows they need.
A
At that point you can even add different metadata backends if you want; Facter is not a requirement. All Facter is doing is generating the data, which then gets posted to the Razor server. If you want to drop in your own metadata service and post it back to Razor, that would be totally fine.
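The check-in described here, where Facter (or any substitute metadata service) gathers data about the machine and posts it back to the Razor server, might look roughly like the sketch below. The endpoint path, field names, and payload shape are assumptions for illustration, not Razor's real wire format.

```python
import json

def checkin_payload(hw_id, metadata):
    """Build the JSON body a microkernel (or any custom metadata
    service standing in for Facter) could post back to the Razor
    server after discovery.  Field names are assumptions, not
    Razor's real wire format."""
    return json.dumps({"hw_id": hw_id, "facts": metadata})

# This would then be POSTed (e.g. with urllib.request) to something
# like http://razor.local:8026/api/nodes/checkin -- a hypothetical path.
payload = checkin_payload(
    "00:11:22:33:44:55",
    {"bios_vendor": "Dell Inc.", "memorysize": "32.00 GB"},
)
```

The server side only cares that it receives identifying data plus a bag of facts, which is why swapping Facter for your own metadata service works.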
A
The current microkernel is also open sourced, but it's based off of Tiny Core Linux, just kind of a basic BusyBox shell, so it's very limited and, of course, a little hard to work with for some people. So you're going to just be able to layer the microkernel on top of an EL system and build your own kernels.
A
A
A
Yeah, so I click the button to do compute; compute goes a bit faster, a lot fewer packages.
A
I guess I didn't do the workflow very well; we'll see if it finishes in the meantime.
A
Yeah, so I know people have just been asking questions; like I said, this was fairly informal.
A
About Razor, I mean, I'm here for a bit while we let this finish, but I'll open it up real quick before I start asking you guys questions.
F
A
I mean, Crowbar is similar in quite a few ways; it kind of serves the same purpose for that data center rollout process.
A
It's probably fairly similar to Mirantis Fuel, though theirs is 100% centered around OpenStack specifically. They have a web application which ships Puppet code underneath and has Cobbler built in, so they do a lot of Razor-like things on top of Cobbler.
A
There's a tool from Red Hat, Foreman, which is being rolled into Satellite; it's probably one of the oldest dashboards for Puppet, written by Ohad Levy, a great guy who nowadays works for Red Hat. It has a system in it that can do bare metal and virtualization provisioning. I would say it's more like Cobbler than Razor and Crowbar are; like Cobbler, it's kind of a bit more traditional, but it has things built in like smart grouping and stuff like that.
F
And a lot of the upside to this is about provisioning large-scale implementations, like, again, a gigantic data center where you really want to roll out rack after rack after rack, being able to set up these profiles and pull these things in. This is where the product really sets itself apart, yeah.
C
E
B
A
I'd need some way to identify the node.
A
And generally, I mean, are you talking name space or IP space for the rack? Yeah, then it would just be, when you're doing your tag matchers, to actually create a match based on the network that the rack is in.
C
To do VLAN stuff, so when you're setting up your DHCP and you're serving up PXE on those different VLANs, you could then use policies, use Facter, and do the networking based on that. Okay.
A
Yeah, it relies heavily on being able to PXE boot. If you can't actually PXE across a switch, then you're going to have to have some kind of provisioning machine inside that rack. And does Facter allow you to pull things like...
A
Not with the facts that ship by default, but yeah.
B
A
That would be ideal. We've been looking at that, to actually be able to query LLDP information via MCollective, because Puppet now runs on Juniper switches, and we have a set of Juniper switches in our server room, and IP phones connect to those as well. So we're looking at that same kind of dynamic discovery of LLDP information, yeah, exactly.
F
Do you have high availability in all of the components, like RabbitMQ and MySQL?
A
Yeah, all the RabbitMQ stuff does; the high availability for MySQL is current.
A
The different types of networking you can use have different high-availability functionality; nova-network and the Quantum stuff can all be multi-node.
A
A
There are drivers for other stuff: OVS, Linux bridge, and two others that I've now forgotten. And right now, last week I had a couple of meetings with VMware about NSX, so we're going to look at NSX as well.
A
So this is the repository where I've been pushing all these commits to build this infrastructure, plus my Twitter handle and Freenode nick. I basically rebooted my Twitter recently, and I'm a bit noisier than I used to be, because I now tweet about this stuff, because I find it interesting; I'm building infrastructure things again.
A
I got lost in my career for a little while, not knowing what I wanted to do, and I realized that I was having a really fun and interesting time while I was at the university building virtualization infrastructures. I took a break from that when I started at Puppet Labs; it was all about configuration management.
A
And I've really just enjoyed digging into OpenStack and the underlying hypervisor systems and storage platforms again, so I'm quite excited about that.
A
So if you want to meet people, come to PuppetConf if you have a free weekend, or sorry, two or three days, in two weeks.
A
It's going to be quite a fun time; they always are. We're shooting for over a thousand people this year. If you haven't got your ticket yet, there's a discount code on that link; basically just go to Eventbrite and put "pl friends" in there, and you can get a discount as well. There will be a lot more people there, a lot more people that have done much bigger things infrastructure-wise with OpenStack than I have.
A
It's almost eight, but I hope I have a couple of minutes to ask people questions. Is anyone here actually currently managing OpenStack in production?
C
B
A
For people that actually run things in production: I'm just trying to get a feel for what storage solutions people are predominantly using. I find that to be one of the bigger questions, how to do storage in OpenStack, and it's very different from the old VMware model, where you pushed a fairly big array at it and it just kind of worked.
A
And then you could also, well, I've been looking at Ceph; I've also looked at doing just a raw NFS-based thing. But the one thing that I wanted to get away from was having storage be on the same machine
A
the compute is on. I was very fine with instances being there; that's where the actual ephemeral disks exist. But for these large block devices that may have a lot of bits going back and forth, which would end up using a decent amount of CPU at times, I don't want that competing with the compute nodes. So I was just wondering what kind of solutions people were using. You know, the storage folks were a little slow to the game, so yeah.
A
So when you deploy an instance, generally, if you're on KVM, it's file-based; it's going to be on local disk. And so I was looking for something that was still very basic, where I could take a unit, throw it in a rack, and produce another slice of storage for my OpenStack cluster, but not every time I put in a compute node; I didn't want to also share processing power for the storage with the compute nodes.
D
We had a discussion about this a couple of sessions ago, and one of the challenges is that if you start doing that, you end up pinning your storage ratios. We sort of talked around this problem for about 10 minutes, and we kept coming back to: it sounds like a good use of equipment, but you might end up with not enough storage or too much storage, and your networking is going to get hit twice. Yeah, there was some of that.
C
The storage controller was basically the difference between it being at a budget cap or a luxury cap. So actually putting storage in the rack was the only way to hit their price targets, and then they made their trade-offs from there. I would say that price has driven at least half of the configurations that I've seen, where people basically said, tell me what the choices are on price.
A
Probably object and block, because if I go with Ceph for instance storage, that'll do my high availability for live migration. And if I have Ceph, I might as well use it for the object store as well. Okay.
F
Yeah, I think if you do want to get started with block storage, the NFS drivers still just use files, and if you've got a NetApp, there are some NetApp-specific drivers that give you a little bit of extra NetApp functionality, based on the NFS ones.
E
E
A particular type of storage like block storage is going to have lots of stuff with different usage patterns in it, and I don't think Ceph is necessarily optimized for handling all of those things. When you look at, you know, the equivalent of a RAID EBS-type volume, where you need really high performance or you need dedicated I/O operations per
E
second, you need to be looking at a storage server that can handle that, and that might have to be a storage server with a significant amount of flash to be able to give you the IOPS you want. But then you're going to have other storage; you know, you put it out there once and it's
E
basically your archive; you don't really care what happens to it for the most part, but you still want it there. So your hierarchical storage capabilities there, I think, can be, at least for some applications, very important drivers. So I think you've got to really understand what your usage models are and what tools are appropriate for the appropriate usage models. Yeah.
A
And probably, I guess initially, I'd probably do all three: Ceph standard, block from an array, and probably, at best, I guess I'm really kind of in the discovery process here. But internally we already have an EMC VNX.
A
We also already run Nexenta, which already has a Cinder plug-in, and Cinder has the ability to do those various storage classes. It was less obvious at first, but I eventually found the documentation.
A
I'll end up, I suppose, trying them all. I really don't know what my use case is, but I'll admit that we built our last VMware cluster with a very vague understanding of what the use case was going to be. As we unleashed people on the cluster, we could see that their usage pattern changed very quickly as soon as they got enough resources to actually start interacting with things on a much larger scale.
A
The usage pattern changed almost immediately, so I assume that's going to happen in an OpenStack environment as well. Well, I think you're getting
C
into workload profiling. You've got policies related to hardware; do you have ways to do workload profiles and match up the policies? Because you could have a case where you've got one class of users that's very happy with whatever storage they can get; they don't have high performance constraints.
C
Other workloads are going to be very I/O-intensive but not very read-intensive, so those could be on shared storage. Yet other things are going to be very compute-intensive, and you want local storage. Those are all workload policies, not hardware policies, so if you can't match those up, you're almost kind of guessing.
F
Right, and is that profiling built into OpenStack, or is it more...
B
A
You could get that kind of thing, though those profiles are sometimes hard to determine in the beginning, when you don't have an OpenStack cluster to actually run the workloads on. But I haven't actually dug into Ceilometer a lot; does it report stuff like that, or are we relying on the storage systems to produce those metrics for us? You're only going...
C
C
C
Then, as you're going through this stuff, you will reach kind of an information limit or data limit, because you've got to start looking at it from the other side. I know from the storage cases that I've seen that that's where the conversation always stops: okay,
C
tell me what the user wants to do with this. And then you go talk to the users, and like you said, it's like UIs: you don't know if you like it until you see it. Same with workloads: until you actually have users using the workloads, you're only guessing what those things are going to look like. So there's a lot of anecdotal stuff out there, but I think you need to match that up with the experience of real people.
C
C
A
Did anyone else hit any other performance bottlenecks?
B
C
D
F
A
Okay, let me actually look at writing an RPC connector for ActiveMQ; the latest ActiveMQ release now supports AMQP.
A
And the only reason I would do that is because I really like the HA and clustering of ActiveMQ, and it's also the broker of choice for MCollective, which has specific RPC connectors written to leverage ActiveMQ specifically, so it would be nice if I could keep it all on the same stack. I haven't dug into that Python yet.
F
F
You have like four, five, six of each API endpoint, right? So you can handle thousands of requests. Yeah, I mean, hundreds to...
A
So we're talking more about actual limitations in OpenStack's API endpoints, as opposed to databases. I assume these all have to be tied back to a single database, be it clustered or not. Are people generally letting the database engines take care of that, or have people had to actually scale the databases in this case as well?
C
I think you'll see federated scenarios come up fairly often, especially where you have public clouds and virtual private clouds and so on. I don't think you'll see the ability to use simple database programming, so I think you'll see federation models be driven by those business use cases as much as the...