From YouTube: OpenCrowbar Anvil Release Announcement
Description
Review capabilities and history of the 1st OpenCrowbar (was v2) release!
Details: http://robhirschfeld.com/2014/04/30/opencrowbar-anvil
Cool. I'm happy to talk about whatever people want to talk about, but first, business. The business for this meeting is to go through the Anvil release, and then we'll open it up to questions and things like that.
There are a couple of process things I want to cover, and there's some work I'm trying to get through on the Dell side. That, hopefully, will be the subject for next week's community meeting. Dell sometimes moves more slowly than I would like on things like this.
Okay, so the purpose of today is to officially cut the release. We might have to wait until the afternoon or evening to actually lay down the tags, because there are two pending pull requests out there.
Let me bring up those pull requests. I was able to get a hundred percent passing on the BDD tests, which is an important milestone, because I want that in this release.
A couple of weeks ago, we talked about the release milestones and pulling things together for the release, and we identified the concept of dot releases moving to a quarterly cadence. The idea here was that the OpenCrowbar platform itself had a lot of stuff in it.
It has the Annealer, our orchestration engine, which is focused on doing the type of orchestration you need for bare-metal, discovered infrastructure. That's a different problem space than a cloud deployment. It could probably be used that way, but our focus is really on dealing with discovered infrastructure. As I like to point out, Docker effectively becomes discovered infrastructure too, because it doesn't have all the control points that a VM or a physical machine does.
Yeah, so when we kicked off the v2 of OpenCrowbar at OSCON, almost two years ago, we identified a lot of requirements, capabilities and things we wanted to accomplish. We didn't even have the concept of the Annealer, and some of the things we've done, like jigs, were design objectives, but we hadn't completed the design, and so there's actually a considerable amount of material.
If you go back and look through this document in some detail, what you would find is that we have accomplished those objectives, and it's worth, as a release celebration, talking about some of what they are. So, we needed to clean up the community debt that we had, which we've done by moving core into one repo and eliminating the dev tool.
We have heterogeneous operating system deployment. This is actually a significantly complex thing, because it requires you to be able to pull dependencies for your deployments from a wide range of sources, which required us to change the networking model and have an online mode. One of the things we've been doing is embedding a Squid proxy, so that online mode doesn't actually hit the internet as much; it brings things in as it needs them.
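The pull-through caching behavior described here can be sketched in a few lines (a simplified illustration of the idea, not OpenCrowbar's actual Squid configuration; the function names are hypothetical):

```python
import hashlib
from pathlib import Path


def cached_fetch(url, cache_dir, fetch):
    """Return the body for `url`, downloading via `fetch` only on a cache miss.

    `fetch` is an injectable callable (e.g. urllib-based), so the caching
    logic stays testable without network access.
    """
    cache = Path(cache_dir)
    cache.mkdir(parents=True, exist_ok=True)
    # Key the cache entry on a hash of the URL, as a proxy cache would.
    key = hashlib.sha256(url.encode()).hexdigest()
    entry = cache / key
    if entry.exists():           # cache hit: no internet traffic
        return entry.read_bytes()
    body = fetch(url)            # cache miss: pull from upstream once
    entry.write_bytes(body)
    return body
```

A Squid proxy does essentially this transparently at the HTTP layer, which is why embedding it gives online mode most of the benefit of a local mirror without pre-packaging everything.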
I'm excited about being able to use Chef or Puppet. We actually have the Puppet client in, although it's very immature at this point compared to our Chef capability. That was a major deliverable for this release. Another is online mode, which means that we use the internet instead of having everything packaged into an ISO.
The ISO is tremendously useful for doing disconnected installs, which are very common, and we'll have to put that feature back. But it was also incredibly limiting, because if you fixed a cookbook and it pulled in a new dependency, you were stuck in the old model. We had to address that; it was a serious limitation. That brings us to upstream deployment code sources.
This was effectively part of our online caching strategy, but one of the recent additions to OpenCrowbar was to use Berkshelf, and so all of the Chef dependencies can be housed and maintained outside of Crowbar. That's really important to us, because using the Chef upstreams for, say, OpenStack is really the target here. We don't want to do what we did in Crowbar 1 and create islands of cookbooks.
We want to be fully in the community cookbooks, and that also required adoption of attribute injection, which I don't have in here, although it was a design requirement: to be able to fully embrace the attribute injection model that is the most popular. That, in turn, lets us do things like the Annealer, which creates these very atomic roles. And while I don't think the Annealer was one of our original design objectives, it's a critical feature.
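To illustrate what attribute injection buys you (a deliberately simplified Python sketch, not Chef code; the role and attribute names are invented): each atomic role receives exactly the attributes it needs, rather than digging through shared global node state, which is what makes roles composable and lets the orchestrator bind values late.

```python
def run_role(role_fn, attributes):
    """Invoke an atomic role with explicitly injected attributes.

    The role sees only what it is handed, so the orchestrator can
    compute attributes per-node and per-run instead of the role
    reading a global node object.
    """
    return role_fn(**attributes)


# A hypothetical atomic role: it declares the attributes it consumes.
def configure_ntp(servers, drift_file="/var/lib/ntp/drift"):
    lines = [f"server {s} iburst" for s in servers]
    lines.append(f"driftfile {drift_file}")
    return "\n".join(lines)


config = run_role(configure_ntp, {"servers": ["10.0.0.1", "10.0.0.2"]})
```

The role's inputs are its whole contract, so the same cookbook can be driven by Crowbar, by community tooling, or by hand.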
We actually discussed whether or not it should even be in this list; it is down here as a baseline. So a significant capability that was delivered in this release was the Annealer, the orchestration: proving out that whole model and showing that it could do very flexible things, like build a cluster the way we demonstrated with Ceph, or have two different execution paths like we had to do to make it work for Docker.
That's a really tricky orchestration challenge, and then doing it in a way that you could actually do an upgrade around it, heal it, and deal with all the contingencies of discovered hardware: those are really important capabilities. We also scale to 100 nodes.
Flexible networking configuration is something we've actually had for a long time in the OpenCrowbar work: getting away from the whole network JSON, letting you define roles. Even more importantly, in this release networks are actually roles, and part of the dependency graph. In OpenCrowbar we made it so that you can stage your orchestration, in your graph, based on your network, and since we do late binding, your network doesn't exist until you create it based on what your needs are. So we're very flexible about how that works. That's a really important thing, and it's not hard to conceptualize: you basically can't start doing a deployment until you build a network.
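The idea of networks as roles in the dependency graph can be sketched roughly like this (an illustrative Python toy, not the Annealer's real data model; the role names are invented):

```python
from graphlib import TopologicalSorter

# Each role lists the roles it depends on. Because "admin-network" is a
# role like any other, deployment roles can simply depend on it, and the
# orchestration order falls out of an ordinary topological sort.
roles = {
    "discover":      [],
    "admin-network": ["discover"],
    "os-install":    ["admin-network"],  # can't deploy until the network exists
    "ceph-install":  ["os-install"],
}

order = list(TopologicalSorter(roles).static_order())
```

Late binding means this graph, network roles included, is built per deployment from what you actually defined, rather than being hard-coded up front.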
The Docker work we've been doing let us really focus on this, and so I'm very confident we could scale well beyond 100 nodes. What we've actually been testing is a hundred-node simultaneous deployment: cramming 100 nodes into the system as fast as they can possibly go, using our Docker infrastructure to test that hundred-node bring-up. So we are incredibly confident about 100-node scale and beyond, and it's provably a hundred nodes, which is good.
We've moved all the documentation into the repo. We couldn't easily clean up the history with all the Crowbar 1 stuff, especially because SUSE is very active with Crowbar 1, and it was making it very difficult to distinguish which docs are for what, so separating them was really the much more logical thing. We've been keeping the documentation in the code, not in a wiki, not distributed all over the place, and so far I think that's been really powerful. We still have a long way to go, though; I'd be wary of anybody who says the docs are complete.
The repos version together, but each individual part is just one thing, so it makes it so much more logical and easy; it means we can use git in a normal way, because each repo represents a dependent bundle. RESTful API normalization: we've done a lot on the API, and the CLI has been normalized. It's actually really nice; I encourage people to look at that. Then there's BDD test coverage.
We had this; I'd just let it slip, and now it's back. That's one of the release criteria for me, because it effectively makes sure that nobody regresses the API calls. We really have very good API coverage, and pretty good UI coverage, in the BDD suites. Multiple deployment technologies: this is Chef and Puppet, which is a little redundant with what I said earlier.
And that's actually a great tee-up for what the next piece is. The idea here is that we've covered the base, and we've done some OpenStack work to sort of test out the concepts, but now that we've covered the base, it's really time to focus on workloads. We don't expect the Broom release to have significant architectural changes; it's mostly going to be bug fixes and adjustments based on what we discover.
Actually, some of that we pulled ahead. Docker containers as workloads should let us work on OpenStack more quickly and make it more community-accessible. In-place upgrade is now something we need to test. RAID and BIOS capabilities are something we're looking at; there's some architectural work that was done, and I'm always looking for feedback here.
The thing that we're looking at is that the Annealer can deal with out-of-band actions. From a server configuration perspective, it makes a lot of sense to be able to do iDRAC calls directly from the Annealer, rather than doing it like we did with Crowbar 1, which did it in situ. In Crowbar 1, we actually go into the Sledgehammer discovery image and act, in that image, on the system itself.
So this is all the out-of-band stuff. We feel like this is a really significant piece to bring up. I think the RAID controllers can all be configured remotely, so we'll look at that, and I'm hoping that, now that we've simplified the model, we're going to see more community contribution to expand the breadth of hardware capabilities that are covered. And then there's the other thing that we're looking at doing.
A
We've
built
some
infrastructure
at
Dell,
although
I
think
we're
going
to
be
moving
it
to
a
community
to
a
cloud-based
infrastructure,
but
just
be
able
to
install
crowbar
and
run
the
gate
so
that
we
actually
have
a
gated
check-in
process
and
then
I'm
not
quite
ready
to
talk
through
camshaft.
But
the
idea
with
cam
chat
we
have.
We
have
road
map
leading
forward
and
camshaft
that
is
going
to
take.
The
broom
is
all
about
workloads.
All right, cool. Oh, and from a schedule perspective: I'm expecting to have a meeting next week, then we'll take the OpenStack Summit time off, and then I'd like to come back after the OpenStack Summit, probably have another meeting, and start talking about OpenStack migration, because I think it'll be time at that point.
Saying that makes a lot of sense. Let's sync up at the summit; we can discuss it and then we'll figure it out. I understand we have a lot of people in the community who want to make sure their work and their intent line up. These are open calls, and I'm also perfectly cool to talk one-on-one to figure out where we're going. That's very, very common; it happens all the time. So, with that, I wouldn't expect to see a lot of action in the next couple of days.
A
I
know
for
myself
and
a
lot
of
people
on
the
team
or
completely
focused
on
the
OpenStack
summit.
Oh
one
thing
that
is
worth
discussing,
though,
is
that
we
are
planning
to
be
active
in
the
triple
o
ironic,
all
that
stuff
I
actually
I,
have
some
opinions
about
that
work
and
we're
we're
going
to
talk,
we're
going
to
go
there
and
see
if
we
can
influence
the
direction
on
that
I'm,
very
I'm,
getting
a
little
off-topic,
but
this
is
useful.
Which effectively means, and it's really convenient, the choice of words is not accidental: ready state. "Ready state infrastructure" aligns with what we'd call a ready state in Crowbar for a deployment. A deployment is annealed when it's done annealing; it's ready. As I talked to different people, it seemed like the concept of saying, all right,
Crowbar gets your infrastructure to a ready state, seemed very logical. So one of the things to discuss and think about at the summit is: can we use Crowbar to get OpenStack infrastructure to a ready state to install OpenStack on, and then, if the community really wants to, use Heat to do that OpenStack install on top of the ready infrastructure? I think that's actually very attainable.
A
A
So
if,
if
you
have
concerns
about
some
of
the
challenges
that
that
OpenStack
is
trying
to
face
by
taking
a
cloud
infrastructure,
provisioning,
an
orchestration
system,
a
cloud
infrastructure,
provisioning
system
and
AP
is
and
applying
it
to
a
different
problem
space
which
is
bare
metal,
you
know
I'm,
hoping
this
concept
is
useful
for
people
to
actually
identify
the
problem
and
because
I
believe
very
strongly.
They're
two
different
problem,
spaces
and
different
are
different
with
different
architectural
challenges.