From YouTube: OpenStack Austing 1108
Description
No description was provided for this meeting.
A: The general process for OpenStack — excuse me, I drink copious amounts beforehand — is that you have a project that gets debated, and there's generally a fairly well — I don't want to get too controversial, but there's a bit of a process to getting it brought into the core, into the fold. That was somewhat fast-tracked with Cinder, since it was actually already principally part of OpenStack: it was the nova-volume part of Nova. So really what was happening there was, let's break it out into a separate service.
A: Let's make Nova a little bit more modular. Sorry — tomorrow, I think, is today — and, by the way, by breaking it out there might be opportunities, I think, we wouldn't have had otherwise, had it been part of OpenStack Compute. So the incubation process, I guess, was somewhat accelerated: it went immediately, in the subsequent release — Folsom, which became available here in September — to a service you can even use today. Albeit, as part of that incubation process, nova-volume is not fully deprecated.
A: Oh, it's actually still present in Folsom; in some subsequent release it'll go away, and Cinder will be the one true way to do block storage. That's one of the possible turns on timelines. I guess it's not terribly compelling showing you a link, but maybe afterwards I'll point you to it. And yes, they did some pretty cool stuff there, with a visualization of commits, of submissions to OpenStack over time. I recommend it.
A: What does it do? Well, it does more or less what it sounds like. In fact, the name Cinder sounds like cinder blocks, sounds like a block storage service, to me. You know, probably the thing that consumes it most within OpenStack is — it's for providing a model of persistence for instances.
A: So if you're familiar with the day in the life of a VM in OpenStack: you know, someone goes out to Horizon, the dashboard, or maybe uses a command-line tool, or does it programmatically — doesn't matter — "just give me four instances, or ten, or whatever," selecting from a catalog of images, and those images are stored in Glance. Those images then get copied out — curl or something used to be used — copied to a cloud server somewhere, and from there instantiated.
A
That's,
that's
that's
great,
except
for
what,
if
I,
that
particular
cloud
server
goes
away
or
if
I
want
to
bring
that
back
up
again,
some
other
time,
there's
really
no
persistence
model,
that's
where
that's
where
Noah
volume
it
now
cinder
come
in!
You
want
to
keep
you
know
what
most
people
do
you
want
to
keep
application
storage?
A: To do useful work upon the next instance creation — well, for that you need, you know, a persistence model; that's what Cinder is there for. So, in a general sense, it provides for that. It can also be consumed independently: you know, if you have a particular application, you can programmatically obtain it. It's not simply for per-instance, per-guest usage. I don't know of a ton of usage for that, but there's no reason why you couldn't.
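As a mental model of that persistence story, here is a toy sketch — not OpenStack code, just a simulation of the point being made: the volume, and the data on it, outlive any one instance.

```python
# Toy model of the Cinder persistence story: volumes outlive instances.
# Purely illustrative -- all names here are made up, not real OpenStack code.

class Volume:
    def __init__(self, name, size_gb):
        self.name = name
        self.size_gb = size_gb
        self.data = None          # survives across attachments
        self.attached_to = None

class Instance:
    def __init__(self, name):
        self.name = name

def attach(volume, instance):
    volume.attached_to = instance.name

def detach(volume):
    volume.attached_to = None

# An instance writes data, goes away, and a later instance sees the data.
vol = Volume("app-data", size_gb=10)
vm1 = Instance("vm-1")
attach(vol, vm1)
vol.data = "application state"
detach(vol)
del vm1                           # the ephemeral instance disappears

vm2 = Instance("vm-2")            # the next instance creation
attach(vol, vm2)
print(vol.data)                   # the state persisted
```

The same volume can just as well be attached to nothing at all and consumed programmatically, which is the "independent" usage mentioned above.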
A: LVM-based snapshots — so lvcreate is actually what gets called. [Audience: but not a snapshot of a running instance? If it's running — yes, of the data store, the persistent data store part, but not the host operating system?] Well, actually, yes, there is: there is a mechanism, separate from Cinder, that allows you to create such snapshots, and then there's also a completely separate notion of bootable volumes, which we haven't gotten to, where you can do, you know, coordinated in-memory state along with the persistent bit of data storage.
A: I mentioned nova-volume — I think I've already primarily described what it does. So, in moving to Cinder for the Folsom release, the primary goal was just to break it out, and there's a fair amount of heavy lifting associated with that activity — you know, sort of the surgical activity of extracting something can be somewhat difficult. That's, in fact, in part why the Project Policy Board decided to keep nova-volume around for a while — for another release.
A: What you got in Folsom was the existence of Cinder broken out from Nova — you know, that occupied the attention of everyone involved. So with Grizzly, now there's an opportunity to start iterating and do new or cool things. In fact, some of those things are the ability to do a sort of lightweight backup. I should point out that these were discussions at the design summit — actually, the etherpad URL is located there for the other new features discussed amongst the community at the summit for Cinder.
A: You know, the ability to actually do some form of volume backup from, like, one Cinder instance to another — this one is a little gray in terms of the specific mechanisms for doing it. Resizing is fairly clear: this is specifically a capacity activity. Given my volumes — essentially LUNs, or block devices — of, say, one terabyte, in this context I want it to be two, you know, so make the resize activity occur.
A: That requires coordination with whatever host OS you're interacting with; some of them don't do automagic recognition of additional capacity. But multiple backends is a fairly hot topic amongst those who are involved in the Cinder community. So today, if you want multiple backends — let's say you had...
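For context, multi-backend support did land in the Grizzly timeframe, and in cinder.conf it eventually took roughly this shape — the section names and driver paths below are illustrative, not prescriptive; consult the configuration reference for the actual options of a given release:

```ini
[DEFAULT]
# each name refers to a backend section defined below
enabled_backends = lvm-1,netapp-1

[lvm-1]
volume_backend_name = LVM_iSCSI
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver

[netapp-1]
volume_backend_name = NetApp
# driver path is illustrative; check the driver documentation for your release
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
```

Each section then gets scheduled against independently, which is what makes the "type gold versus type bronze" catalog idea discussed later workable.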
A: It's something that's under discussion in the community — whether that'll get taken up — but I believe it will: some basic support for allowing metering and billing systems to plug in, for, you know, chargeback — "hey, I want to make money off of this deployment, or at least not lose it," something to that effect. And there is a notion, I hinted at it earlier, of bootable volumes.
A: Glance is the authority for all images, but it doesn't know the first thing about bootable volumes. So if Glance is to retain its relevance in the world — and I think we generally want it to — it needs to know there are bootable volumes out along the edge, at the actual cloud servers. So there's quite a lot of discussion on how that will look. It probably means a fair amount of modification to Glance itself — not without controversy — so we'll see how that develops.
A: What I was alluding to there is the controversy — there's a bit of that. You know, Nebula, for example — I can't go into a ton of detail there — they seem a little bit more supportive; I believe, actually, the Glance PTL is at Nebula. He's more supportive of a notion where Glance itself just independently has the ability to fire off instances based off of bootable volumes. I'm not sure, from NetApp's perspective, we're clear, but we'll do what the community wants. And then: secure attachment.
A: This is an interesting topic — and whenever you talk about security, it's both essential and also, well, kind of hard to do; at least that's my view of it. But there are some thought processes — not yet realized in the world, not that I'm aware of today — you know, when something like a VM escape becomes more plausible, that has some real dire consequences for a public cloud, for shared infrastructure. So can you segregate — or what can you do to segregate — block storage?
A: You know, and there are a variety of things: there's network segregation, but there's no reason not to account for it also at the storage layer — there are great reasons to do it there. So how do you do secure attachment? How do you prevent a particular instance — where somebody has done an escape and they've got access to maybe some component of OpenStack — from doing nefarious things?
A: This is pretty much still at the discussion stage as to what this will look like, but basically: more secure attachment between an instance and Cinder. There's lightweight support for things like CHAP today — that's not really what I'm referring to, but rather more of an authentication between the instance and the volume it's talking to, from a logical perspective. So...
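(For reference, the "lightweight support for things like CHAP" mentioned here surfaces as per-backend options in cinder.conf. The exact option names vary by driver and release, so treat this fragment as illustrative only:)

```ini
[lvm-1]
# illustrative backend section; CHAP option names differ between drivers
use_chap_auth = true
chap_username = openstack-initiator
chap_password = not-a-real-secret
```

This authenticates the iSCSI initiator to the target; it is weaker than the instance-to-volume authentication the speaker is describing as the longer-term goal.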
A: This is more for, sort of, the folks coming new to it: if you're interested in getting involved, I will say that Cinder certainly can use additional resources. If you're looking to get started, we'd be happy to help you out. I should point out there's a gentleman named John Griffith, who's the PTL — I talked about that a bit earlier — so I don't want to misrepresent who's doing the most active work, or that we represent...
A: ...the leadership of the project presently. I would definitely encourage you — we'd pitch in and help in whatever way in getting you into the community. There's a ton we want to do, and now that we've gotten past the kind of more boring business of extracting it, now we get to do feature stuff. So, a bit about what NetApp's doing — you know, I talked about the various... oh well, that's not remotely readable [looking at slide]. Well, there's lots of drivers, I mentioned; there's lots of backends.
A: So let me just talk briefly about what we did in the Essex timeframe. So now, just briefly: we have one storage operating system that functions on all of our storage controllers — albeit there are two modes: more of a classic, historical mode, and then there's a newer, what we call clustered, mode. And don't read too much into the naming — I mean, it's been clustered from an availability perspective for many, many, many years; I'm referring to, like, creating clusters of clusters.
A: You know — providing everything behind, like, a single namespace; the ability to do, you know, continuous business operations without impacting user I/O. It basically virtualizes everything: the network interfaces, the disks, you know, the actual logical storage entity itself — because that's what I'm referring to with regard to clustered mode. This initial contribution in Essex was more about the classic mode of operation. The basics are that, you know, we provide a backend, or a provider — some people call it a driver.
A: I think that's not the best name for it — that connects to our systems. But one of the themes for us was: well, people don't buy NetApp systems, typically, to be commodity block storage devices. Frankly — to be honest — there's a bit of a price disparity between a commodity block storage device and what we're offering, so there's a variety of capabilities that you want to get at. OpenStack itself doesn't have this nuanced concept of some of these things.
A: OpenStack doesn't know what a snapshot or a clone is, but that's it — so we wanted to avail those in the OpenStack context. So what we ended up doing — I'll actually just jump ahead; sorry for blinding you [slide change] — this is just kind of a use case. You know, the whole "gimme some number of instances"; they're interacting with Cinder, and then we talk to sort of an intermediary provisioning service, a policy engine, so that when you say, "hey, I want four terabytes of type gold" — that type is critical.
A: ...from an availability perspective. Or maybe it's commodity — do with it what you will, care very little for it. It doesn't matter: red, blue, green; gold, silver, bronze — however you want to define it. But with each of those you get to define the characteristics I was talking about: is it thin provisioned? Is it highly performant? Is it automatically replicated? Replication is not a small thing — OpenStack technically doesn't have a notion of this — you know, how do I do site failover?
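That "type gold" idea is essentially capability matching, and it can be sketched in a few lines of Python. The capability names and backends below are invented for illustration; this is not Cinder's scheduler, just the shape of the decision it makes.

```python
# Toy "service catalog" scheduler: pick a backend whose advertised
# capabilities satisfy everything the requested volume type demands.
# All capability and backend names are made up for illustration.

CATALOG = {
    "gold":   {"thin_provisioned": True, "high_performance": True,
               "replicated": True},
    "bronze": {"thin_provisioned": True, "high_performance": False,
               "replicated": False},
}

BACKENDS = [
    {"name": "array-a", "thin_provisioned": True,
     "high_performance": True, "replicated": True},
    {"name": "array-b", "thin_provisioned": True,
     "high_performance": False, "replicated": False},
]

def schedule(volume_type):
    """Return the first backend matching every requested capability."""
    wanted = CATALOG[volume_type]
    for backend in BACKENDS:
        if all(backend.get(key) == value for key, value in wanted.items()):
            return backend["name"]
    raise LookupError(f"no backend satisfies type {volume_type!r}")

print(schedule("gold"))    # array-a
print(schedule("bronze"))  # array-b
```

In real deployments the "advertised capabilities" side of this is what volume type extra specs (mentioned later in the talk) carry.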
A: Now, at some level, if you've written apps against the cloud API, maybe that can be handled for you — but good luck handling it, you know, in the infrastructure, because of the service boundaries. So that notion of providing a service catalog that avails those values and capabilities is what we were trying to attack. So our provisioning intermediary — a piece we call OnCommand — does intelligent provisioning, choosing among our systems based on whatever policy you state, which might be "least utilized" from a performance perspective.
A: Or it might be capacity-based — however you want to define them. And the other thing we did is we took those LVM-based snapshots and clones, intercepted that call, redirected it to the backend array, and used our native snapshots and clones, which don't decay in performance over time. Not that I'd expect anybody in the room to have this global knowledge of NetApp specifically, but, you know, we typically...
A: So I mentioned the other mode of operation, which is really, essentially, the future — it's where all of our future development is delivered — this clustered mode. It's been around for a little while now, but it made sense to cater to the larger install base first, and now, in Folsom, we've provided those same capabilities I just described for our future-proof, if you will, version of our storage operating system.
A: The other thing I want to point out — and it's not necessarily a time difference: most of what you see there that is different is a curiosity of the way the clustered ONTAP capability works. You can do cool things — I think somewhat cool things — like organically adding or removing systems; you know, maintenance events without negotiated downtime. User I/O doesn't suffer because of the behavior of the individual nodes; they can be brought online and offline, same thing for shelves. Obviously there's some coordination of movement of the actual data.
A: Okay — the other thing we did is we provided two drivers: one which is just a reference driver that anybody can use with any generally available NFS server, and then we also coded a version of that which does additive things for NetApp systems. But why an NFS driver? I just described Cinder as a block storage service — it is, it is only that presently. What it actually does is mount an NFS export to the hypervisor and create files that are presented as virtual block devices.
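The file-as-virtual-block-device trick is easy to picture with a toy sketch — a plain local directory standing in for the NFS mount, and a sparse file standing in for the volume. This is illustrative only; the real driver's naming and on-disk layout differ.

```python
# Mimic the NFS driver's trick: a "volume" is just a sparse file of the
# requested size sitting on (what would really be) an NFS mount point.
import os
import tempfile

GB = 1024 ** 3

def create_volume_file(mount_point, name, size_gb):
    """Create a sparse file that a hypervisor could expose as a block device."""
    path = os.path.join(mount_point, name)
    with open(path, "wb") as f:
        f.truncate(size_gb * GB)   # sparse: no data blocks written yet
    return path

mount_point = tempfile.mkdtemp()   # stand-in for the real NFS mount
path = create_volume_file(mount_point, "volume-0001", 2)
print(os.path.getsize(path))       # the guest would see a 2 GiB device
```

Because the file is sparse, "creating" a multi-gigabyte volume writes essentially nothing, which is part of why files scale into the counts the talk mentions next.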
A: This is more scalable than providing it as, you know, an iSCSI LUN, because eventually — and it doesn't matter who you are in terms of what's out there in the market today — you're going to run out of initiators and LUNs; those are finite. But files? Well, we're talking about the billions range. So, you know, when you think of OpenStack's design center, which is, you know, web-scale, hyperscale, it was important for us to deliver a mechanism to actually do that with a backend.
A: In this case we get it through NFS. There's also some cool stuff — NFS hasn't sat still. Most people think of it as something time-worn. It is, but NFS is at v4 now, and v4.1 before too long. There's pNFS — parallel NFS — which does really cool stuff like directing I/O to the nearest neighbor; it does delegations — things have evolved. It also has a far better security model; that's been true since NFSv4, whereas the...
A: Yeah, we created them; we contributed them all. What I'm talking about is in OpenStack proper; it's not separately available from that. What it does is basically mount an NFS export at the hypervisor, at the cloud server — that's what the Cinder driver does. And when you go to create a new Cinder volume, all you as a user see is that a volume comes to you, but what ends up happening is, instead of it going out and creating a LUN and exporting one from a block storage device somewhere...
A: Not necessarily. However, in Folsom — because of the Project Policy Board's (now known as the Technical Committee) decision to retain nova-volume — there was a decision that anything that was a driver or a provider submitted for Cinder must also have an analogous driver for nova-volume, and so that is true in Folsom. So...
A: ...wouldn't make sense. No — there's a whole other notion of folsom-stable; certainly if there are folks who submit bug fixes and such, I would say that would get accepted. So, briefly, what you were referring to: do I want NFS — or, feel like maybe I want coordinated shared file systems? Not NFS alone, but shared file systems between instances — shared file systems as a service, if you will. So we prototyped some code and blueprints, and there's a spec up on Launchpad.
A: The blueprints are on Launchpad; on the wiki is the functional spec. And we actually demonstrated it, and had a session during the design summit, to expose shared file system support through Cinder. So the ambition is to make Cinder not only a block storage construct. And if you look at the code — what we'd require to create a separate service to do what we're referring to — it's about ninety, ninety-five percent common. We're really still talking about capacity: where do you get it from? Where do you mount it to? There is more to it, but...
A: The point is that this stuff that we prototyped — and we've shown it to the community — will be ready for submission here in the next week. Well, it's actually ready for tomorrow, but I said, well, we'll wait till early next week, because this is a community, and just because we showed up with the code — usually code wins, but there are also folks who might not want to see it done this way. There are a lot of folks on the Cinder core — for good reasons, for them — who want to see it remain...
A: ...a block-storage-only construct. That's kind of their interest in the world, commercially. And we — for obvious reasons, we're a company that does well with file systems, so strategically it's important to us. But also, from the operators — many of you, and a couple of folks — have said, "we would like you to act in this capacity." So it remains to be seen whether this will be taken. At one point...
A: The alternative is it becomes a separate service and then has to go through the whole incubation process, and we're prepared to do that if need be. We just think that would be unfortunate, because it would take a lot longer — it would be painful. What would end up happening, because there is so much commonality...
A: ...is — like I said, there's so much commonality — you'd end up with a separate incubated project, a second service, and then the bulk of Cinder would get ripped out into something common, because they'd become two separate products, which is nonsensical. So, yeah, we've got something that's actually available — I meant to put the URL in here; it's available on GitHub now to look at, it's just not submitted yet. It will be shortly.
A
The
the
other
thing
I
talked
about
when
you
actually
provide
the
block,
storage
devices
or
the
block
view
the
virtual
block
devices,
if
you
will
on
NFS,
there's
some
interesting
things
that
we
that
we
were
working
on
to
support
google
volumes.
So
if
it's
the
case
that
you
have
glance
which
is
providing
images
onto
shared
files
and
that's
one
of
the
options
besides
object
store
that.
A: ...before object storage. Why would it be the case that — if you can actually then treat that as a bootable volume — why would you copy it local if, instead, what you could do is immediately clone it and have it available back that fast? You didn't have to incur the expense, or the resource contention, associated with the actual movement activity, and it's also more storage efficient. I ask that somewhat rhetorically; in fact there are some good reasons either way — we're just looking at providing options. So...
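The clone-instead-of-copy argument is easy to demonstrate: a copy moves every byte, while a clone just adds a reference. In the toy below a hard link stands in for a storage-side clone — a real clone is copy-on-write, which a hard link is not, so this only illustrates the zero-data-movement part.

```python
# Copy vs. clone: copying an image moves all of its bytes; a clone (faked
# here with a hard link) yields a usable reference without moving data.
import os
import shutil
import tempfile

d = tempfile.mkdtemp()
image = os.path.join(d, "image.qcow2")
with open(image, "wb") as f:
    f.write(b"\x00" * (8 * 1024 * 1024))   # an 8 MiB stand-in "Glance image"

copy = os.path.join(d, "copied-root-disk")
shutil.copyfile(image, copy)               # full data movement

clone = os.path.join(d, "cloned-root-disk")
os.link(image, clone)                      # no data movement at all

# The clone shares the image's blocks; the copy duplicated them.
print(os.stat(clone).st_ino == os.stat(image).st_ino)
print(os.path.getsize(copy) == os.path.getsize(clone))
```

On a filer (or a file system with reflinks) the clone is also writable without disturbing the source, which is what makes it usable as an instance's root disk.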
A: ...one of the folks I mentioned earlier, to pull it off. Then also, on the other end — no matter what — from the NetApp device we've got, you know, file access — NFS; block — iSCSI, FCoE, Fibre Channel. But we're also rolling out object storage support built into the same storage operating system; there are a couple of different products there that will be supporting the Swift API in front of it. You know, strategically, we've done a lot of work in the CDMI space, and I think the direction is to say: let the market decide.
A
If
swift,
swift
by
you,
you
know
ubiquity.
If
you
will
becomes
the
standard,
alterable
open,
alternative
test.
Three
so
be
it
SED
mi.
Is
that
it
cool
to
support
book.
In
fact,
actually
our
initial
support
for
swift
API
will
come
in
the
form
of
a
swift,
API
proxy
modification
to
talk
to
CD
mine,
so
Apple.
If
that's
that's
it
kind
of
a
nut.
Shell
starts
to
mount
cold,
but
a
little
bit
about
center,
a
little
bit
about.
A: No, it's not an API extension. We use volume type — that's what I was referring to — which is part of, you know, a subset of the Nova API and the Cinder API. We use that to map to those catalog entries I was referring to. In the future we may end up using something different; volume type has not been exhaustively typed or defined, I would point out. There is something separate, called volume type extra specs, I think is what it's called, where folks are supposed to actually advertise, per provider...
A: ...what are the cool things you can do, and then a more intelligent Cinder scheduler can make the decision for you. So what we've essentially done on the backend is implement our own scheduler, because the Cinder scheduler can't do what we need yet. When the Cinder scheduler does that, then fine — we'll use that instead.
A: ...you know, migrations. So, you know, you do a check-in, and there's a bunch of automated tests that occur — you know, is it pep8 compatible, "hey, your code looks good," that type of thing — but then there are also other, more rigorous things that happen as well. But, to the best of my knowledge, there's no provision currently [for testing against physical hardware] — and I'm sure there are others who are looking at this; I'm just not aware of their progress on that, actually.
A: What is the behavior of it in moving from one version to another — what happens with the schema changes, things like that? It's a hard problem to solve, but I think, ultimately, the only real way to solve it — unless you own a distribution and are assessing and solving the problem for people — is for the continuous integration process in OpenStack proper to assess it. That's probably the answer.
C: ...and, you know, you said "drivers" — I agree, "drivers" is a mixed word; we've used "plugin" some internally, and "plugins" is no better, in my opinion. But, you know, how do we deal with the idea of doing code reviews and checks against physical systems in the community? What do you think it requires, if somebody was to write a Cinder adapter for a physical system — yeah, for our new...
F
You
are
you
Veronica
Roth.
We
actually
have
a
heterogeneous
environment
where
we
use
down,
use
quantum
micro
or
we
use
you,
know
app,
and
so
we're
actually
building
our
our
test.
Environments
just
run
all
those
pieces
of
hardware
and
we
test
against.
All
of
it
doesn't
make
sure
that
you
know
that's.
C
We've
been
worried
about,
you
know
the
question
of
somebody
saying
all
right:
I
just
wrote
a
Eureka
logic
driver
for
cinder
and
I.
Don't
I
can't
take
the
pole
because
I
can't
test
it
here,
which
is
a
very
real
comment.
It's
a
real
web
service,
but
it
has
somebody
else
tested
it
do
it
so
does
it
need
to
be
in
core?
You
know,
if
you
does,
it
really
need
to
be
part
of
the
OpenStack
code.
A: So the problem, though, is that, as a vendor who's created integrations, we want to be as broadly available as possible, so the only way to really make that happen is to be upstream. So if you create a distribution, we're there — unless you've excised us in some editorial decision, which is their right. So that means that you still have to solve it...
C: But the same thing is true of the community doing a push, and that push getting accepted, and the vendor then trying to support it when they didn't actually authorize that push. I suspect that, over time, we're going to have to have vendor-supported, vendor-downloadable drivers, because even if they're the ones in the community, a vendor doesn't want to support something they haven't certified.
C: In that case it does, but right now it doesn't — anybody could push a change upstream, right? Yeah, I mean, it's going to have to be checked and gated, but it doesn't necessarily have to get tested against physical hardware, and — I mean, I don't have the clear answer for it; I think it's a little tricky. And the same thing is true about people who show up late, right? If you miss the feature freeze, you know, will they take your driver into the project?
C: ...which I think — I mean, these are all — I think Cinder's doing a really good job of bringing up these issues, and we're going to have to resolve them. Just like, you know, you're doing a good job — and you're not the only one — driving the API-versus-implementation question, because you're talking about using Swift's API with a different implementation, yeah.
D: It's — it's my little [inaudible].
C: Okay, so, yeah, I think there's a couple of things it would be fun to do: just go round-table about impressions of the summit — maybe, you know, the most impressive session; we can do that, and if we run out of topics we can talk about the most controversial, and the most troubling thing that you saw. But then I think it would be worthwhile, for the people who didn't go — you know, go around, let people ask questions.
F: So — I'm sorry — there was a security session I went to that I really liked. They were talking about various technologies for at-rest encryption of everything — and I can't remember everything about it; they were building on their own encryption technology — and that was pretty important for me, because we're actually working on things like HIPAA and PCI compliance, and there are questions around at-rest encryption. We'd been like, "tenants, that's your problem — basically, encrypt the data, and we're not going to deal with it."
F: So we were like, oh — there's a whole other story about automatically encrypting everything — like, well, is that a good idea? Their technology was such that they keep keys securely locked away on a service, and you can, you know, request the key, and that can unlock the data, and they...
F: Thank you, yeah — they were at-rest. And yeah, so that was pretty cool to me, because they nicely solved a lot of problems around that — you know, encrypting only part of it, keeping the keys locked away: if we lose this machine, I still can't do anything with it.
A: There's an interesting... so you make that an attribute — oh, right, how long do you get to live again. But can I just suggest: I think the real hard problem isn't at-rest encryption — because most any backend provider can do that — it's actually in-flight encryption, and the management of the keys, and I think that's the thing...
A: ...with a transport — but, like, the whole coordinated, you know, security: tenant-owned encryption in transit, between endpoints, with a single set of keys, or maybe a couple of them, signed by a master authority, where the public cloud provider knows nothing — it's entirely opaque. You know, that's kind of the vision, but the implementation... do you think Quantum and...
F: Yeah, that's right — you've got a file sitting there: oh, if I want to do it on the server, I just back it up and ship it over somewhere. As you said, security really does start from the very beginning, right — like TPM: am I actually running what I'm being told I'm running — through the network, to storage. It touches so many layers; I mean, if you really wanted to, like, cause havoc inside of a cloud... any access outside of your VM... to know that you, as...
F
As
son
of
Ars
Technica
researchers
utah
we're
recycling,
ssl
keys,
like
they
were
on
the
same
fiscal
supposed,
yes,
two
different
beams
and
they
were
able
to
capture
a
496
bit
key
from
the
other
hoes.
They
were
using
through
like
system
entropy
and
like
all
this,
like
wacky
stuff,
they
were
actually
able
to
capture,
but
it
requires
you,
be
on
the
same
host
run.
F: Apparently they supplied information on how you figure out which host you're on and jump to it. You know, so basically you're, like, trying — "this endpoint right here: which host inside of Amazon is it on? Am I on the right one? No — now let's go and make a new one on demand," and, like...
F: N plus 3, exactly. I'll fill in a bit about — I'm actually a big fan of ZeroMQ — yeah, I thought you'd think so — for many applications I think it's superior to RabbitMQ, because I've done a fair chunk of RabbitMQ administration in the past. And somebody — actually the Cloudscaling people, I believe — have done something fairly interesting. So, you know, for the uninitiated, RabbitMQ...
F: People have actually implemented a mechanism to replace RabbitMQ with a ZeroMQ environment. Basically — it's not that ZeroMQ has, you know, reliable messaging or anything like that — the way it works is, you basically push messages into co-located ZeroMQ queue servers, and you lift the messages back off in a similar fashion.
F: I found that fairly interesting. Because — right, and that's the question I've been trying to get answered for about a year... is it the same thing? I think another group of people also just recently implemented — and I know they published it — a full compute API on... that was, that was then.
F: Also — I don't know, did anybody sit through the presentation from — there were like three entities, and they're all talking up there — NTT DoCoMo? They basically scale — nope, it's OpenStack scaling itself, using OpenStack — anybody? — like a self-scaling OpenStack environment, where they basically had a mechanism where they introduce new nodes...
F: ...that were essentially bare-metal servers. So you actually had an opportunity in the OpenStack dashboard: you could go and, you know, pick — instead of picking your x1.xlarge or x1.small, or whatever it was, to roll out your instances — you would actually pick your baremetal.large or baremetal.small, and what would happen behind the scenes is that an actual bare-metal system would basically be provisioned and booted, and would become a compute node to place instances on.
F: They tracked physical CPUs, and then there were virtual CPUs, over time — they actually used Zabbix to do so — and over here, basically going, you know, go and roll out a new VM, roll out another system, roll out another, and all the while the physical CPU count stayed flat — well, as soon as that line crossed — you had more virtual CPUs than physical CPUs — a new node would come in.
C
The
fact
that
there's
five
different
proposals
that
are
not
misled,
reconcilable
I
think
it's
an
indication
of
challenge
that
in
the
interest
and
the
interest,
there's
none
others
interest
I
do
barrel,
provisioning
in
its
messy.
So
yes
be
interesting
to
see
how
that
emergency
other
other
people
who
were
there.
She
looks
like
she's
ready
to
go
to
the
game
as.
A: I'm just suggesting there are a lot of folks at certain companies who are going to be compelled to interact with their customers, who thus cannot actually participate in the community discussions. I think it makes some sense to separate the design summit and the conference back out. At the same time, I would like to see a lot more representation of operators on the design side. The actual...
C
Offer
is
accepted
operator
right.
Well,
I
mean
we
actually
did
it.
I
was
spent.
You
met
number
two
co-chairs
on
the
cops
track,
yeah,
which
I
think
which
I
I
was
really
happy
with
the
way
it
turned
out.
The
last
thing
we
had
in
the
tractors
panel
has
been
a
highlight.
Was
we
did
an
ops
panel
that
respected
we
got
Monty
and
Jesse
on
module
is
supposed
to
moderate
jessie
andrews
who's,
the
originator
devstack,
and
then
we
had
met
ray
and
Joseph
puppet
chef
representation
that
Gigi.
C
But you mess us up when you do this, and we actually had a really good discussion between the four of them about the balance between operations and devs and things that we could do for that. That was a really good question-and-answer period, it was really good. I should have the link for the recording of it, because the topics were pretty freeform.
A
Part of the reason why I mention that, particularly the design summit portion, is that you've got a room full of developers, and that's not going to change, who are essentially making decisions on the direction of something that will be hoisted upon operators. In theory it's all in one location, and what we need to get out of there is an opportunity for operators to have some voice, right?
F
They had one of their guys there, and he gave a presentation on kind of an internal project, kind of a case study, where they showed what they had to do before Nicira, as I understood it. They talked about this whole notion of: well, before we had implemented the system, we know it took us a week or two to actually set up an environment to give that much to a customer, la la la, and then they basically implemented the system using Nicira.
F
You know, with Quantum and all of OpenStack and stuff like that. It originally showed a complex configuration with multiple VLANs and kind of all kinds of stuff, boiled down to just pushing a button, and he showed basically how it's all automated. I think that's really amazing, especially with, you know, the coming support for all the backend drivers and stuff like that. It will be very interesting. The other session.
B
That I found interesting, for me, was an intro to development; the name was "Surviving your first check-in", and the guy walks you through, step by step, the things you need to do to actually be able to do your first check-in. It's really good. Yes.
E
On the demo stuff, I'm really excited to see VMware fixing, for lack of a better term, libvirt support for VMware, so with that you'll sort of be able to get some of the more advanced features of ESX through libvirt. I think that'll be something; I mean, that's not just going to be good for OpenStack, it'll be good for Linux in general, having that kind of support, considering how problematic it's been.
E
Yeah, yeah, that came out, and you know, the session, 45 minutes or whatever it may be, talked about their own investment from different angles. Basically, you know, a lot of people were concerned about this: now they've bought Nicira, are they going to try to quash things, or invest in it, or whatnot? The overwhelming theme, not only there but for the entire week, was that everybody was hiring and trying to scale up their efforts to really go forward with this stuff.
E
Yeah, we're doing our part to get the economy moving again. So what is ESX doing with OpenStack? Well, right now there's limited support for VMware in libvirt, so you can do things like just starting and stopping your VMs, and that's about it. So for them to be able to get incorporated things like, you know, your vMotion capabilities and some of the more advanced features that you get in ESX, they're going to be in charge of getting those written and then exposed through libvirt.
A
It prohibits you from using other management tools on top of it, but yeah. So I'm very curious what's meant by that. Most of the folks, I should say a good portion, a very measurable portion, of the folks that we talk to about their OpenStack ambitions mention VMware licensing as a reason, one of the reasons, why they're investigating OpenStack in the first place. Very interesting you mention it.
E
Any time. Well, the people we get are typically the smaller people, who tend to be that way because they don't necessarily have the resources. When you get into the really large enterprises, where we spend a lot of our time, you know, VMware's everywhere, and it's in the market in many places.
E
Just because it's like the industry standard, everybody is on it, and so a lot of them are interested in it, and I do see it, potentially, as a gateway for OpenStack. Down the road, maybe that won't matter as much eventually, but I do know the conversations that I had around OpenStack plus vSphere, it's been: ok, so what do you get then?
F
Worse, right? I don't know, and the thing that I heard there was that they were adding, not taking away. That makes total sense, right? They want to keep vSphere in place, so adding support for vSphere in OpenStack makes sense. So the idea is that, you know, Nova compute, the thing that goes and provisions your VMs, would basically drive it. It's going to be ugly.
E
Actually, there's limited VMware support, there's Hyper-V, so there's lots of different things right there. The point is how you can script out your interactions with a VM using libvirt, and then, theoretically, those scripts will function regardless of what the hypervisor is underneath, which is how Nova does a lot of its work interacting with VMs: to be able to abstract how we can support multiple hypervisors without going crazy.
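The hypervisor abstraction described here, one set of calls that works whatever sits underneath, is roughly the spirit of Nova's virt driver layer. A minimal sketch follows; the class and method names are illustrative, not Nova's actual interface:

```python
# Illustrative sketch of a hypervisor-agnostic driver layer: callers script
# against one interface, and the concrete driver (libvirt/KVM, VMware,
# Hyper-V, ...) supplies the hypervisor-specific calls underneath.

class HypervisorDriver:
    def spawn(self, name: str) -> str:
        raise NotImplementedError

    def destroy(self, name: str) -> str:
        raise NotImplementedError

class LibvirtDriver(HypervisorDriver):
    def spawn(self, name):
        return f"libvirt: defined and started domain {name}"

    def destroy(self, name):
        return f"libvirt: destroyed domain {name}"

class VMwareDriver(HypervisorDriver):
    # Early VMware support was limited to basics like start/stop,
    # as noted in the discussion above.
    def spawn(self, name):
        return f"vSphere API: powered on VM {name}"

    def destroy(self, name):
        return f"vSphere API: powered off VM {name}"

DRIVERS = {"libvirt": LibvirtDriver, "vmware": VMwareDriver}

def boot_instance(hypervisor: str, name: str) -> str:
    """Same call for the caller regardless of the configured hypervisor."""
    return DRIVERS[hypervisor]().spawn(name)
```

Swapping the configured driver changes the backend without touching the calling script, which is the property the speaker is pointing at.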
E
It got picked back up again; I can't remember if it was taken up by Microsoft or somebody else. So I mean, you're not going to get it all, like, just because it's Windows, a lot of the stuff doesn't translate. I'm sure there's some sort of wrapper in that case, where they're simply rewriting commands over to some API in System Center or whatever.
A
Anyway, that's my reading of it. And you're thinking it prevents this utility from making, like, vSphere calls? Is that it? Yeah, I believe, I think that's right, but that, or anything that broadly amounts to a value-add management tool for coordinating things like vMotion, for example, was prohibited unless it's branded kit or similarly vendored software.
A
That's ok. But so, it's eight o'clock, right? Thank you all very much for showing up. We're going to have Opscode here next time around; they're in as a sponsor. Yes, so if I'm going to guess, I'm going to guess that they're probably going to talk about Chef for OpenStack and what they're doing.