From YouTube: OpenStack Austin 2013-07-11
It'll be an open bar all day if enough people sign up. It's a single-track conference of a hundred people, so it's a nice, tiny, intimate thing. They have great speakers and they've been doing it for three years. As we were planning this conference, the people that organized it said, hey, we should do an OpenStack one. So if you're interested, check it out.
We've got a cool lineup. I'm not going to take much of the time, because what Vish is going to be talking about might be relevant to everyone in the room that's already got OpenStack up and running: he's going to talk about performance tuning, sort of the operational aspects of the system.
So, what am I going to talk to you about? Some of you might have heard little pieces of this. I had a glass of wine, so I'll be a little bit more forthcoming with details. This crazy project, right, and kind of what happened, how it kind of arose. It started three years ago: in July 2010 we got on stage and let loose OpenStack. But there were a few years before that when our team at NASA was banging on Eucalyptus and working on stuff that actually became Nova, so I'll talk a little bit about that, and maybe we can riff on it through the presentation. So, I promise, this just works.
Back when we were trying to figure out how to take a whole bunch of data from NASA and serve it up to a platform called WorldWide Telescope, we had just worked with Google; we had put Google Moon and Google Mars together, and Google had this really interesting philosophy where all their engineers had as much computing and storage capacity as they could use. When we started working with Microsoft, it was exactly the opposite: there were teams there that had to fight, beg, and steal for resources when they wanted them.
In fact, the group we were working with, which was in Microsoft Research, literally had to borrow resources from the Bing Maps team, or whatever MSN's maps team was called at the time, whoever had access, to make this thing work. It was really challenging for us to do what we needed to do with Microsoft, and it really contrasted with what we were doing on the Google Moon and Google Mars projects. And so what we did is we decided to build our own.
Building a new facility at NASA was like a 25-year budgeting process. It turned out that the big thing in the background of that photo is a wind tunnel that used massive amounts of electricity, and the thing behind that is a big power substation. So when I was at NASA Ames, we had hundreds of megawatts of excess power. And NASA Ames was something like the sixth node on the internet, really significant in that early period, and we had lots of land.
So I brought a bunch of people on, the team that became the core of the Nova project: Josh McKenty, Devin Carlen, Vish Ishaya, Jesse Andrews. All of these guys were critical, significant contributors to a project like this, and people like Todd Willey as well.
My role was really to protect them from the bureaucracy. We had a room with a locked door; we really tried to isolate them, give them network connectivity, and basically keep the bureaucrats out of their hair, and have fun, of course. And they were kind of busy toiling away, building this thing that became Nova.
What ended up happening was, when we started talking about the work we were doing in Washington, DC, this guy named Vivek Kundra, who became the first CIO of the United States of America, got really excited about what we were doing. This was really convenient, because a little part I left out was that we were being investigated by Congress for spending money on this project. It turns out you can't put people in shipping containers, and NASA was like a jobs program, right? And one of the Senators that had put a bunch of money into NASA
A
We've
really
getting
concerned
about
this
trend
that
if
we
put
all
the
infrastructure
and
shipping
containers
out
of
Ames,
you
wouldn't
be
able
to
send
a
bunch
of
jobs
to
Alabama
and
Florida
and
places
where
the
big
data
centers
could
employ
thousands
of
people.
So
this
guy
was
actually
really
instrumental
in
protecting
us.
Few
weeks
after
he
left
on
and
launched
the
cloud
computing
strategy,
the
US
government
got
another
call
the
White
House
and
turned
out
that
there
was
this
really
important
application
he's
going
to
factory
to
the
new.
The application was USAspending.gov. This was an application designed before cloud, and a lot of you know what that means: it was designed with an architecture that did not really scale out very well, and the requirements on the storage subsystem were kind of ridiculous. So we worked tremendously hard to fit this round peg into a square hole, which was making the USAspending.gov site work. And why? Because when Barack Obama was a Senator in Illinois, he got two pieces of legislation through Congress, and this was the first one.
A
It
was
called
the
Obama
covert
act,
and
so
long
was
actually
personally
interested
in
seeing
this.
They
work
in
the
White
House,
with
this
big
red
team,
together
and
here
on
saturdays
and
sundays,
for
a
six-month
period,
and
this
focus
on
this
project
is
really
what
elevated
this
work
that
was
going
on
at
NASA
and
enabled
NASA
to
sustain
its
investment
in
the
project
for
the
period
of
time.
It did. And of course, when these guys left to Rackspace, and when we left to Nebula, things became a little less organized. But that's actually the President outside the Oval Office, using the IT dashboard, which was running in that shipping container. The point is that all of this investment led to this.
The team spent the night before OpenStack was announced basically adjusting copyrights and making it so that this was a NASA project. It was a harrowing experience, but it shows just how difficult it was to get a big organization like NASA to do stuff, and it shows the perseverance of the team: everybody committed to working together, and this thing came together as an open source project.
So after we open sourced everything and OpenStack was unveiled, a month or two later Swift was added, and we had the whole compute and storage project. And then something interesting happened, something that I could never have predicted. When I was at NASA, my justification for doing this whole thing was: we're in the space exploration business; we have no business funding and sustaining an infrastructure project to build cloud computing infrastructure.
When we talked to Jim Curry and the guys at Rackspace, they were like: we're in the fanatical support business, right? You've got a big team of people in San Antonio that run big data centers to compete with Amazon. We want to open source everything and get a community of developers working on this. And so, for very different reasons, we both had the exact same strategy, which was: our organizations don't want to maintain this thing forever.
You want to open source it and build a great community, and the size of this community, I think, surprised everyone. If you look at the stats, and you guys have heard these so I won't reiterate them, we've had a tremendous number of companies and individuals from different countries all over the world. There's a heat map in here; it's amazing. So I basically took a bunch of different stats.
This is jobs mentioning OpenStack as a function of time, versus Amazon Web Services and Eucalyptus. The orange line is OpenStack, so OpenStack as a job skill has exploded: 3,250 percent growth since January, and it's just gone up. This is Google's interest over time in different keywords, where blue is OpenStack, and this is a heat map of websites whose keywords include OpenStack. China is huge.
It's by far the hottest region of the world where OpenStack is being adopted. We've been over there, and there's a reason the next summit is over there: there was just so much interest in contributing. So, one thing I would note on the contributors: this is a list of the contributors to Grizzly, which is the last release, in order of the contributions they made in code.
As OpenStack becomes more important, vendors worry they're going to lose customers over it, and I think what's driving a lot of this interest in the project is just this desire to be part of an ecosystem that is open and isn't controlled by one vendor. I think that's one of the most powerful things about the project. It's actually a non-profit foundation which isn't controlled predominantly by one or two corporate interests. People will complain about the board and the way everything is set up.
We set it up that way for a reason. It was set up almost the way a government is set up. You don't want an unstable government; that's the last thing you want. Economies falter when there is inconsistency: whenever there's an election, you see the economy kind of second-guessing until things settle. What you want is stability in the governance of a project like OpenStack, and so it was set up so that a third of the influence was the big companies, a third of the influence was medium-sized companies,
A
That
third
of
the
influence
was
individual
contributors,
and
so
literally,
two
thirds
of
the
influence
from
a
policy
perspective
is
actually
the
smaller
companies
and
the
individuals
and
only
one-third
of
the
influences,
the
VidCon
phase,
and
that's
really
not
how
other
foundations,
province
or
citations
have
been
set
up.
Pastora
CLE
and
it's
all
the
tests,
but
we'll
see
if
it
works
out.
I
think
the
point
is.
It
is
working
right
now,
because
we
still
see
the
size
of
the
community
that
attends
conferences
double
number
of
contributors
increase.
So
this
is
just
super
exciting.
So the question is: why the hell is this happening? I mean, why, in three years, does OpenStack have a foundation with many millions of dollars of support from basically every computer company on Earth, while Linus Torvalds was still toiling away without a single corporate sponsor at the same point in his project's life? What's different about OpenStack? I think there are some underlying trends in the computer industry which are driving this. One of the people I've always really followed is an analyst, Mary Meeker, who was at Morgan Stanley.
She puts together this presentation; it's 160 slides, on SlideShare. I picked four or five slides out of that 160-slide presentation which I think are really relevant to OpenStack, and I'm going to share them with you. The first is the amount of information being created and shared on the internet, and the graph of it is just amazing growth. In more real terms: photos uploaded to the internet every day, 500 million. It's astonishing. 500 million pictures are uploaded by individuals every single day.
So why, you ask, does this matter for OpenStack? What does it mean for Swift and Cinder and all the projects in the ecosystem? The answer: some of the most interesting conversations I've had recently are with companies like Hitachi and Samsung and Seagate, which are literally questioning the abstractions that exist on top of hard disks, abstractions that were invented to serve primarily a Wintel PC architecture, or a Linux architecture, the desktop architecture.
You know, this idea that a hard disk has a size: something that simple actually has profound implications. On spinning media, when a failure occurs, it blows the thing to hell, because there's metal and dust shattering every little piece of that poor terabyte. But when a hard disk is no longer a bunch of spinning platters but a bunch of chips (people will find it humorous in about five years that disks ever spun), it can fail gracefully, all the way to the point where it doesn't make sense to power the controller anymore, long before you actually have to worry about it.
So I think what's actually happening here is that the internet companies are beginning to drive a significant portion of the demand for the components in computers and mobile devices, and things are changing. And it's not just pictures. Video: a hundred hours every minute uploaded to YouTube alone. There are these webcams that you can buy for 149 dollars at Best Buy.
GPS devices will be in every car, reporting data back to the car manufacturers and insurance companies. The number of API calls from Fitbit and from devices that people wear just to keep track of their health is growing exponentially every year. So it's coming from everywhere, and, this is also really interesting, it's literally coming from everywhere: it's coming from mobile devices. This is global internet traffic from 2008 to 2012. Over those four years we went from just over one percent of internet traffic being mobile
A
So
all
the
way
up
to
fifteen
percent
of
all
global
internet
traffic
coming
from
mobile
devices.
This
is
the
most
interesting
one,
so
this
is
PC
sales
since
1995,
through
2013,
on
where
greenness
pcs,
blues
laptops
and
the
Army's
tablets.
So
for
the
past
two
quarters
tablets
have
outsold
pcs,
and
that
happened
in
less
than
three
years,
so
the
pc
is
no
longer
the
dominant
architecture
and
a
bigger
business.
Why is this important? When you see enterprise servers, they have two-and-a-half or three-and-a-half inch disks and Intel x86 CPUs. It all came from PCs, right? Enterprise servers are the sliver on top of the volume that was in the PC market, the Wintel architecture or the Linux architecture, it doesn't matter. And so this is starting to shift: if I were to add mobile devices on top of this chart, there would be over a billion of them, and the line would go way out the top of the chart.
Those devices don't come out of nowhere. They come out of the opportunity created by the fact that we don't need such complex storage devices doing such exotic things to preserve the appearance of reliability and performance. We can just have reliability inherent in the media, or we can have different strategies to retain it. So what's happening here is an evolution.
A
We
should
be
Gellin,
but
the
pc
is
no
longer
the
relevant
driving
force
in
the
computer
industry.
We're
starting
to
see
all
these
mobile
devices
as
the
primary
things
that
were
interacting
with,
and
these
massive
data
centers
running
OpenStack,
so
late
powering
all
the
applications
on
the
web
and
on
these
mobile
devices,
and
when
there
are
billions
of
these
being
sold
every
year.
A
The
components
in
these
things
are
going
to
start
percolating
here,
right,
I
used
to
be
the
everything
here,
but
there
I
think
that's
the
interesting
thing
that's
happening
so,
as
you
start
to
think
about
OpenStack.
Think
about
abstractions.
That's
really!
All
of
these
decades.
It's
a
layer
of
abstractions
that
sits
on
top
of
infrastructure.
A
Here's
an
abstraction
of
a
object
storage
system
in
its
bunch
of
obstructions
that
are
being
created,
arguably,
will
be
done
with
OpenStack
when
we're
done
with
all
of
the
abstractions
that
are
needed
to
represent
the
way
computing
works,
and
then
arguably
will
never
be
done
because
they'll
be
new
abstractions
invented
like
object,
stores
and
queues,
and
things
like
that.
So
as
we
start
to
move
to
architectures
that
look
more
like
this,
the
question
will
be:
what
are
the
best
obstructions?
A
There
are
useful
as
we
create
the
applications
they're
going
to
power,
this
kind
of
stuff,
something
I
think
that
core
discussion
will
just
going
on.
So
now
we
get
to
the
cloud
I
tolds
the
cloud.
This
is
what
all
the
sales
people
say.
You
know
that
was
cost
of
big
data
and
scale
out.
I
was
my
stuff
yeah.
The
idea
is
I
love.
A
What
was
done
over
at
NIST
a
few
years
ago,
because
every
single
indecision-
this
is
the
most
resilient
one-and-a-half
pH
document
ever
created
you,
the
history
of
computing,
right,
there's
a
bunch
of
organizations
that
can
create
a
50
page
document.
It
is
not
as
good
as
this
one
and
a
half
page
document
on
this
miss
website.
This
defines
cloud
computing
as
a
service-oriented
computing
architecture
that
has
these
five
characteristics.
So
the
problem
with
this
is
this
is
not
how
computers
and
software
learn
today
at
all
right.
A
If
you
compare
the
way
enterprise
of
shooting
works
before
cloud,
it's
a
bunch
of
infrastructure
which
is
built
on
approval.
You
have
to
get
approval
to
get
the
resources
you
need,
and
even
with
the
workflows
and
the
stuff
that
exists
in
political
cloud
systems.
This
process
completely
breaks
software's
ability
to
provision
the
resources
it
dynamically
need
to
make
them
to
make
software
more
reliable
or
more
scalable.
It's
static,
it's
private!
It's
purchased
it's
generally
inaccessible.
The
whole
concept
of
security
plans
is
about
isolation.
It's about being able to say: this is the system, we're able to describe it, it's self-contained, and there are no external influences. In the early days at NASA, it was all about trying to map a culture, a way of thinking, and a policy framework built around computer systems that powered software like that onto a model that was trying to behave like the cloud. And what's really interesting is that these map directly back to the definition of cloud.
A
So
if
you
look
at
on
demand,
but
computing
used
to
be
on
approval,
elastic
used
to
be
static,
sure
used
to
be
private,
Peter
used
to
be
purchased,
so
we're
really
kind
of
in
the
situation
where
the
way
cloud
works
is
kind
of
diametrically
opposed
to
the
way
software
used
to
work,
and
so
it
actually
the
last
thing
in
the
world.
You
want
to
do
to
a
piece
of
software
that
relies
on
the
infrastructure
being
static.
It's
made
the
infrastructure
dynamic,
that's
the
best
way
to
break
it.
A
So
what
you
end
up
with
is
that's
neat.
Is
you
end
up
with
on
a
tension
that
exists
between
the
investment
that
you
make
it
in
applications
and
infrastructure,
and
so
I
should
have
put
more
logos
up
here's
to
pick
on
various
companies,
but
basically
what
you
have
is
you
have
a
curve
here
we're
on
the
vertical
axis?
You
have
an
infrastructure,
and
so,
let's
best
a
ton
of
money
in
your
application,
run
it
on
the
least
expensive
infrastructure
that
you
can
possibly
buy
right.
So
the
idea
here
is
this:
is
google
he's
history.
But most people do not operate like this. Most people can't put that kind of money into their software; they don't have the capacity to. So instead you invest in infrastructure that maintains the abstraction of reliability and performance, and that's what you pay for, and you need to pay for it.
A
And
unless
you
start
to
make
changes
to
that
software
slick
and
running
in
architecture
that
doesn't
care
about
those
those
reliability
in
this
performance
abstractions,
unless
you
can
say,
I
need
to
be
more
I
need
to
be
faster,
distribute
the
load
into
hundreds
or
thousands
of
different
instances
running
in
different
geographic
availability
zones.
If
your
application
has
no
ability
to
express
that,
it's
not
going
to
work,
and
so
this
isn't
the
line,
though
it
doesn't
look
like
you
like
that,
that
is
supposed
to
be
mine.
A
It's
not
a
line,
it's
actually
a
point
cloud,
so
in
every
business,
even
a
start-up,
but
less
so
in
a
start-up
on
certainly
a
mature
business.
What
you
have
is
you
have
applications
here,
but
it's
actually
not
is.
This
would
be
wonderful
if
this
is
actually
how
it
work.
It
doesn't
work
that
way.
It
actually
looks
more
like
this,
where
you
have
a
distribution
of
applications
running
under
expensive
infrastructure.
You
might
have
one
or
2
applications
up
here
that
you
run
on
some
sort
of
high-performance
grid
system
that
you
built.
A
You
know
just
for
that
application,
but
you
spent
you
spent
a
lot
of
money
to
run
an
application.
I'll
have
an
expensive
infrastructure,
and
so
really
what
you
want
to
try
to
do
is
you
want
to
try
to
move
as
many
of
the
applications
as
you
can
look
at
this
sweet
spot
where
you're
actually
able
to
run
on
cloud
systems,
so
you
don't
have
to
buy
these
really
ridiculously
over
built
systems
when
we
look
at
it,
I
mean
are
not
picking
on
these
companies
reinventing
themselves
to
speed,
but
the
MC
appliance.
A
If
you
tear
one
of
those
things
apart,
it's
one
of
the
most
exotic
pieces
of
technology.
You'll
ever
see.
You
know
when
you
write
data,
it
says
top
written.
It's
not
written
it's
in
some
sort
of
memory
that
has
some
super
caps
that
back
it
up,
but
then
back
it
up
to
another
level
of
memory
to
back
it
up
to
flash,
then
back
it
up
to
the
disk
a
couple
of
hundred
little
seconds
later.
It
gets
graduate
and
it
optimizes
it
along
the
way.
A
If
the
whole
thing
blows
up,
it's
all
redundant
in
there
and
there's
power.
There's
batteries
there's
queen,
that's
what
you're
paying
for,
and
you
know
what,
if
your
application
needs
to
do
a
million
I
ops
and
it
needs
to
do
it
reliably
and
there's
no
way
you
can
rebuild
your
application
to
run
any
other
way.
A
You
need
that
and
that's
why
it
exists
so
on
what
we
need
to
do
is
we
need
to
really
start
talking
about
software
and
from
an
OpenStack
perspective,
as
we
talk
about
what
score
and
what's
not
need
to
think
about
the
things
that
enable
software
to
scale
out
the
services.
It's
why
I
mazon
started
with
the
cueing
services
start
at
82
and
start
with
s3
sort
of
the
cueing
service,
because
that
was
the
most
useful
thing:
every
application
that
Amazon's
Billy
needed
our
department.
If you look at Google's data centers, they're very similarly structured. They do cluster computing, which makes you ask: do you have 150 petabytes of memory? That's five hundred thousand machines with a few hundred gigs each. And when you think about every single dot on that chart, what you know is that the application behind it was written, from a reliability perspective, to survive failure.
A
I've,
invest
a
ton
of
money
in
to
figure
out
how
to
re-argue,
so
they
can
run
like
this,
but
you
have
to
literally
take
every
application
in
your
luck.
If
you're
working
a
new
application,
you
can
write
it
to
scale
out
for
the
very
beginning.
The
most
applications
don't
and
must
have
no
business
running
in
a
crowd
yeah.
A
When
you
start
thinking
about
whether
you
do
some
OpenStack,
maybe
looking
I'm,
you
need
to
think
about
it,
dirty
if
you've
got
some
system
which
generates
hundreds
of
tens
or
hundreds
of
terabytes
of
data
or
more
on
you
don't
want
to
move
that
to
the
public
cloud,
no
matter
how
fast
your
connection
is.
I
give
you
a
dedicated,
10,
cable
connection
to
Amazon,
which
will
cost
you
as
much
as
a
private
cloud.
Does.
OpenStack is a complicated thing, with 600 people contributing code to it every six months and this amorphous definition of what's core and what's not, and it's going to get more complicated. So the challenge is: if you want to make this thing work, what do you have to invest in?
I think it's wonderful that if you're a service provider and you want to customize something, you have all the source code and you can make the implementation work for you. But that's not for everybody. There are a lot of companies that want to think about it like Amazon: they just want to use it. And so the product that we created, which is beautiful, because our company can and does contribute a considerable amount of code to OpenStack, is literally just a little appliance.
A
You
plug
Arrakis
servers
into
you,
turn
the
power
switch
on
and
it
comes
up
as
a
cloud
I'm.
You
can't
customize
it.
You
can't
screw
it.
The
idea.
Is
it
just
works?
It's
just
just
amazon
just
works.
You
can't
screw
at
amazon
eager
right,
and
so
the
thing
about
OpenStack
is
that
if
you
buy
one
of
these,
you
can
still
build
an
openstack
cloud.
You
can
still
submit
code
to
the
projects.
A
You
will
support
it
six
months
later
right
and
you
can
still
also
build
a
massive
infrastructure
based
on
OpenStack
at
a
lower
price
point
point,
and
we
can
support
to
the
point,
is,
is
that's:
that's
the
whole
idea
of
an
open
ecosystem
powering
it
and
everybody
went
by
these
things,
but
a
bunch
of
people
with
our
that's
the
basic
idea.
If
you
want
to
learn
more
about
what
we're
doing,
you
can
check
out
nebula
com,
we
have
a
really
cool
video.
We created it to show you the whole system. Our shareholder Patrick Stewart was involved, and it's tacky as hell, no doubt. I'm going to cede the rest of the time to Vish Ishaya, who was PTL of Nova for many years and has most recently been working at Nebula on our product. He's going to tell you how to optimize and tune OpenStack in various ways. Vish, thank you.
Thanks, Chris. Thank you. I love it, I love hearing Chris speak, especially about the history of NASA, having been there; he makes it sound a lot more glamorous today. I moved out to go work for NASA, which was just like a dream I had as a kid, and then I found out that working for the government has its own set of challenges. So I'm not going to talk much about the history.
My job was sort of like being a professional cat herder. I wanted time to actually work on the code, but I was happy to sort of shepherd it, or cat-herd it, up from being a team of six people to about, I think, 90 developers contributing every month; over a six-month period it's about 100 to 150.
I've also worked with various companies and deployed a bunch of different private clouds, including working at Nebula, helping our customers not have to design it at all: they just plug it in and make it work. I also helped spearhead the project to create DevStack. I've been very involved in all of OpenStack since the beginning, which basically means I wear a lot of hats. Okay, so I want to briefly talk about the problems
that make me have to give a talk like this. So what I'm going to do is talk about a perfect world. In a perfect world, you fire up your computer, you type apt-get or yum install openstack, you wait for a few minutes, and then it says: OpenStack installed successfully. That would be fantastic for many people.
There is something like that: DevStack, which was created to help developers get started. It's great for getting one deployment of the code working, but it's not something you'd ever use in production; it's more of a try-it-out tool.
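For reference, spinning up that developer-oriented install really is nearly that simple. This is a hedged sketch of the usual DevStack workflow from this era (repository URL, script name, and localrc variables as commonly documented at the time; verify against the DevStack docs for your release before relying on them):

```shell
# Fetch DevStack and run it on a disposable VM -- stack.sh reconfigures the host
git clone https://github.com/openstack-dev/devstack.git
cd devstack

# Optional: a localrc file pins passwords and selects which services to enable
cat > localrc <<EOF
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
EOF

./stack.sh   # installs and starts a single-node OpenStack for development
```

As the talk stresses, this is for trying things out, not production: stack.sh installs packages and rewrites configuration on the machine it runs on, so keep it off anything you care about.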
The problem here is that OpenStack is configurable. Chris talked about all the different companies working on OpenStack and how many people are involved, and it has all these different use cases it needs to serve, from very tiny scale to extremely large scale.
The biggest deployment I know of that has been announced is Bluehost, which has about 20,000 hosts running OpenStack, which is pretty incredible considering they're doing it all in one single install; they didn't partition it. OpenStack has to support many different backends, many different hypervisors, many different storage devices, and it has a whole bunch of different components. We started with just Nova and Swift, and now we have, I think, seven integrated projects with two more in incubation. This is the end of the nova.conf sample file, showing the configuration options just in Nova.
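To make the scale of that sample file concrete: a working deployment typically sets only a small subset of those options and leaves the rest at their defaults. A minimal, purely illustrative nova.conf fragment might look like this (Grizzly-era option names and placeholder hostnames; these are assumptions for illustration, so consult the sample file shipped with your release):

```ini
[DEFAULT]
# Messaging and database -- "controller" is a placeholder hostname
rabbit_host = controller
sql_connection = mysql://nova:secret@controller/nova

# The hypervisor and networking choices discussed in this talk
compute_driver = libvirt.LibvirtDriver
network_manager = nova.network.manager.FlatDHCPManager

# Everything not set here falls back to one of the thousands of defaults
```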
So my goal here is just to give you the decision points, how to make decisions about which options to pick, then a little bit of tweaking for how you can configure it to perform better or do some interesting things, and then finally to talk very briefly about security. This is where I get to the wall-of-text slides. I apologize, I don't have all the fancy pictures Chris had, but hopefully there's enough information in here that you will at least be able to come to some conclusions.
Nova network didn't support a lot of programmability and software-defined networking, and so another project was created, originally called Quantum. It has now been renamed to Neutron due to a trademark conflict, and in the last couple of releases it has finally gotten to the point where it provides rich enough features to be suitable for production use. So probably the first decision you have to make is: okay,
do I use the kind of older Nova network configurations, or do I switch over to one of the new Neutron configurations? It's a little bit tricky. The trade-offs here: Neutron is obviously the way of the future, so you're future-proofing yourself by switching over to it. Conversely, a lot of people already have installs using Nova network; it's a little bit more tested, and currently it has better scalability, performance, and high-availability characteristics.
The default configuration of Neutron means that you're going to have single points of failure, especially the layer 3 gateway components, that you can avoid in certain configurations of Nova network. Those are going away, and probably in the next release they will be gone completely.
So if your real deployment is still a ways off, Neutron is probably the way to go. The only time I'd recommend against it is if you're deploying something now and you have really, really high performance and HA requirements; then you might want to stick with Nova network
for the moment. Which leads me to my suggestion: switch to using one of the Neutron plugins as soon as possible. OVS is the default configuration, and it's great. There are vendor plugins too: an alternative route for a larger-scale deployment would be to use Neutron but pay one of the vendors that provides a plugin. For example, Nicira has their own plugin, and Midokura has their own plugin.
If the default configuration comes up short, talk to some of the people working on things like that, because it's sort of a bummer to feel like you have to trade really good performance for all of the cool software-defined networking features. Is everybody here familiar with what you can do with Neutron? Raise your hand if you know what it does. Okay, so some people don't. I'm just going to briefly tell you the features that Neutron adds over traditional OpenStack usage.
Neutron allows you, as a particular user, to say: give me a local network that's isolated just to me, and plug my virtual machines into that network. Then you can create another network, and then you can create a software router that will route between them. So you can virtually mirror the good security practices that people use in the data center.
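The per-tenant network, second network, and software router described here map to a handful of client calls. A hedged sketch follows (in the Grizzly release the client binary was still called `quantum` and was renamed `neutron` later; the names like `web-net` and the IDs in angle brackets are placeholders, not values from the talk):

```shell
neutron net-create web-net                        # tenant-private network
neutron subnet-create web-net 10.0.1.0/24
neutron net-create db-net                         # a second isolated network
neutron subnet-create db-net 10.0.2.0/24
neutron router-create tenant-router               # the software router
neutron router-interface-add tenant-router <web-subnet-id>
neutron router-interface-add tenant-router <db-subnet-id>
# Boot an instance attached to one of the private networks
nova boot --image <image-id> --flavor m1.small --nic net-id=<web-net-id> web1
```

The point of the design is that each tenant gets its own L2 segments and L3 routing without touching physical switches, which is what makes the data-center-style segmentation "virtual."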
So the next big decision you have to make is what hypervisor you're going to use. There are a lot of options here. The most tested one is KVM. The largest public deployment right now, which is Rackspace's, is using Xen as its hypervisor. And then, of course, Microsoft is backing Hyper-V and VMware is backing ESX, and these two especially have really come along in the last release; they've improved to almost feature parity with the other two.
C
It's going to be, if not feature parity, extremely close, so I think those are actually going to be real choices in terms of production deployment. Today I wouldn't recommend either of them unless you have very specific needs. If you're trying to save money on licensing costs and you really want to run a bunch of Windows machines, then Hyper-V might be a good choice there. If you have existing VMware infrastructure you're trying to converge somehow, then the ESX driver will probably be a good choice for that in the future.
C
It's just that there's more information up on the net for when you get stuck — the more people that know it. So KVM is my default choice here, but there are valid reasons for picking the other ones. Okay, this is a big one, and this is kind of a controversial one. Swift has been around for a long time, and over the last six months or so there's been a lot of discussion about this other project called Ceph, which is also an open source project.
C
It has some interesting features: it's a converged storage system, so you can actually use the same storage system as a back end for Swift and for Cinder — you can do block storage and object storage using the same system. Another option, which I didn't put up here, is the hardware vendor devices—
C
—there are vendors doing Cinder drivers for their hardware devices, so if you already have a bunch of some kind of hardware device in your data center that you'd like to use, that's another possibility here. I'm probably going to get in a lot of trouble on Twitter for this one, but...
C
The default LVM driver is not a distributed system, so unless you're somehow using RAID or mirroring behind the LVM system, you have a single point of data failure, which is not necessarily good for people that are expecting volumes to be highly available and redundant. So the path of least resistance here to getting a commodity system up and running is to use Ceph as well. Now, I don't think that's necessarily the only option here.
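For reference, pointing Cinder at a Ceph cluster instead of the default LVM driver is a small configuration change. This is a sketch only, assuming a Grizzly-era Cinder and an already-running Ceph cluster; the pool name, cephx user, and secret placeholder are illustrative:

```ini
# cinder.conf fragment (illustrative values — verify option names for your release)
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes                       # RADOS pool backing the volumes
rbd_user = cinder                        # cephx user Cinder authenticates as
rbd_secret_uuid = <libvirt-secret-uuid>  # secret registered with libvirt on compute nodes
```

With this in place, volumes live in RADOS and are replicated by Ceph, removing the single host as a point of data failure.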
C
So if you make the decision that you want to do object storage in Swift, probably the best solution for redundant volume storage would be to actually use a physical piece of hardware. This, I think, applies to the other two options as well. There's NetApp, there's also EMC, I think IBM has a hardware back end, and I think there's a Dell one as well. So there's a bunch of choices here, and what we've seen a lot of customers say is, well—
C
I already have a huge investment with one of these companies and I'd like to keep using that. If you already have that, then Ceph becomes less of a home run, and it makes more sense maybe to just use your existing hardware for the Cinder back end and then throw Swift on for the object storage. Let me actually go back.
C
Okay, so for a small-scale deployment — here are all the projects, in case you're not familiar with them. The current core projects, or the ones that will be core in the H release anyway: Compute, Object Storage, Image Service, Identity, Dashboard, Networking, Block Storage, Metering, Orchestration. For a small-scale deployment here are my suggestions, which are basically: don't worry about deploying Object Storage, use Ceph instead, and skip Metering — not because it's a horrible project.
C
Again, it's just that what it provides out of the box is fairly limited and not particularly valuable to a lot of enterprises unless you're going to invest in a lot of custom development integrating it with some sort of billing system. The main reason people want it is to do metering for integration with their existing custom billing, so there's a lot of work on top of just installing it that you have to do. Unless you have a really strong need for some sort of integration like that, it's probably not really worth installing at the moment.
C
This is looking at Grizzly — all this stuff works. Come the H release, we'll see where it's at in two or three months; I'm sure a lot of these things will have improved and changed. On large scale, the only suggestions I'm making here are to switch out Ceph and go ahead and use Swift. On a large-scale deployment Ceph probably has scaling problems once you get to very large, unless you have all the expertise that DreamHost has to actually manage the infrastructure.
C
Once you're past a certain point — Swift has got great scaling properties, it's running in very large deployments, so no worries there. I also took Neutron out of the networking picture currently, for the things that I said earlier, which is that it's going to have performance problems at scale. So if you really need SDN in your large-scale deployment, you're probably going to have to get a vendor to provide one of their more performant and scalable solutions. So that's another option there. And once again, these are just off-the-cuff suggestions.
C
Nova as a whole has 609 configuration options, or it will by the time the next release rolls around. You don't need to mess with many, but there are a few where the default options maybe aren't the best. They might be the simplest configuration, but they might not be the most performant configuration, or the most secure configuration, et cetera. So these are just a few suggestions that I have for small tweaks you can do in your Nova install. I was one of the main authors of nova-network.
C
There's a few things here; I'm just going to briefly describe what each of these tweaks does and why it's useful. force_dhcp_release basically means that as soon as you terminate a VM, you get that IP address back in the pool. Without it, there's a delay of sometimes up to ten minutes before that address comes back. So if you have a kind of small pool of addresses — which some people do, in the dearth of IPv4 addresses that we have today — it's a useful way to get your IPs back a little bit quicker.
C
It requires that an extra binary is installed on the system, which, when we first created Nova, was not available in most distributions. It is now, so it's not as difficult to get working as it once was. The iptables bulk apply is a performance tweak: if you're running a lot of security group rules, it applies them all at once instead of one at a time. There are problems at scale, when you're running large deployments with a lot of instances and a lot of security rules, where the system can basically spend all its time—
C
—applying security group rules over and over, and it slows down the performance of the machine; it will stop responding to messages. multi_host=True has been around for a couple of releases now; it distributes the load for nova-network across all the machines in the system, one on each compute node. You basically run nova-network on every compute node instead of in a central location, which makes it HA. share_dhcp_address basically means that you're not burning an extra IP for each host.
C
This is probably the one we used most at NASA: they have a specific network and switching architecture built around sending all the data through a hardware gateway, and so it was just adding extra overhead to route all the traffic through the local host and create a gateway on each nova-network host. So essentially what you can do is write a dnsmasq configuration file that will pass the gateway to the guest instances directly, so that they will just use your existing gateway instead of having to put a gateway on each host.
C
This is actually required along with share_dhcp_address — if you try to do just one of those, there are going to be a lot of configuration problems, because the DHCP address is also used for the gateway, and if you try to use the same IP on multiple hosts, routes back won't work. So those are some tweaks. I'm putting the slides up on SlideShare, so there's no need to write it all down unless you find that interesting, but hopefully that gives you some explanation of why they're there. Now on to compute tweaks.
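The nova-network tweaks above map to a handful of flags. A sketch using Grizzly-era flag names — verify them against the configuration reference for your release; the gateway address is an example:

```ini
# nova.conf fragment
[DEFAULT]
force_dhcp_release = True    # return IPs to the pool on terminate
                             # (needs the dhcp_release binary from dnsmasq-utils)
multi_host = True            # run nova-network on every compute node (HA)
share_dhcp_address = True    # don't burn an extra IP on each host for dnsmasq
dnsmasq_config_file = /etc/nova/dnsmasq-nova.conf
```

The dnsmasq file referenced above is where the existing hardware gateway is handed to guests directly, instead of creating a gateway on each host:

```ini
# /etc/nova/dnsmasq-nova.conf — example gateway address
dhcp-option=option:router,10.0.0.1
```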
C
We stick the image locally on the compute node and then we do a copy-on-write clone, which basically allows us to boot the instance very quickly, because we don't have to copy the original source image into the directory where the instance is launched from, and it also potentially allows us to save space on the compute node, because it doesn't have to keep a bunch of copies of the same image.
C
First, it stops using copy-on-write for images, which gives you about a fifteen percent performance increase on disk reads and writes when you're reading and writing from inside the guest, but it still allows people to compress the images when they upload them. They can still use qcow2 when they upload to Glance, so you're still getting the space savings of a compressed version of the image instead of a full raw image, but you're not using the copy-on-write that generates the I/O overhead.
C
So you end up getting quite a big performance increase out of switching this, at the cost of some disk space and some time on boot, because you have to copy. There's also a bit more I/O that happens on boot, so during the boot process I/O can slow down slightly for other VMs, but I think it's a reasonable trade-off to make. These other two are basically... in the default configuration, if an instance—
C
—if the host machine goes down and comes back up, the state of the guest is not resumed. It basically says: okay, there was a failure or something happened, the machine rebooted, I don't know what you want to do, so I'll wait for an administrator to come in and say, okay, start these instances again. That's probably not the best default configuration. In this case it would be nice if an instance that was running when the machine comes back up—
C
—is just put back in the running state; if it was terminated, it stays terminated, et cetera. So these two flags here will essentially make it do its best to get the VMs back into the state they were in before the machine rebooted.
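Those image and reboot behaviors are controlled by a couple of flags. Again, the flag names here are from Grizzly-era Nova and worth double-checking against your release's configuration reference:

```ini
# nova.conf fragment
[DEFAULT]
use_cow_images = False                   # full image copy instead of a qcow2 CoW
                                         # clone: ~15% faster guest disk I/O, at the
                                         # cost of disk space and boot time
resume_guests_state_on_host_boot = True  # put guests that were running before a
                                         # host reboot back into the running state
```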
C
Here's another interesting network stack performance tweak. This is assuming that your infrastructure has ten-gig interfaces. The default configuration of Linux — of most operating systems — is not tweaked to do well with ten-gigabit interfaces. There are a few things you do: turn on jumbo frames, increase the transmit queue length, and for the guests that are running you need equivalent settings, plus a bunch of TCP settings—
C
—to tweak. There's a great blog post by a guy from our labs on doing a bunch of benchmarking and tweaking that talks about the specifics, which I didn't want to put up here. But you can essentially get... oh, there's one thing I forgot to put on here, which is vhost-net, which is the current kernel—
C
—module that does the host side of virtio I/O. Basically, doing those tweaks and turning on vhost-net, with one node you can get about seven gigabits across the wire, and you can saturate a ten-gig link with multiple VMs. Basically the problem there is switching on the CPU. Without those tweaks, the default configuration gets something like 1.5 to 2 gigabits, so it's a really significant increase. Tweak your networking stack to get the best performance out of those ten-gig links, and if you're running on one-gig links—
C
—some of these tweaks are also still valuable. I remember back at NASA, with the default configuration we were sitting on one-gig links only getting about five to six hundred megabits; with tweaking we got it up to basically line speed on the one-gig links. So it's definitely worth spending some time tweaking your network stack on hosts and guests.
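A host-side sketch of the tuning steps mentioned — jumbo frames, a deeper transmit queue, larger TCP buffers, and the vhost-net module. The interface name and buffer sizes are illustrative rather than benchmarked values, and jumbo frames require end-to-end switch support:

```shell
# Interface tuning (example NIC name)
ip link set dev eth0 mtu 9000          # jumbo frames
ip link set dev eth0 txqueuelen 10000  # deeper transmit queue

# Larger TCP buffers for high-bandwidth paths (illustrative sizes)
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

# In-kernel host side of virtio networking
modprobe vhost_net
```

Equivalent MTU and TCP settings are needed inside the guests to see the full benefit.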
C
All of your normal IT security practice still applies. You've got to be careful with your access controls, you've got to lock down your services, you want to put firewalls on them — all the normal things you do to lock down a Linux box you can do here, including controlling access to machines and keeping the software updated, especially when kernel vulnerabilities get released. This is actually one of the things that becomes hard as you're managing an OpenStack infrastructure.
C
You've got to realize that there are security patches that come out, and figure out how you're going to deploy those patches in a consistent way. Then, a couple of really common ones that people forget: a lot of services start up listening on all interfaces, and really what you want is to have your management traffic and your guest traffic separated. You don't want your management services to listen on 0.0.0.0—
C
You want to listen on the management interface only, and you want to separate the management and guest traffic so that there's isolation between the two. I had one more slide, which apparently is not there anymore — I must have sent an old version. The last slide was supposed to cover the specific OpenStack tweaks that you can do. There are a few things that are important in OpenStack itself, especially for Nova.
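For Nova specifically, the "don't listen on 0.0.0.0" advice comes down to the listen options. A sketch with an example management-network address; option names are from Grizzly-era Nova:

```ini
# nova.conf fragment — bind the APIs to the management interface only
[DEFAULT]
osapi_compute_listen = 192.168.0.10   # example management-network address
metadata_listen = 192.168.0.10
```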
C
One thing you can do is disable the extensions that you aren't using. There's a whole bunch of API extensions that ship by default with Nova, and a lot of people don't use those API extensions. It's prudent to just turn off the ones you're not using, just in case there's ever a vulnerability or something in one of the extensions — your attack surface is smaller.
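In Grizzly-era Nova the loaded extensions were controlled with the osapi_compute_extension flags. A sketch — both the loader name and the extension list below should be verified against your release's configuration reference, and the selected extensions are examples:

```ini
# nova.conf fragment — load only a named subset of API extensions
[DEFAULT]
osapi_compute_extension = nova.api.openstack.compute.contrib.select_extensions
osapi_compute_ext_list = Keypairs, Quotas, SecurityGroups
```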
C
Also,
you
should
configure
the
policy
file
to
only
by
default,
there's
a
lot
of
actions
that
are
only
a
lot
of
administrators
right.
There
may
be
some
work
you
want
to
take
out
of
that,
or
they
may
be
more,
that
you
want
to
expose
to
users,
but
some
tweaking
of
the
policy
JSON
file
to
describe
what
actions
should
be
allowed
to
be
done
by
which
roles
is
valuable.
I
believe
I
had
one
more
on
that.
Oh
wait,
it
is
there
it's
just
after
questions.
Okay,
so
only
enable
a
PID
stretches.
C
Scheduler
filters,
the
interviewers
need
customized
policy
for
the
mystery
of
actions.
Oh
last,
two,
which
are
kind
of
interesting,
it's
I
know
a
lot
of
people
are
not
doing
this
because
most
of
the
OpenStack
clients
didn't
support.
Some
configurated
cut
some
configurations
of
HTTPS,
so
if
at
all
possible,
https
in
front
of
our
website,
services
put
a
chip,
rock
city
or
internet,
something
in
front
of
them
that
terminates
the
ssl
connection
and
internally
they
can
talk
over
HTTP,
but
don't
invite
snipping
of
the
open
site
traffic
sure
it's
probably
its
API
traffic.
C
So
maybe
there's
no
sensitive
information
in
there,
but
I
would
never
trust.
My
users
do
not
put
sensitive
information
in
a
web
request.
You
know
maybe
they'll
tag
it
with
something.
That's
password,
so
just
put
a
sheep.
Your
estimation
of
chocolate
is
not
suitable,
and
one
thing
you
should
consider
is
disabling
instance.
Migration
is
a
great
feature.
A
lot
of
people
want
it.
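The SSL-termination pattern described above can be sketched with any reverse proxy; nginx is just one option. The certificate paths, hostname, and upstream port here are assumptions:

```nginx
server {
    listen 443 ssl;
    server_name cloud.example.com;
    ssl_certificate     /etc/ssl/certs/cloud.example.com.pem;
    ssl_certificate_key /etc/ssl/private/cloud.example.com.key;

    location / {
        # nova-api speaking plain HTTP on an internal interface
        proxy_pass http://127.0.0.1:8774;
    }
}
```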
C
So
it's
a
tough
ask,
but
currently
the
implementation,
Vincent's
migration
requires
that
the
compute
posts
have
some
sort
of
way
of
accessing
HR,
usually
ssh-keys
your
ssh
keys,
which
basically
means
if
someone
breaks
into
a
computer
posted
figured
or
they
break
out
a
library.
They
can
very
trivially
get
around
to
all
the
other
computers
in
the
system.
Now,
if
you're
all
in
computing
those
there
are
other
ways
to
break
the
system.
So
it's
not
a
huge
problem,
but
it
is
something
to
consider
that
is
an
extra
revenue
at
the
time
to
make
it
available.
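One way to act on that advice is to deny the migration actions in policy.json so the API can never trigger them. The `!` never-matches rule exists in later policy implementations — check what your release supports — and disabling the API should be paired with not distributing shared SSH keys between compute hosts in the first place:

```json
{
    "compute_extension:admin_actions:migrate": "!",
    "compute_extension:admin_actions:migrateLive": "!"
}
```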
C
With
the
current
configuration
log
plan,
vibration,
okay,
I'm
ask
Chris
to
come
back
up
here,
because
we're
going
to
team
new
questions
on
give
questions
for
either
of
us
either
about
the
more
the
future
of
cloud
which
I
think
Christmas
talk
was
sort
of
about
or
tweets
or
even
the
whole
range
or
the
past
afya
ya.
Never
the
past.
The
future
askim
featuring
your
presence.
A
For those of us who are blessed with people that don't have a locked-down, hardened hypervisor — any thoughts on providing a more, shall we say, secure out-of-the-box experience, one that provides some of these hardening settings and eases your way to getting there, as opposed to having to apply a variety of configurations per installation?
B
Yes.
C
Trivially, yes — that's true. Yeah, I don't know if there's a good answer to that. I mean, we have some documentation on what's needed to get things installed. There are a lot of vendors out there offering distributions now, which, if you don't have that expertise yourself, might be a good way to go. Or you could install something smaller, like one of the packaged distributions.
A
For
you
you'll
see,
we
have
like
eight
people
on
our
security
team
yep
and
most
of
what
they
do.
Isn't
it
opens
that
there's
a
lot
of
kernel
level
work,
it's
a
running
selinux
on
the
actual
course
themselves.
You
know
securing
every
kernel
running
every
cpu
on
doing
the
https
encryption
of
all
the
traffic
between
the
office
is
creating
a
very
strategically
segmented
set
of
networks.
On
that
really
isolate
the
customer
traffic
from
management
traffic
and
there's
all
sorts
of
revenge
insecurities
your
work
with
the
Securities.
Never
done
right.
A
Ce
o--'s
have
to
keep
up
with
every
piece
of
technology
you
use
is
always
being
patched
with
security
holes.
You
have
to
kind
of
watch
that
stuff.
You
have
to
continue
to
apply
the
patches,
all
employees,
consistent
music,
it's
a
never-ending
tireless
job
and
you're
going
to
get
hat
in.
This
is
a
question
of
what
happens.
You
need
to
kind
of
think
about
the
chess
game
and
how
difficult
you
want
to
make
it
for
the
attacker
to
you
know,
accomplish
certain
things
until
your
it's
a
playbook,
it's
you
know
so
when
they
attack
this.
A
To address this — there's another option here. In certain large enterprises, though, the challenge becomes: you are the target. So it would be a very good option when, in the out-of-the-box configuration, you're having to go in and basically comment things out yourself — there's no "here's what comes from OpenStack, here are the sorts of things to go do."
C
No, but I think some of the hardening you would do for some components is not necessarily obvious, which is where I think the security guide is great. That book is where a lot of security experts got together to lay out: here are the things you need to do to harden OpenStack — because sometimes it's just not written down anywhere.
A
Different
than
say,
ESX,
where
the
hosts
are
actually
real
nodes
with
real
systems
on
it,
real
operating
systems,
and
while
that
potentially
means
is,
you
have
to
learn
how
harden
them.
It
also
means
that
you
can
put
tools
and
detection.
You
can
actually
touch
that
right.
Where
is
something
like
an
ESX
host?
You
don't
need
this
place
just
completely
black
that
a
lot
you
know
it's
locked
down,
but
this.
C
Is
sort
of
a
trade-off
because
you
there's
a
much
smaller
attack,
surface
area
when
you
don't
have
much
of
extra
man
of
application,
20
knockouts
to
solar,
it
gives
you
it
gives
you
better
tooling,
for
detecting
and
responding
to
intrusions,
but
it
might
potentially
open
up
a
lot
more
intrusions
as
well.
So
it's
definitely
a
hypervisor
decision.
There
is
it's.
A
trade-off
of
Zen
is
convinced
that
there
are
models
more
secure
and
kingdoms
them
all
more
secure.
So.
A
—turned into running systems there. We all have conversations about: what's the best way to run some sort of web application stack, or what's the best way to run Postgres or MySQL or whatever, in an OpenStack cloud? I think we'll see a huge advance in user adoption of OpenStack when we finally reach that.
A
Yeah
there
is
this
going
to
pick
those
up.
That's
my
my
personal
yeah
believe
just
just
looking
watching
our
customers
struggle.
Okay,
you
know.
If
you
spent
a
year
and
a
half
building
a
cloud
on,
that's
it.
People
have
been
busy
trying
to
build
clouds,
but
we
just
give
them
what
they
just
kind
of
stare
at
that
flashing,
cursor
and
they're
like
so.
It's
kind
of
you
know,
but
the
suit
is
more
productive.
We
can
make
developers
and
people
building.
A
Good
yeah
sorry
never
resist
using
chatham
yep,
I'm
about
to
security
for
a
minute
out
of
the
other
time
patching.
So
I
guess
everybody's
probably
kicked
their
own
operating
system.
Street
looks
back
and
let's
say
you
get
risley.
Is
there
our
own
lilies
found
under
see
that
flowing
down
to
the
exhibition?
Measuring
there's?
No
second
one
or
melody.
C
Management
team
that
follows
the
same
way
that
most
large
source
projects
do
motor
abilities
where
you,
if
you're
on
the
vulnerability,
announce
list.
So
all
the
security
professionals,
various
companies
are
there.
Then
you
get
notification
with
the
vulnerability.
This
really
is
fixed,
so
it's
kept
private
until
the
fix
is
out.
We
released
the
fix
with
the
announcement
and
then
they
pull
those
in
two
branches
and
they
read
the
different
vendors
world.
So
there's
kind
of
a
tiered
process,
because
it's
the
same
way
that,
for
example,
Linux
what
our
ability
to
Colonel
work
center.
C
There's
a
stable
release,
branch
for
each
previous
release
and
the
used
in
release
team
manages
that
release
and
that
won't
learn,
bug,
fixes
and
blood
abilities
of
that
release.
And
then
there
is
debug
there's
another
team
by
camera.
What
it's
called
does
the
previous
releases
and
I
generally
backboard
on
or
current
policy.
C
No
Cuban,
my
releases
are
ever
so
that's
about
basically
like
at
least
a
year
year
and
a
half
of
of
back
burning
vulnerabilities
that
will
probably
get
longer
as
OpenStack
becomes
more
popular
currently
like
there's
not
really
late,
running
the
release
of
your
half
ago
anymore,
so,
which
is
what
the
cactus
or
easily
so
there's
not
really
any
point
putting
vulnerabilities
all
the
way
back
to
cactus,
but
there
probably
will
be
you
know
more
people
staying
on
folsom
for
a
while,
etc.
So
we
might
keep
that
one
to
place
or
to
your
response.
A
All of that effort to channel all the security information that's found in the most common OpenStack implementation blueprints up into a common point, so that you can actually see it — is that what it ought to be, constantly looking at what's happening with this kernel, or what's happening with this code base? The business isn't doing that, because every implementation is different; you almost have to have a kind of web tool where you say, well, for my OpenStack implementation—
C
The
maintainer
of
the
enterprise
linux
kernel
and
he
was
talking
about
security,
lazy,
basically
so
lettuce
Torvalds,
as
the
current
did
all
that
tree.
He
maintains
the
tree
for
the
release
that
was
just
out
and
like
there's.
They
keep
like
one
maintenance
release,
two
versions
back,
and
so
he
was
talking
about
applying
patches
to
his
creates
life.
Well,
you
know,
ideally,
all
the
security
patches
get
an
announcement.
They
go
through
this
process
and
everything.
But
there's
a
lot
of
bug.
Fixes
then
like.
When
you
look
at
the
mud
face
you
go
well.
C
—that probably could have been exploited somehow, but it wasn't: no one actually found it. It was just a kind of update that someone did — an off-by-one error here — and it didn't go through anybody thinking, well, this needs to go out as a security release. So basically he's constantly merging patches that fix potential security bugs that don't even get announced. I think we need a much better way of dealing with updates in all parts of the system.
A
Since I didn't get a chance — thanks to the folks from Tech Ranch for the venue. I love this facility; it's a great, great place, especially if you're an entrepreneur. I come here, hang out, and go work sometimes. This is the Austin Tech Ranch, and they've got this really nice facility that we get as part of the meetup. So if you get a chance, check them out — they have Campfires, which is basically social networking with local entrepreneurs and people interested in startups and innovation, on a monthly basis.
A
So
you
come
by
the
F
Everage
a--'s
and
it's
a
great.
It's
really
actually
really
good
place
to
get
work.
I
need
I,
want
to
thank
Rob
for
repelling
organizing
this
and,
if
you
guys
have
questions
which
have
three
people
in
Austin:
Bob
black
Eric
rain,
I,
Eric,
brunker,
royal
icing
and
bathroom
there.
So
if
you
want
to
do
have
questions
about
OpenStack
they've
got
access
to
some
of
the
folks.
We've
got
in
Seattle
and
you're,
not
mountain
view.
California,
so
feel
free
to
also
attack
them
too.