From YouTube: Webinar - Intro to Ceph and OpenStack
Description
Thinking about deploying OpenStack? Curious about Ceph? Check out our webinar co-hosted with Dell for a full introduction to get you started.
A
Welcome to today's webinar, Intro to Ceph and OpenStack. I'm Danielle Rumble, director of marketing at Inktank, your moderator and webinar organizer. Before we start, let's just take a moment to ensure that everyone is familiar with the webinar control panel. At the top of the slide panel you will find four buttons. These buttons can be used to ask a question, answer the voting questions, view attachments with additional related material, and also to rate the webinar and leave us feedback.
A
Thank you for joining our second webinar, which is part of the webinar series that we will be running over the next couple of weeks. Today's webinar features two experts: Kamesh Pemmaraju, a senior product manager for cloud solutions at Dell with a specific focus on Dell's OpenStack-powered cloud solution, and Miroslav Klivansky, principal technical marketing engineer at Inktank. Together they will describe how Ceph fits with OpenStack and why it is the best choice for cloud storage. With that, I'll turn it over to Miroslav.
B
Thank you, Danielle. As Danielle mentioned, we're going to be focusing on how Ceph and OpenStack work together during today's webinar. The agenda for today is to talk about the software and companies involved and about cloud storage considerations, then talk about the Ceph architecture and the unique features and benefits that Ceph brings to OpenStack, then examine some Ceph and OpenStack best practices, and wrap up with resources and next steps. At a really high level, Ceph is a distributed, unified object, block and file storage platform. It's created by storage experts and it's open source.
B
It's been part of the Linux kernel for several years, and it's integrated into a number of cloud management platforms, especially OpenStack, which is gaining a lot of adoption; we'll touch on that more later. Inktank is a company that was created to provide professional services and support around Ceph. It was founded in 2011 and seed funded by DreamHost, with additional angel investment from Mark Shuttleworth. Sage Weil is the CEO and the creator of Ceph. And to tell you a little bit more about OpenStack, here's Kamesh.
C
Thanks, Miroslav. Hi everyone, this is Kamesh from Dell. I thought I would spend a few minutes talking about what OpenStack is, for those of you who are not familiar with it. OpenStack is basically an open-standards, open-source cloud operating system. Now, if you look at what customers and companies are trying to do, they want to build private or public clouds, but they don't have very good options out there in terms of creating very large-scale, flexible clouds that are cost-effective.
C
So when I talk to customers that are trying to build clouds, they typically tell me their biggest pain is cost. If they try to implement something with a mainstream option like VMware, Microsoft or others, it's pretty expensive, and they're unable to quickly address consumer demand. They also want several options: on-premise clouds, off-premise clouds and hybrid clouds, and they want open source. If you look at the history of cloud, Amazon built their clouds based on open source, as did Google and Facebook.
C
Many of them have open source as the foundation for the clouds they have built, so it's natural for companies to look at open source and build their products around it. What open-source clouds allow companies to do is innovate on open platforms and frameworks and accelerate their time to market by leveraging community building and collaboration.
C
Specifically, OpenStack is a project that has gained a tremendous amount of momentum over the last year or two, since it was first incubated by NASA and Rackspace. It has one of the largest and most active open-source communities building open-source components, which I'll touch upon in a second. It has garnered a pretty high-profile set of contributors and companies that have been actively contributing to or otherwise promoting the project. Companies like AT&T, Dell, HP, IBM and many other large players are part of the community.
C
The other big thing that makes OpenStack really stand out is its regular release cycles. They come out with a release every six months, typically around the April and October time frame, which is immediately followed by what we call an OpenStack summit, where the developers get together and talk about the next release cycle. Now, OpenStack is not owned by any one company; it's governed by an independent foundation.
C
I'll touch upon the various services that OpenStack provides; they're pretty broad across the spectrum. Best of all, OpenStack is actually built with a very sound architecture from the ground up, keeping scale-out needs in mind. It's a pretty loosely coupled, asynchronous, message-based architecture and it's highly distributed, and therefore it's ideally suited for massively scalable applications. So what's in OpenStack? It's a collection of projects that, as I said, are loosely coupled and communicate with each other through asynchronous means.
C
There are a number of components and services that OpenStack provides. The compute service is called Nova. For storage, Swift object storage is one option that's already part of OpenStack. Glance is the imaging service, Horizon is the dashboard that provides the user interface, Keystone handles authentication, and Quantum is the network service. There's also Cinder, which is not mentioned here, which is the volume service that just came out in a recent release.
C
So what's the value proposition? Obviously the biggest pain for customers is that they are locked into proprietary solutions, particularly when cloud is just gaining momentum; they don't want to be locked into either a proprietary technology or a vendor that they cannot get out of. So OpenStack limits lock-in, which is one of the biggest concerns we hear from customers. It allows for massive scalability. It uses open-source hypervisors underneath, KVM and LXC, and there are efforts underway to also provide support for ESX and Hyper-V, which is gaining momentum.
C
Now, Microsoft has recently contributed Hyper-V compatibility with OpenStack, and with VMware joining the OpenStack Foundation, ESX support is starting to become active in that space, so we're seeing a lot of activity on making the existing hypervisors also available for OpenStack. And of course the most important thing is the open standard APIs, which allow the ecosystem to grow around it. Now, Dell has been pretty active with OpenStack and open source.
C
We were the first hardware solutions vendor to back OpenStack; we were one of the first companies to work with Rackspace and NASA when OpenStack first took off, and we also introduced the first OpenStack solution that integrated soup-to-nuts hardware, software, services and support back last year. We were also the first with an OpenStack deployment solution: we built a tool called crowbar, which I'll touch upon shortly and which Miroslav will cover as part of the Ceph presentation.
C
That's the IP that we contributed; crowbar is also open source, and we've been regularly participating in various community activities. We have been sponsors of all the OpenStack conferences to date, we lead the Austin and Boston OpenStack meetups and a number of community events worldwide, and, as I mentioned with OpenStack governance and the foundation earlier, we are a Gold-level sponsor of the foundation and Dell also has two seats on the Foundation Board of Directors. I'll touch upon crowbar shortly.
C
It's the contribution that Dell made to the community, and it's grown pretty rapidly over the last year or two: we have more than 500 followers, 1,000-plus crowbar hits in just 90 days, 2,000 downloads, and so on. It's doing pretty well as an open-source project. It's available on GitHub, so you can try that one out too. So, a little bit about what crowbar is: crowbar is an open-source software management framework. Think of a situation where a bunch of hardware servers show up at your doorstep.
C
How do you go from there to a fully functional OpenStack cloud? It turns out that's a pretty difficult problem because of all the complexity that's involved in provisioning the servers: getting your BIOS and RAID settings right, making sure you have the right firmware settings, getting all your services like NTP and DHCP set up correctly, not to mention all the different parameters and configurations you need to worry about at the OpenStack level. Crowbar effectively automates all of that for you: it brings up an OpenStack cluster and also a Ceph cluster.
C
In this case, and you will hear about it more from Miroslav, it takes you basically from bare metal up to a fully functional system that you can use right away. It was built with an extensible architecture in mind. There are these things called barclamps, which are independent modules that add functionality. For example, if you have a new switch and you want to add it into your cluster, you can have crowbar provision it for you.
C
Or, if you have a new software component like Ceph, for example, you can create a barclamp that will let you install, deploy and configure Ceph so that it works with OpenStack. A number of companies have collaborated with us in the community: Intel and SecureWorks; VMware built a barclamp around Cloud Foundry; DreamHost has been an active contributor; and the Inktank folks have actually built a barclamp to go ahead and install and deploy a Ceph cluster. SUSE has also been an active contributor.
C
The other part of crowbar is that it is actually an operational model. It deploys bare metal, as I mentioned earlier, but also, once you have your cluster up and running, you have to operate it in a highly efficient manner as new patches come in, as you deploy new software and applications to it, and as servers fail and things like that. Crowbar can help you manage all of that in a very efficient fashion. It is designed for hyperscale environments, so you can put extensions into barclamps.
C
You can plan for scale using proposals. Underneath crowbar is Opscode Chef, a configuration management system that is used for these kinds of DevOps-style management capabilities, and it also helps you with continuous integration. You will hear more about how crowbar is used in the case of Ceph: basically, the crowbar barclamp that Inktank built allows customers to deploy an entire Ceph cluster from the bare metal up and effectively integrate it into OpenStack, so you can use the full capabilities of Ceph.
B
That's important: being able to delegate management, so that you don't have to have a team of administrators doing everything for the users. You can delegate management to different teams or groups or users as needed. No single point of failure and commodity hardware are also really important. Commodity hardware because of the costs; the availability of commodity hardware is part of what's driving this move toward cloud architectures in the first place.
B
Having no single point of failure, thanks to redundancy and lots of hardware available in the cloud, is also key for continuous operation. Having the ability for that infrastructure to be self-managing, with fail-in-place maintenance, is really useful and critical for some deployments, because what this lets you do is deploy your clouds in cost-effective remote locations or colos, where you might not really have anyone on your staff visit except maybe once every couple of weeks. So you need to be able to have everything run more or less by itself.
B
Being managed remotely, lights-out, is a critical feature of successful cloud deployments. Dynamic data placement, so that you can put data wherever there is free space in a way that balances the load across all of your hardware, is important, as is the ability to dynamically adjust. So as you add more nodes and disks to your cloud, having the data automatically adjust and migrate in such a way that it takes advantage of the new hardware, but minimizes data movement and unnecessary migration, is key. Support for VM migration is also important.
B
Having the ability to live-migrate VM instances between different compute nodes in the cloud is critical, and that's one of the things that is difficult to do without some kind of centralized storage mechanism like Ceph or some expensive, proprietary arrays; but, as you'll see, Ceph gives you that capability. And then having your storage foundation provide unified storage capabilities, the ability to do block as well as object and file, adds flexibility to your cloud deployments, because now you can host a wider range of applications and map the appropriate storage type to the application based on need.
B
Right now with OpenStack Essex, and with Folsom having just come out and starting to move into production in more places, but especially in Essex, there was a fairly sizable gap around block storage in native OpenStack solutions. Swift, if you're familiar with OpenStack, has been a good object storage starting point. It provides mature, solid foundations for objects and images. But if you need to have a lot of dynamic changes in your cloud infrastructure, it can be a little bit painful to reconfigure across hardware changes and failures.
B
More importantly, Swift does not provide any block or file capabilities, and it's really not optimized (you can kind of fake it, but it's not optimized) for use cases that would typically use virtual block devices or virtual disks: higher-performance storage, database operations, or any other kind of existing apps that need file or block storage access. Because of that, people have been using nova-volume based on LVM and iSCSI to meet some of those storage needs.
B
But there are a couple of downsides to that. One is that there are many layers to build this: you're effectively taking a bunch of disks, managing them through some disk controllers with LVM, and then LVM is providing disk targets that are then exported by iSCSI; there's an initiator on some compute node that maps to that LVM LUN over iSCSI and connects it to the VM instance to use as a disk, through the Linux layer and then the hypervisor. So there are a bunch of different layers.
B
That's not really optimal for performance but, more importantly, the node becomes a single point of failure. If the node that's hosting those iSCSI LUNs via LVM goes down, all of the instances that are using those disks will go down as well. And even if the node doesn't go down but you simply lose a disk, the RAID reconstruction based on LVM is going to degrade performance and take a really long time to rebuild, especially if you're talking about two- or three-terabyte SATA drives.
B
Because of that, we wanted to position Ceph as a way of closing that block storage gap in OpenStack. There are a number of benefits to using Ceph as the block storage provider in OpenStack. As Kamesh mentioned, it's integrated with crowbar: we wrote a barclamp that plugs right into crowbar. You just upload it through the GUI and you now have Ceph capabilities in your OpenStack cluster, so it simplifies growing deployments or building them from scratch.
B
It's tested with OpenStack: Ceph has been evolving alongside OpenStack, and there have been a number of cross-contributors on both projects for several years. One of the key capabilities that it provides is enterprise-style storage management at a much lower dollar-per-gigabyte than you would get with some other enterprise offerings, and it's also scalable to many petabytes, which is something that many enterprise arrays have a really hard time doing. It has no single point of failure, and it has self-management and fail-in-place behaviors.
B
The autonomous operation allows you to deploy in remote locations and not have to worry about frequent visits. You can do all of your management lights-out and, in fact, crowbar kind of simplifies that on top of Dell hardware by integrating the BMC and lights-out management capabilities. In terms of storage capabilities, it provides dynamic data placement, so your data is distributed across all of the different hardware that's available in the cluster.
B
You can write policies to tailor that distribution if, for some reason, that is more optimal for you, but out of the box it is capable of evenly leveraging all the hardware in the cluster and dynamically adjusting as your cluster changes. It's also a unified storage platform: while we are focusing on the block storage capabilities during this webinar, there is also an object access method and a filesystem access method.
B
One of the things that makes Ceph interesting and different, and a very powerful storage mechanism, is the CRUSH algorithm. If you haven't looked into Ceph, CRUSH is the key that's used to distribute data evenly and to allow this massive scalability. Basically, what CRUSH does is give you the ability to compute metadata rather than having to store it and synchronize it across multiple metadata management nodes. The CRUSH computation is distributed, so the components that need to do the metadata operations are able to do them in a very distributed fashion.
B
Basically, the clients that need to look up the metadata do the computation based on a map that CRUSH processes, and then the OSDs, the disks themselves, act as intelligent object stores in a peer-to-peer fashion, also using the CRUSH algorithm to figure out which other OSDs are their peers so they can distribute data. I'll get into a lot more of that detail later on.
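To make the idea of computing placement (rather than storing a lookup table) concrete, here is a deliberately simplified Python sketch. It is not the real CRUSH algorithm — just a hash-based placement function over a hypothetical flat cluster map — but it illustrates how any client holding the same small map can independently compute which OSDs hold an object, with no central metadata lookup.

```python
import hashlib

# Hypothetical cluster map: a flat list of OSD ids. Real CRUSH uses a
# weighted hierarchy of hosts, racks and rooms, not a flat list.
CLUSTER_MAP = ["osd.0", "osd.1", "osd.2", "osd.3", "osd.4", "osd.5"]
REPLICAS = 3

def place(object_name, cluster_map=CLUSTER_MAP, replicas=REPLICAS):
    """Deterministically map an object name to `replicas` distinct OSDs."""
    digest = int(hashlib.sha256(object_name.encode()).hexdigest(), 16)
    start = digest % len(cluster_map)
    # Walk the map from a hash-derived offset; every client computes the
    # same answer from the same map, so no metadata server is consulted.
    return [cluster_map[(start + i) % len(cluster_map)] for i in range(replicas)]

# Any client with the same map gets the same placement for this object.
print(place("volume-1234/object.0042"))
```

The real CRUSH adds weights, failure-domain awareness and stable pseudo-random selection, but the key property is the same: placement is computed, not looked up.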
So
after
that,
the
the
fact
that
SEM
provides
an
advanced
virtual
block
device
capability
that
gives
Enterprise
style
storage
capabilities
from
utility
server
hardware
is
an
important
differentiation.
B
Things like thin provisioning, allocate-on-write snapshots, LUN cloning, and integration with the Linux kernel and OpenStack are all part of what Ceph brings to the table when you're using it as the block provider in OpenStack. It's an open-source solution, and there are a lot of benefits to that. One is just maximizing the value of your deployment by leveraging free software.
B
But the other real benefit is that you get to control your own destiny with access to the source code. Whether you're building a really long-term archive and you want to ensure continuity (sometimes companies aren't there for the long term), if you have access to the source code you have control of your own destiny, and the ability to customize and modify that code provides differentiated value.
B
There are lots of different people jumping into the cloud business and, if you're looking to build differentiated value, having access to source code that you can modify and customize gives you a chance to differentiate the value that you bring to the market. And then lastly, I've mentioned this a number of times, but as a unified storage platform you get to host multiple use cases in a single storage cluster, so that gives you benefits of scale.
B
You can have lots and lots of disks without having to create silos or islands of storage that are only available for block, or only available for object storage, or only a NAS file server. Ceph lets you build a common pool from which you carve out space as needed for all of those different access methods.
B
One of the things that's really useful in a Ceph deployment for a block provider is persistence: by using Ceph's RADOS Block Device, you get volumes that are persistent by default. VMs behave more like traditional servers, or, if you're familiar with VMware environments, like VMware VMs behave: they don't disappear when you reboot them, and their disks are basically persistent. It gives you host independence, because the Ceph storage cluster is accessible from all the different nodes.
B
If you need to migrate a VM, you don't have to move its storage to the new location; you can just migrate the compute and have the destination hypervisor host simply access the same storage pool. Ceph provides easy snapshots, so you can make snapshots of instances that also snap the underlying storage, and it provides easy cloning: you can use the API to create a new volume populated with the contents of an image from Glance, and then you're ready to boot from that new volume.
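As a rough illustration of the snapshot-and-clone workflow just described, here is a minimal sketch using the librados/librbd Python bindings. The pool name, image names and size are made up for the example, and in a real OpenStack deployment Cinder and Glance drive equivalent calls for you.

```python
import rados
import rbd

# Connect to the cluster with the local ceph.conf and open a pool
# (the pool name 'volumes' is just an assumption for this example).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('volumes')

rbd_inst = rbd.RBD()
# A 10 GiB base image in the newer image format so it supports layering.
rbd_inst.create(ioctx, 'golden-image', 10 * 1024**3,
                old_format=False, features=rbd.RBD_FEATURE_LAYERING)

# Snapshot the base image and protect the snapshot so it can be cloned.
with rbd.Image(ioctx, 'golden-image') as img:
    img.create_snap('base')
    img.protect_snap('base')

# Thin, copy-on-write clone: the new volume is usable almost instantly and
# only stores blocks that diverge from the parent snapshot.
rbd_inst.clone(ioctx, 'golden-image', 'base',
               ioctx, 'vm-boot-volume',
               features=rbd.RBD_FEATURE_LAYERING)

ioctx.close()
cluster.shutdown()
```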
B
You see the core layer, the Ceph object storage layer, and it's called RADOS, which stands for Reliable Autonomic Distributed Object Store. It's self-healing and self-managing, and the key is that it's made up of intelligent storage nodes. We'll get into more detail later, but the core piece is this thing that we call the OSD, the object storage device, which is basically managed by the object storage daemon.
B
What that daemon does is take a dumb disk and make it into an intelligent object storage device. That has benefits, because these intelligent object stores have peer-to-peer behaviors: you can ask them to interact with their peers in intelligent ways. It is also extensible, so, just like in object-oriented programming, you can extend what the objects can do through code and give them the capability to do things that are a good fit either with the higher-level functions that you see in the diagram or with the application.
B
One example we've used in the past is thumbnails. If you're using the RADOS object store as an image repository, you can extend those image objects in the pool by providing a method that knows how to compute thumbnails of a certain resolution, and if someone wants a thumbnail rather than the original image, you don't have to return a five-megabyte image; you can just have the OSD compute the thumbnail and return that instead. That's part of the power that RADOS brings to the table.
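For a sense of what invoking such an object-side method can look like, here is a hedged sketch using the librados Python bindings. It assumes a RADOS object class named 'thumbnailer' with a 'generate' method has already been written and loaded on the OSDs; the class, method, pool and object names are invented for illustration, and Ioctx.execute() only appeared in librados Python bindings newer than those available at the time of this webinar.

```python
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('images')  # hypothetical pool of full-size images

# Ask the OSD that stores 'photo-123' to run the (hypothetical) object-class
# method 'thumbnailer.generate' and hand back only the small result, instead
# of shipping the multi-megabyte original over the network.
ret, thumbnail_bytes = ioctx.execute('photo-123', 'thumbnailer', 'generate',
                                     b'{"width": 128, "height": 128}')

with open('photo-123-thumb.jpg', 'wb') as f:
    f.write(thumbnail_bytes)

ioctx.close()
cluster.shutdown()
```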
B
Now, that power is mostly harnessed through librados, a library that applications can integrate with to take advantage of those RADOS functions, and all of the access methods that we're going to talk about are built on top of librados, including the Ceph file system. The diagram kind of shows it bypassing librados, but in reality it uses librados as well.
B
There's the RADOS gateway, which provides a RESTful gateway for object storage: it basically provides an S3- and Swift-compatible RESTful API and translates those S3 and Swift API calls into native Ceph object accesses. There's the RADOS Block Device, RBD, and that is the component that we're going to spend most of our time focused on here. This is a reliable and fully distributed block storage device with thin provisioning, cloning, snapshots and a host of powerful features that take advantage of a highly distributed back-end storage pool. And then finally, there's CephFS.
B
So how do we build this whole system? The two key components are monitors and these OSDs that I mentioned. The monitors maintain a cluster map. Basically, you have an odd number of monitors in the system for quorum; typically we would start with three monitors in the system, and you might go to five if you have a very large cluster.
B
The monitors look at the health of the components of the cluster, and the map that they maintain is then distributed to the clients that need to take advantage of cluster storage and to the OSDs that need to replicate data to their peers. So the map is there, but it's relatively small in size. It's proportional to the number of disks you have in the cluster, so the number of objects it needs to manage would be on the order of tens of thousands at most.
B
Not billions, like you would have if you had to maintain traditional metadata. And that map changes very infrequently: it really changes only when components fail or new components are added to the system. Other than that, the map should stay stable, and the clients and the OSDs know how to talk to the monitors to get updates as the map changes; there are mechanisms for distributing the new map to the clients and the OSDs.
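As a small, hedged illustration of how a client talks to the monitors for cluster state (as opposed to data), here is a sketch using the librados Python bindings to ask the monitors for cluster status as JSON. It assumes a local /etc/ceph/ceph.conf and a keyring with permission to run monitor commands; the exact layout of the returned JSON varies between Ceph releases.

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Monitor commands are JSON; 'status' reports overall cluster state,
# including which monitors are currently in quorum.
cmd = json.dumps({'prefix': 'status', 'format': 'json'})
ret, outbuf, errs = cluster.mon_command(cmd, b'')
if ret == 0:
    status = json.loads(outbuf)
    print('monitors in quorum:', status.get('quorum_names'))
    print('top-level sections :', sorted(status.keys()))
else:
    print('mon_command failed:', ret, errs)

cluster.shutdown()
```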
B
The monitors do not serve any objects or data to the clients, and that's really a critical point: part of what enables us to scale is the fact that the monitors maintain a map of the cluster and do not participate in any actual data-serving functions. The OSDs are the key component of where data is stored and how it's served. As a best practice, we recommend one OSD (object storage daemon, a process that runs in Linux) per disk in the cluster, and at least three nodes in the cluster for replication redundancy.
B
The OSDs serve the stored objects to the clients, and they intelligently peer with each other to perform replication tasks. So a client would consult the map with the CRUSH algorithm to figure out where to write data, and it would write the data to a primary OSD based on that CRUSH map.
B
The client and the OSDs would know who the peer replica OSDs are; the primary OSD that receives that write would then be responsible for making sure that it has up-to-date replicas, or copies, of that data on its OSD peers, and it would push the data out and make sure that it is persistently stored on those peers. It also supports different object classes, so there's a notion of what those objects are. Okay, on this slide what we show is what a cluster would look like, composed of OSDs and monitors.
B
On the bottom you see the cluster: each one of those blue and red boxes represents a node, and the gray boxes are the monitors that are in the cluster. If you drill into one of the OSD nodes, what you'll see is that it's made up of a physical disk that has a filesystem on top of it, and the filesystems that we do our testing with are Btrfs, XFS and ext4, depending on the application.
B
Okay, the RADOS Block Device allows you to store virtual disks in RADOS. It's basically a presentation layer that translates from something that looks like a virtual disk, behaves with virtual-disk semantics, and knows how to respond to requests for snapshots and clones, down onto that RADOS object store. One of the things that it brings to OpenStack is decoupling VMs from the hosts and supporting live migration.
B
The data behind the volumes and images is striped across the cluster for better throughput, so the underlying data is broken up into objects, and those objects are distributed throughout the Ceph cluster and then also replicated throughout the Ceph cluster, so there's no single point of failure. One of the cool features is that Ceph is also integrated with QEMU and KVM and with OpenStack Nova, so this allows you to boot directly from volumes stored in Ceph. You can also mount a volume in Linux via the Ceph kernel driver.
B
That's been part of the Linux kernel for a while, and it can be useful either for VM applications, if you're running Linux in your VM instances, or it can be connected to the hypervisor hosts for various other applications. So it's flexible: it's not just the Linux kernel, it's also KVM and QEMU, but if you're using other hypervisors, then the alternate path through Linux might work.
B
Okay, in this diagram what we see is what something might look like from the VM's perspective. At the top you have a virtual machine that's using a virtual disk; it's connected to an RBD instance, a Ceph block device, using it as the boot device, and it can also use it for application storage.
B
So while the VM sees a virtual disk, the underlying software maps that and distributes the data throughout the different nodes in the Ceph cluster. As mentioned before, this enables live migration. If you have the VM running on the node on the left, it's accessing the set of blocks that make up its virtual disk; if the VM migrates to the compute node on the right, that compute node still has access to all of those different pieces and knows how to consistently present them as the same virtual disk.
B
So you can migrate the compute without having to do any movement of the storage, and this is a powerful capability that traditionally people have needed SAN or NAS appliances to get in a virtual infrastructure environment. And, as mentioned previously, when the host is running Linux it is also able to use and access that block device directly: RBD has a kernel module in Linux that takes advantage of librados and can access blocks distributed throughout the cluster.
B
So that was basically the architecture of the Ceph virtual disks, the RADOS Block Device. Now let's talk a little bit about the RADOS gateway, which is the RESTful interface to the RADOS object store on the bottom layer. Basically, in an OpenStack deployment, or any deployment, what would happen is you would instantiate the RADOS gateway process on some number of nodes in your cluster.
B
As a best practice, it might be useful to dedicate certain hardware to running the RADOS gateway, or you can just run that process on nodes that exist in the cluster; just make sure that you have some extra CPU cycles and memory to make that work. It's effectively an Apache FastCGI process; the RADOS gateway talks librados to the RADOS object store on the bottom, and it talks a REST API, with S3- and Swift-compatible API calls, to the application. So typically, in a deployment, you would have multiple instances of this RADOS gateway.
B
They all connect to the same object store, so they all have access to the same objects, and then there is a load balancer (it's not drawn in this diagram, but typically there is a load balancer in front of those RADOS gateways) so that application requests first come to the load balancer and then are distributed across the multiple instances of the RADOS gateway process.
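To give a feel for the S3-compatible side of the gateway, here is a minimal hedged sketch using boto3 against a RADOS gateway endpoint. The endpoint URL, port and credentials are placeholders; in practice the keys come from radosgw user administration, and the endpoint is whatever sits in front of your gateway instances, often the load balancer just described.

```python
import boto3

# Point a standard S3 client at the RADOS gateway instead of AWS.
# Hostname, port and credentials below are illustrative placeholders.
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:7480',
    aws_access_key_id='RGW_ACCESS_KEY',
    aws_secret_access_key='RGW_SECRET_KEY',
)

s3.create_bucket(Bucket='demo-bucket')
s3.put_object(Bucket='demo-bucket', Key='hello.txt',
              Body=b'stored as RADOS objects behind the gateway')

obj = s3.get_object(Bucket='demo-bucket', Key='hello.txt')
print(obj['Body'].read().decode())
```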
B
Okay, so that kind of covers the Ceph architecture. The next section is best practices, but before we get to that, let's take half a minute to look at these questions and vote. You'll be able to vote even after the 30 seconds are done, but let's vote now, please.
B
Okay, if you haven't had a chance to vote, you can just vote as I keep presenting, but now let's take a look at some OpenStack and Ceph best practices. Some of these have been tested and validated; some of these are based on our experience with Ceph in general. But first, let's start off with networking.
B
One of the things that we recommend for low latency and high bandwidth is taking advantage of 10-gig switches, like the Dell Force10 S4810, and having a top-of-rack switch that carries both front-side and back-side 10-gig traffic to the nodes in that rack.
B
As a lower-cost approach, we recommend trunking four of the 10-gig ports to create 40-gig links between the racks and the end-of-row or spine routers. In this spine topology, each row of racks has a top-of-rack high-speed switch that is linked to two redundant end-of-row or spine switches, and these are effectively routers that we are then going to interconnect to build out a network mesh in the data center.
B
So each pod contains two spine switches; each leaf switch is redundantly uplinked to the spine switches at the end of its row. The spine switches are then redundantly linked to each other, also using the 40-gig trunks, and each spine switch has three uplinks at 40 gig to other rows. One of the cool ways to connect the different rows in the data center of this cluster is to basically build rings using those spine routers.
B
You build two counter-rotating rings out of those pods or rows, in such a way that you basically have a clockwise path between rows, a counterclockwise path, and then a cut-through path between the different rows. What that does is ensure that the maximum number of hops between any two nodes is n divided by four, when you have n pods or n rows in your data center. So from a network layout perspective, this is one of the things that we found to be a good, relatively optimal deployment that's not too complex to set up or maintain.
B
Now let's take a look at best practices for the OSDs. One of the keys is to capacity plan with enough CPU cycles and memory per OSD process. What we recommend is two gigabytes of memory and one gigahertz of Xeon-class processing power per OSD. When everything is working smoothly, the OSD processes don't use anywhere near that much.
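The per-OSD rule of thumb quoted above (roughly 2 GB of RAM and 1 GHz of Xeon-class CPU per OSD, i.e. per disk) turns into a simple back-of-the-envelope node sizing calculation. The helper below just applies those numbers from the talk; treat it as planning guidance under those assumptions, not a hard requirement.

```python
def size_osd_node(disks_per_node, ram_per_osd_gb=2.0, ghz_per_osd=1.0,
                  core_speed_ghz=2.5):
    """Rough per-node RAM/core estimate assuming one OSD per disk."""
    ram_gb = disks_per_node * ram_per_osd_gb
    cores = disks_per_node * ghz_per_osd / core_speed_ghz
    return {'osds': disks_per_node, 'ram_gb': ram_gb, 'cpu_cores': round(cores, 1)}

# Example: a 12-disk storage node with 2.5 GHz cores.
print(size_osd_node(12))   # {'osds': 12, 'ram_gb': 24.0, 'cpu_cores': 4.8}
```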
B
But in error cases, if you have disks that are failing, or (not an error case) if you're adding capacity to the cluster and then need to replicate and redistribute data, or re-replicate data for failed disks, it's useful to have some amount of extra CPU cycles and memory. That headroom is used for checksumming on a regular, ongoing basis, as well as all of the processing needed to figure out how to adjust things when something fails in the cluster and data redundancy has to be restored.
B
It's also possible, as another best practice, to use SSDs as journal devices to improve latency; that might be useful for some workloads and not for others. And then one of the things worth considering is building different tiers of storage for mixed-application clusters: you can put OSDs on top of different types of disks to create tiers right out of the box.
B
For example, one tier on faster but higher dollar-per-gigabyte SSDs and another tier sitting on really high-density, high-capacity SATA disks; then, when you create different pools for different groups to use, you can choose which of these tiers you're going to use to create the new pool.
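As a hedged sketch of how such a pool might be created programmatically, the snippet below issues the 'osd pool create' monitor command through the librados Python bindings. The pool name and placement-group count are examples, and the step that binds the pool to an SSD-only CRUSH rule is only sketched in a comment, because the exact option spelling has changed across Ceph releases.

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Create a pool intended for the fast tier (name and pg_num are examples).
cmd = json.dumps({'prefix': 'osd pool create',
                  'pool': 'ssd-volumes',
                  'pg_num': 128,
                  'format': 'json'})
ret, outbuf, errs = cluster.mon_command(cmd, b'')
print('pool create returned:', ret, errs)

# To pin the pool to an SSD-backed tier you would then assign a CRUSH rule
# that only selects SSD OSDs (roughly: ceph osd pool set ssd-volumes
# crush_rule <ssd-rule>); check your release's documentation for the exact
# option name, since it has varied between versions.

cluster.shutdown()
```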
Okay, so those are OSD best practices. When it comes to monitors, one best practice is to deploy the monitor role on dedicated hardware. The monitor is not very resource intensive; it doesn't need a lot, so you can use a relatively small box.
B
A not-very-powerful 1U box is enough for the monitor, but having dedicated hardware lets you make sure that the monitor is never starved for CPU or memory, because the monitors are very critical pieces of the cluster. They're redundant, but it's still better to keep them off on their own rather than have to deal with monitors being starved for CPU and running out of resources.
B
If you do deploy monitors on the same hardware that's also running other things like OSDs, make sure that you provision enough CPU and memory so that the monitor never runs out of resources. One of the other important points is to always deploy an odd number of monitors; three or five are good numbers. This is for quorum: what we need is for a majority of the monitor nodes to be up and available in the cluster for the cluster to have quorum and operate.
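The majority rule mentioned here is simple arithmetic, but it is the reason odd monitor counts are preferred; this tiny helper just spells it out and shows why adding a fourth monitor buys no extra failure tolerance over three.

```python
def quorum_size(n_monitors):
    """Smallest majority of n monitors."""
    return n_monitors // 2 + 1

def tolerable_failures(n_monitors):
    """Monitors that can be lost while a majority remains."""
    return n_monitors - quorum_size(n_monitors)

for n in (3, 4, 5, 7):
    print(n, 'monitors -> quorum of', quorum_size(n),
          ', tolerates', tolerable_failures(n), 'failure(s)')
# 3 -> quorum 2, tolerates 1; 4 -> quorum 3, still tolerates only 1;
# 5 -> quorum 3, tolerates 2; 7 -> quorum 4, tolerates 3.
```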
B
If you have fewer than 200 nodes in the cluster, three monitors should be fine; for larger clusters you might benefit from five. The main reason you might go past five, to something like seven, is if you've defined many different fault zones and want to make sure that there's coverage of monitors across the different fault zones. And since I'm touching on fault zones, let me say a bit more.
B
That's why the fault zones are important, and that's something that adds to the power of CRUSH. On mixed-use deployments: for simplicity, if you have many different roles in the cluster, it might be a good idea to dedicate hardware to each specific role. If you're running relatively small clusters, that might not be practical or cost-effective, so if you need to, you can combine multiple functions on the same hardware.
B
But dedicating everything generally just adds to the expense. So what we recommend as the best practice is simply to allocate one OSD per disk and use a RAID controller, or a disk controller that doesn't necessarily have NVRAM or RAID — whatever is cost-effective for providing connectivity to the disks without creating a disk controller bottleneck. We have a number of performance investigation articles on the ceph.com blog that talk about different types of drive controllers and the benefits of each one. Back to the slide.
B
Ceph incorporates this notion of multiple nested fault zones; I mentioned this a bit before. Ceph understands those fault zones and how they're nested within each other. Each OSD is tagged with its location in multiple zones, and then CRUSH uses that information to distribute data in such a way that you're protected from zone failures. Roles need to be replicated in the cluster in the same way that data is replicated in the cluster, so ensure that there are multiple nodes within different fault zones. For monitors you need at least three.
B
For OSD nodes, a minimum of two for production, so you can replicate data, is what we recommend. It's a good idea to have three, so you can keep three copies of the data, but two is functional if you're really starting small, and it'll be able to scale up to thousands of nodes. And then, just to bring the point home, Ceph needs sufficient nodes per fault zone so that it can replicate those objects.
B
If you care about reliability and managing those fault zones, you need to make sure that there are enough nodes of each role in each fault zone. And then, finally, support: Inktank and Dell provide expert support. We are partnered together and we will partner with you for complex deployments. We can help with solution design, proof-of-concept work and solution customization.
B
So if you kind of know the broad strokes of what you want to build, but you want to customize it and tailor it to your specific needs so that you can differentiate your value, we can help there. Capacity planning, performance tuning and optimization are all areas where we can help, and having access to expert support is definitely a production best practice.
B
There are a number of companies that really love open source but are not comfortable going into production because there's no specific support for that software, so Dell and Inktank are there together to provide support for Ceph and OpenStack. Troubleshooting, debugging or just general support is there for you. Now let's talk about some resources where you can learn more.
B
This slide shows all the different web pages and blogs where you can go to learn more about Dell's OpenStack-powered cloud solutions. You can go to the Inktank site at /dell to see how we've partnered with Dell; there's actually a bunch of different videos and a white paper with the best practices you can download from that link. openstack.org is the community site for OpenStack, and then there are a number of bloggers at Dell and Inktank whose blogs you can consult for more information and more depth. Right, Kamesh?
C
I also wanted to add something on the professional services and support side of things at Dell. We talked a lot about Ceph, a great technology; at the same time, we also have complete services and reference architectures for OpenStack. So if you're looking to build an OpenStack cluster from the ground up, we have the expertise and reference architectures that include hardware we have tested, supported software, OpenStack and the other components that you require to have a completely functional cloud environment from the ground up. So there are lots of resources.
B
So, as Kamesh mentioned, there are tons of resources out there in the community, on Dell's website and on Inktank's website as well. If you want to learn more about the consulting and support services that you can get through Dell and Inktank, check out these links at the bottom of the slide. This presentation is being recorded, so you'll be able to access it afterwards, review anything you might have missed, and check out the slides or the links themselves.
B
Sage Weil, the creator of Ceph, is going to be talking about advanced features of Ceph, so if you already know a lot about Ceph and want to know even more, the February 12th webinar is a great one to see. The webinars are going to be recorded and they're available at the link below. If you go to the Inktank site under News & Events, Webinars, you'll be able to find links to the webinars, and any additional resources relevant to a webinar will be attached to that webinar's page, so you can download those attachments as well.
B
If you'd like to contact us, here's the contact information for both Inktank and Dell. You can send us email, give us a call, and check out more resources online at the following URLs. Inktank is on Twitter and Facebook, and we have our YouTube channel, so check it out; those are great places to learn more about Ceph and how people are using Ceph.
C
You can reach us at Dell at openstack@dell.com. I do want to point out the GitHub site on the previous slide: there you can find all the open-source bits from Dell on OpenStack as well as crowbar. You will also find links to community sites, newsletters and things like that, so be sure to check out those links as well.
A
Now we only have a few minutes left, but I'd like to ask a couple of the questions for Miroslav and Kamesh that were asked during the presentation. One was: will it work with Xen?
B
To answer that question, let me go back to the kernel RBD module. Xen is not directly integrated with Ceph in the same way that QEMU and KVM are. What you can do, if you're using Xen as the hypervisor, is connect the RADOS block device directly to Linux; it'll show up as a Linux disk device, something like /dev/rbd..., and then you can connect your Xen VM directly to that block device. So it will work with Xen; the path is just not quite as direct as it is with KVM.
A
Great, thank you. One more: when is the next OpenStack release coming out?
C
Yeah, as I mentioned, OpenStack comes out every six months. The last release of OpenStack was called Folsom, which was released in October; the next one is called Grizzly, and it's coming out in April. There are some really nice enhancements happening to the volume component called Cinder, which we can take advantage of in terms of better performance and better access to volume and block devices.
A
Great, thank you both. We're coming right up on the hour, so I'd like to thank everyone for attending; I know that everyone has a very busy schedule. There were also a lot of other questions that were asked, and we'll follow up with them personally within the next few days. The recording should be ready for you shortly. The actual presentation slides are available right now under the attachments button, and there is also a link there to the Dell and Inktank landing page, with many different assets. Thank you very much.