From YouTube: Sebastian Han @ Ceph Day Paris
Okay, hello everyone, thanks for being here. This presentation is about Ceph and OpenStack and how we can deploy Ceph. Can you hear me? Can you hear me? Okay, should I speak louder or no? Okay, good. So we're going to spend the next 20 minutes explaining the state of the integration of Ceph into OpenStack, what has been done during this Juno cycle, and we will also touch a little bit on how to deploy Ceph using Ansible.
So basically you can reach me at seb@redhat.com — pretty amazing, right? I got a really short alias, so I'm pretty proud of it. I'm an architect, so I basically build and design platforms; this involves compute, storage and networking as well. My main domains of expertise are OpenStack and Ceph, obviously. This is my personal blog, and this is the Ceph blog — don't hesitate to check them out, we have tons of really good articles. So, let's start with OpenStack and Ceph and what has been done in the Juno cycle.
Just so you know, the Juno cycle is not over yet — I think we still have a month and a half or something; I think we already reached juno-3. So many things have been implemented, and some things are still ongoing work.
Obviously one of the best things is devstack Ceph support. I struggled for seven months to get this patch down into devstack, but I'm pretty happy, because it will definitely improve the way we work and the way we can implement new things into OpenStack with Ceph. Basically, if you just git clone the latest version of the upstream devstack repo, you can just put in a localrc — I have an example on the next slide — and it will basically configure a Ceph cluster and configure Glance, Nova, Cinder and even Cinder backup with it.
I'm not sure it's quite readable, but I will send the slides later. This is an example of a localrc. The most interesting flags are, of course, the enabled-services one with Ceph. You can specify many options: defining how big the cluster should be — 10 gig, or by default it's 4 — a path for the Ceph configuration file, and you can play with the Ceph replication, saying okay, 1, 2 or 3 replicas. And I believe that's it, yeah.
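For reference, a localrc along these lines is what the slide showed — this is a minimal sketch based on the options just mentioned, and the exact variable names may differ between devstack revisions:

```bash
# localrc — sketch of a Ceph-enabled devstack
ENABLED_SERVICES+=,ceph         # bootstrap a Ceph cluster alongside devstack
CEPH_LOOPBACK_DISK_SIZE=10G     # how big the cluster's backing disk should be
CEPH_CONF=/etc/ceph/ceph.conf   # path for the Ceph configuration file
CEPH_REPLICAS=1                 # replica count: 1, 2 or 3
```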
So now, what has been implemented into OpenStack itself — because devstack is not really inside OpenStack. In Nova we now have support for copy-on-write cloning. To give you a little bit of background: by default, when you boot a VM, the process is nova-compute contacting the Glance service and asking for the image; it downloads the image locally and then boots the VM from it.
With Ceph the behavior was more or less the same, but the main difference was that we were still fetching the image from Glance through the compute node, and then we had to import the image into Ceph, which was really, really inefficient. That's why we ended up using copy-on-write clones: RBD images — well, snapshots of RBD images — support cloning, so we can just do a copy-on-write clone, and basically everything is now happening at the Ceph level.
So when you decide to boot a virtual machine, if the image is already living in Ceph — so Ceph is used as a backend for Glance — then as soon as you upload the image, it gets snapshotted and protected, and when you want to boot a VM we just do a copy-on-write clone. We just trigger the KVM process and simply attach the block device to it. The good thing about this is that booting a virtual machine is extremely fast.
So first, it's really fast, and it's really efficient in terms of space as well, because copy-on-write clones are efficient by default: everything refers back to the parent image. As soon as you want to perform new I/O, if the object you want to access already lives in the parent image, it is read from the parent image, and you just write what needs to be written, basically.
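As a reminder of how this gets wired up, here is a sketch of the well-known Glance/Nova options involved — the pool and user names are placeholders:

```ini
# glance-api.conf — expose image locations so Nova can clone instead of download
[DEFAULT]
show_image_direct_url = True

# nova.conf — boot instances as copy-on-write RBD clones
[libvirt]
images_type = rbd
images_rbd_pool = vms                    # placeholder pool name
rbd_user = cinder                        # placeholder cephx user
rbd_secret_uuid = <libvirt secret uuid>  # secret registered with libvirt
```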
When you ask for a live migration, the code assumes that the boot destination already exists because you use shared storage, so it just checks whether the directory of the instance is already present on the destination compute node and whether the libvirt XML is already there too. In our case it wasn't there, because Ceph is not a distributed file system for this storage — it's shared storage using block devices. So this check failed, and then we ended up falling back to block migration.
So we had to move everything byte by byte, which wasn't really efficient. Now we distinguish two things: is it shared storage, and is it shared directories? If the directories are shared, then it's live migration; but we can also have the case where you use Ceph and the directories are not shared, yet it is still shared storage, because the exact same entity is managing all the block devices.
Basically, the nova-compute process just scans this partition — it can be a partition — or scans the file system, and then reports the available size, which in our case wasn't true, because we are using Ceph, and what got reported was the size of the file system and not the size of Ceph.
So basically we needed a way to tell the instance all its metadata — hostname, IP, DNS and everything — since, if you don't have any DHCP, you can't reach the metadata server. So now what happens is that, during the boot process, the compute process creates a tiny image — a small VFAT config drive — where it puts all the metadata.
Hostname, IP, DNS: all of that goes in, and this disk is attached to the KVM process, so it appears as a second device on the file system. If you have the proper cloud-init configuration, cloud-init will just open and mount this partition, read all the metadata, and then do the rest: configure the IPs and things like this. If you want to do static routes, for example, that's the way you should do it.
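The usual knobs for forcing this behaviour look like the following sketch — `vfat` matches the small FAT drive just described, `iso9660` being the other common format:

```ini
# nova.conf — always build a config drive instead of relying on the metadata service
[DEFAULT]
force_config_drive = always
config_drive_format = vfat
```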
The problem was that we had the disk living in Ceph and the config drive that wasn't living in Ceph. So when you want to trigger a live migration, you can't, because at some point the code detects that you are actually using shared storage — for the main partition, the root device of the instance — but the config drive is not in Ceph; it's part of the local file system. So one needs live migration and the other one needs block migration, and everything was broken. Now the config drive is also stored in Ceph, so live migration is not broken anymore.
On the Cinder side, we got a really tiny patch: stripe size support for the RBD driver. Basically, when you create a new RBD volume — or image, whatever you want to call it — you can define a stripe size different from the default.
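The same striping knobs are visible directly on the rbd CLI; a sketch, with arbitrary example sizes and a placeholder pool/image name:

```bash
# Create a 10 GB image striped in 64 KB units across 16 objects
rbd create volumes/test --size 10240 --stripe-unit 65536 --stripe-count 16
```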
So this is what has been implemented. It might not look like that much, but just so you know, each of these patches was individually almost 400 lines. That's a lot, and a lot of time is spent on review, going back and forth, so it takes a lot of time to get such features merged. So actually, for me, it's a really big improvement for Juno: we definitely needed things like these to have a proper environment running.
What's in progress: since devstack Ceph is here now, we can just enable Ceph within the gate. Just so you know, every time you send a new patch set into OpenStack Gerrit, we have what we call the gate — it's just a CI that bootstraps several devstack virtual machines with different capabilities, maybe for VMware, maybe neutron-oriented — and now we can have something for Ceph.
As soon as you push a new patch set for Ceph, the new patch will be tested against the Ceph CI, so we ensure that we don't break anything — because, you know how it is, I guess: we add a new feature, but in the meantime we also create a new bug. So we just try to minimize that effect. That's in progress, and that's definitely going to be in for Juno.
We also need to fix Nova evacuate. Nova evacuate is the only way to provide a good disaster recovery feature for an OpenStack environment because, as you might know, there is no HA functionality for virtual machines within OpenStack. So if a compute node goes down, all the VMs that were running on it just die as well. So basically, if the compute node dies, here is what you can do.
You just do a nova evacuate, and this will respawn the VM on another compute node. Using Ceph as shared storage, we can really benefit from that here, because we won't lose any data — or not much; maybe some I/Os will be lost during the crash, but we will have the exact same environment on another compute node. And once again, the evacuation is really fast, because we just need to spawn a new KVM process on another compute node and then reattach the block device to that KVM process.
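From the operator's side the workflow is just the following — host and instance names are placeholders:

```bash
# compute-01 died; respawn its instance on compute-02, reusing the shared RBD disks
nova evacuate --on-shared-storage my-instance compute-02
```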
Unfortunately, some things have been postponed to Kilo. In Cinder, we still don't have the ability to migrate Cinder volumes between backends. Within Cinder, a couple of years ago, they introduced a new feature called multi-backend, so you can define a volume type per backend: you can have a NetApp backend whose type is called netapp, and then a Ceph backend.
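Multi-backend itself looks like this in cinder.conf — a sketch with placeholder backend names:

```ini
# cinder.conf — two backends, each addressable through a volume type
[DEFAULT]
enabled_backends = netapp-1,ceph-1

[netapp-1]
volume_backend_name = netapp

[ceph-1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
```

Each backend is then exposed to users with `cinder type-create ceph` followed by `cinder type-key ceph set volume_backend_name=ceph`.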
We still can't migrate volumes between those backends. Well, most of the code is already there, but, same as the other one, we missed some spec freeze and we just couldn't get it in, so we will just continue to work on it during the Kilo cycle, and it will hopefully be implemented in Kilo. For Nova: when you take a snapshot, basically we use QEMU and it's just a flat snapshot — a complete snapshot; we don't do an increment or anything like this. So basically, the snapshot goes locally.
So when you do the snapshot, it goes through libvirt snapshotting, then it gets re-imported into Glance and then goes into Ceph — and that's not really efficient. If you only have a gigabit network — a one-gig network — it might be quite slow to perform this operation.
That's why we would like to implement the snapshot at the RBD level, because RBD volumes support snapshots; if we can do this, we don't need to go from the compute node to Glance and then to Ceph again. This is a major problem for public cloud providers as well, because when you take snapshots — well, if you just want to start with Ceph, you just assume that everything will be living in Ceph.
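At the RBD level, the snapshot primitive such a change would rely on is simply the following — pool, image and snapshot names are placeholders:

```bash
# Snapshot an image in place, entirely inside the Ceph cluster
rbd snap create vms/instance-0001_disk@my-snap
# Protect it so it can later be cloned copy-on-write
rbd snap protect vms/instance-0001_disk@my-snap
```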
Now, deploying Ceph with Ansible. I don't remember exactly when it started, but anyway, six months have passed and many things have happened in this repository, so I feel pretty proud of it, and I'm really happy to see that it has been getting a lot of traction from many users. So this is good.
A little bit about architecture support and distribution support: Ansible can deploy Ceph, and it has been tested and proven on Ubuntu, CentOS 7 and Fedora 20. You can ask the Ansible playbook to deploy any stable branch.
Any stable version of Ceph: Emperor, Firefly, whatever. If you want to do some testing, you can also point it at a branch from the Git master repository — maybe some work in progress you want to try — and then you can test it. It also supports all-in-one deployments, so you can just bootstrap one node with even one disk. That's more for testing, but it's supported as well.
So what can it deploy? It actually deploys monitors, OSDs, MDSes and the RADOS Gateway, and you can also load-balance all the requests across gateways. I would say it has sane defaults in terms of tuning: I recently added many Ceph settings that have proven to be extremely useful, like the ability to disable in-memory logging, which tends to improve performance a lot, and many other Ceph settings that are quite important for running a Ceph cluster. It's Vagrant-friendly, and as for the providers that are supported, we support VirtualBox and VMware Fusion.
We also have several OSD scenarios. When you deploy your cluster, you can specify: I have one disk and I want to collocate both the journal and the OSD data on the same disk. It will just create a tiny partition at the beginning of the device, and the rest goes to the OSD data. And we have a second scenario where OSD data and journals are separated, so you can have n OSDs journaling to one device.
So let's say you have the journals for OSDs one to three on SSD one — that's what's going to happen. The first scenario is more an extension of the second one, because you can have multiple OSDs and multiple journals: maybe OSDs one, two and three go to SSD one, OSDs four, five and six go to SSD two, and so forth. So that's really flexible.
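In the playbook these scenarios are toggled through group variables. A sketch of what the OSD host variables looked like around that time — variable names evolved across ceph-ansible releases, so treat these as illustrative:

```yaml
# group_vars/osds — separated scenario: data disks with journals on an SSD
devices:
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
raw_multi_journal: true
raw_journal_devices:
  - /dev/ssd1   # journal for /dev/sdb
  - /dev/ssd1   # journal for /dev/sdc
  - /dev/ssd1   # journal for /dev/sdd
# collocated scenario instead: set journal_collocation: true and drop the journal devices
```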
We also support OSDs that run on directories. Let's say you only have one server and one disk, but you want to try Ceph as well: you don't even need to create a loopback device — well, you can, but it isn't needed. You can just reference the directory path where you want your OSD deployed, and you're done. So for servers with limited capacity, and also limited in disks, that's quite useful.
Going further, I also wanted to provide more extensions to the playbooks. So now, if you don't feel really familiar with Ansible, there is at least a tiny bootstrap script that can install Ansible for you, and it supports many distros. There is a rolling upgrade playbook with which you can just say: okay, I want to go from this version to that version, and then it upgrades everything in a rolling fashion. That's quite good. I also added a purge playbook.
It goes through all the OSDs, removes all the data, deletes all the partitions and everything. I've actually been using this functionality a lot while deploying new clusters. When you do performance testing, for example, you can quickly deploy your cluster with one of the OSD scenarios I mentioned, then quickly purge the Ceph cluster and re-create one. Everything can happen really fast, so you don't lose any time deploying and destroying a Ceph cluster; you can just focus on what you're doing: benchmarks.
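Both extensions are just playbooks you run like any other; a sketch, with playbook file names that may differ by version:

```bash
# upgrade the cluster one node at a time
ansible-playbook rolling_update.yml

# wipe the cluster: stop daemons, remove data, delete partitions
ansible-playbook purge-cluster.yml
```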
Okay, so just a little example. As I said, it's Vagrant-friendly: you can just go into the repo and do a vagrant up, and by default you will get three monitors, three OSDs and one RADOS Gateway virtual machine.
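That is, something like:

```bash
git clone https://github.com/ceph/ceph-ansible.git
cd ceph-ansible
vagrant up   # default: 3 monitors, 3 OSDs, 1 RADOS Gateway VM
```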
A little bit about the roadmap: I'd like to move everything into Ansible Galaxy. Ansible Galaxy is a collection of roles that you can use individually. So if you have a central Ansible repository for your platform, you can just grab one role and then use that role in your own playbooks, just to extend what you can do. Actually, everything is ready — everything is already Ansible Galaxy friendly; I just need the time to put the roles on the Galaxy. I'd also like to rename many variables for consistency, because we have many variables now, and what I'd like to do is have the role name as a prefix for every variable.
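Once the roles are on Galaxy, consuming a single one would look like this — the role name here is a placeholder until they are actually published:

```bash
# fetch just the monitor role and drop it into your own playbook's roles path
ansible-galaxy install ceph.ceph-mon -p roles/
```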
A
Once
again,
it's
nothing
but
I
just
need
time.
There
is
a
huge
factor
that
is
going
on
on
the
on
the
llamo,
syntax
and
apparently
is
about
to
get
merged
pretty
soon
too.
So,
not
that
much
about
the
road
map,
because
because
I
believed
the
PlayBook
already
does
many
things
and
probably
way
more
than
the
chef
and
the
puppet
can
can
do
at
the
moment.
So
that's
that's
it
hey
mfc
and
thanks
for
your
attention
and
I'll,
be
happy
to
take
questions
because
we
still
have
five
minutes
so
so
fire
up.