From YouTube: IBM @ Ceph Day London
So welcome, everyone. When I saw that there was going to be a Ceph Day here in London, I thought it might be a good idea to talk to you about what we do at IBM Research, especially since we in research are usually rather tight-lipped about what we're working on and about what IBM wants to do with Ceph. I think IBM could be another major vendor jumping on the train, but I don't want to tell you anything about the marketing side; I'm not a marketing guy.
I'm going to talk to you about what we do with Ceph at my workplace, which is the research lab in Zurich, then at SoftLayer, which is a cloud company acquired by IBM last year, and then about what we want to do with Ceph in the future, which involves contributing code back to the community. So hopefully we're also going to jump on the train and become upstream contributors, provided the lawyers agree.
So what is the Zurich research lab? It was the second research lab IBM opened, and I'm very proud of the fact that I actually work there. You'd be surprised, but eighty percent of the people working there are physicists. The IT staff is really, really small, and I don't belong to the IT guys, so if you have a broken laptop, you don't call me. I'm actually in the cloud group, so we're looking at what to do with storage and what to do with the cloud.
People do want cloud, right? They want to build their own, they want to jump on it, and we need to be able to tell them where this is going in the next five years. And the researchers who are looking at what to do with cloud obviously need a cloud to play on.
There are also legal reasons: Switzerland has a very strict data retention law, so some of the data we can't even export from the country. And second of all, with the volumes we have from these experiments, which can easily be tens of terabytes, we can't just push it up to SoftLayer; the nearest data center is in Amsterdam. So we need a local DC, local equipment, a local cloud, meaning we give people a portal where they can jump on and create a VM.
If they need ten VMs, if they need a hundred, they just go there, run their workload, do whatever experiment they want, and be happy with it. These are not the usual end users: sadly, they always have special needs and always need their own special software. So they can upload any image and run any software they want; we don't restrict them in any way. We started doing this around two years ago, went into production a year and a half ago, and we're now at the second iteration.
What did we start with? We started really small: three compute nodes with 14 terabytes of SSD storage, connected with dual 10-gig, and we were really happy with it. At that time — I don't know how many of you run OpenStack, show of hands: okay, not so many people — the main use case for us was OpenStack with Ceph. OpenStack is the focus, and we didn't want to invest in big storage. Although IBM does sell storage, even for us internally it's a cost.
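For context, this is roughly what the OpenStack-with-Ceph wiring looks like on the Cinder side; a minimal sketch with placeholder pool, user and secret values, not our actual configuration:

```
# cinder.conf -- RBD volume backend (illustrative values only)
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver   = cinder.volume.drivers.rbd.RBDDriver
rbd_pool        = volumes
rbd_ceph_conf   = /etc/ceph/ceph.conf
rbd_user        = cinder
rbd_secret_uuid = <libvirt-secret-uuid>
```

Nova and Glance can be pointed at the same cluster in a similar way, so images, ephemeral disks and volumes all end up on Ceph.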
We hacked it together to get it up and running and working with Ceph, but we did that at the cost of not being able to upgrade, and that was a major pain. People started to jump on it, and we have this graph from Munin: you can see that people ran all kinds of workloads. They started running Hadoop jobs, they started running their own God-knows-what; the VMs are black boxes, and their storage needs just started to grow.
Grow, grow, grow, so we decided to do the second iteration. The network was doing fine — we do all 10 gig, and we actually did not have any network issues in terms of contention or performance, throughput-wise. The biggest issue was the actual hardware sometimes dying, so we needed to replace switches, but we never actually had any issue caused by the software itself. One issue that you do run into, and which was already mentioned today:
if you run out of space on even one of the OSDs — and this is what Ceph does when even a single OSD fills up — instead of being intelligent and writing the data to other OSDs, it just blocks all writes. That's really, really bad for your VMs, and at that point you need to make some space available somewhere, or expand.
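As a hedged illustration — the numbers below are the upstream defaults of that era, not necessarily what we run — the thresholds that trigger this behaviour are monitor settings, and ceph health is the quickest way to see which OSD is getting close:

```
# ceph.conf (default values shown for illustration)
[global]
mon osd nearfull ratio = 0.85   # HEALTH_WARN once an OSD passes 85% used
mon osd full ratio     = 0.95   # writes are blocked once any OSD passes 95%

# spot the offending OSDs before writes stop
ceph health detail | grep -i full
ceph pg dump osds            # per-OSD kb_used / kb_avail
```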
That's how we grew from the 16 SSDs we originally started with: we kept shuffling in new SSDs all the time. And then we started looking at the next step.
How could we grow? How could we give users the ability to actually store as much data as they want? So what we did is build these systems. We have six of these per rack — this is one rack, and we have two racks with fully independent power and network. What we did here is put 12 x 2 terabyte disks in each node, backed by SSD journals — people talk about the magical one-to-four ratio that everyone seems to be both happy and unhappy with.
So we have two SSDs in the back, which hold the OS plus the slices for the journals — and we're not RAIDing the journals; why would you RAID them? That's six disks per SSD here: one-to-six in this case, one-to-four in the other case, so here it's one-to-six, and we're still happy with it. We partitioned the SSDs so that each OSD gets a slice of an SSD as its journal, and that's how we run. We have three monitors running as VMs on these hosts.
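A minimal sketch of how an OSD with its journal on an SSD partition is typically prepared — the device names here are placeholders, not our actual layout:

```
# /dev/sdc is one of the 2 TB data disks, /dev/sda3 a pre-created SSD partition
ceph-disk prepare /dev/sdc /dev/sda3
ceph-disk activate /dev/sdc1

# or, driven from the admin node with ceph-deploy
ceph-deploy osd create storagenode1:sdc:/dev/sda3
```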
We found that, basically, the monitors are not resource-hungry at all; you can run them on anything you can find in the company — a small desktop PC, whatnot. We're using VMs with SSDs, and they really do need the SSDs. Based on what I've seen, what the Ceph guys have seen and what the community has seen, for the monitors you really want to use SSDs, because otherwise, when you actually have an issue and need to do backfill,
A
So
you
need
to
do
up
to
a
lot
of
classroom
management,
your
your
monetary
alerting,
Stein,
so
we're
going
to
that.
We actually shut down the previous cluster, so we have a bunch of 40 SSDs available now, which we want to repurpose here as a second pool in Ceph. Thanks to the CRUSH map you can really fine-tune how you want to store your data and which OSDs you want to put it on.
So what we're going to do when I get back to Zurich later this week is reinstall those nodes and put them in, and users will then have the ability, when they actually want to use storage, to choose selectively: I want a lot of quota on normal spinning drives, or I want a little bit of quota on very fast SSDs — see the sketch below. Of course you're then going to hit some small-I/O problems in Ceph, but that's being worked on and will be much better in the next release.
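Roughly what the CRUSH side of that looks like, as a hedged sketch — bucket, pool and OSD names are made up for illustration: the SSD OSDs go under their own root, a rule targets that root, and a pool uses that rule.

```
# put the repurposed SSD OSDs under their own CRUSH root
ceph osd crush add-bucket ssd root
ceph osd crush add-bucket node1-ssd host
ceph osd crush move node1-ssd root=ssd
ceph osd crush set osd.42 1.0 host=node1-ssd

# a placement rule for that root, and a pool that uses it
ceph osd crush rule create-simple ssd-rule ssd host
ceph osd pool create fast-volumes 512 512 replicated
ceph osd crush rule dump                          # note the ruleset id of ssd-rule
ceph osd pool set fast-volumes crush_ruleset 1    # the id noted above (illustrative)
```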
So that's what we do, and we're quite happy with it. We will need to grow soon, because the users are eating up all the space — the 230 terabytes we have here are almost full — so we're going to buy again: we're going to get six of these boxes again, grow to 260 terabytes and then up from there. This thing scales beautifully; we haven't had any issues. Given this experiment and the experience we had, IBM decided:
why not put it on SoftLayer? So, SoftLayer — I'll come back to it in a moment. IBM is growing what it does with the cloud: we're part of the OpenStack Foundation, we're part of the Cloud Foundry Foundation, we participate in open source projects around anything cloud-related, and in scale-out storage I believe Ceph is the basis of scale-out cloud storage.
So we wanted to bring the two worlds together: bring our enterprise hosting platform, which is now SoftLayer, and this technology together. What we did is start offering Ceph on SoftLayer as a service. It's not just bare hardware; it's a hosted, managed private cloud on SoftLayer, with Ceph used for volume storage. You get a cloud from IBM which is managed by IBM, and the hardware is hosted by us.
Anything more elaborate just complicates the setup and doesn't really gain anything, so we decided you get three cloud controllers running everything in VMs, including the Ceph monitors. Your OpenStack services, for example, also run in VMs, separately, on separate physical machines, so you have high availability for your database as well. You get either one-gig or ten-gig network connectivity, depending on which data center you choose; we don't have ten-gig available in all of the SoftLayer data centers yet, so
that's going to take some time. When you onboard, you're going to have to choose your storage size. We're pretty conservative — this is our first offering, and, you know, IBM has GPFS, which is kind of competing, but we wanted to give this a go — so sizes go from small up to 96 terabytes. That's still very modest in this regard, and when you onboard you can choose what size you like.
Okay, and I'm omitting how many compute nodes you get, the memory and whatnot, because that's not really interesting from a Ceph perspective. We decided to go with two-times replication instead of the now-recommended three.
was
changed
in
firefly
to
go
with
three
times
replication
and
I
don't
know
how
many
people
actually
used
it,
but
they
have
a
reliability.
It was mentioned earlier as a Google Summer of Code project: there is a reliability calculator. How many of you have actually used the Ceph reliability calculator? Okay, two or three, yeah. So Ceph has this wonderful tool: you enter the details of your physical environment, like the failure rate of your disks and the unrecoverable read error rate of your disks, it builds a statistical model, and it tells you:
this is going to be your availability, mathematically speaking. And if you put all of our details into that tool and run it, you get more than ninety-nine point nine percent, even with two-way replication in these setups. We decided that's good enough,
because if you want to go with three-way replication, it adds a lot of cost: you need to replicate that amount of data yet again. And you could say: okay, guys, why don't you use erasure coding?
Erasure-coded pools are already out there, but we don't use erasure-coded pools because we don't trust them yet.
It's the same with the cache tier. Have you tried to run the cache tier code in the current Firefly? How many of you have actually tried to run it on your machines? Okay, not so many again.
One of the drawbacks of the current cache tier iteration is that whenever you hit even a small part of a four-megabyte chunk on your backing OSDs, it's going to promote all four megabytes into the cache. So if you have a very diverse, not so cache-friendly workload, you're actually going to slow things down and overload your cache tier.
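For anyone who wants to experiment anyway, the Firefly-era commands for putting a cache pool in front of an existing pool look roughly like this; pool names and the size cap are placeholders:

```
# ssd-cache fronts the existing cold-volumes pool in writeback mode
ceph osd tier add cold-volumes ssd-cache
ceph osd tier cache-mode ssd-cache writeback
ceph osd tier set-overlay cold-volumes ssd-cache
ceph osd pool set ssd-cache target_max_bytes 1099511627776   # ~1 TiB cap
```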
So we decided for this release — and this is going to happen soon — that we're not going to give you erasure coding and we're not going to give you cache tiering yet, but these are all options. What we do is run a Ceph test lab at my workplace, at the research lab, where we constantly try out the Ceph code in different scenarios and see what happens, and when we see a good result,
we try to transfer it to SoftLayer so people can use the same thing. A lot of good things have been said about Ceph today, but for us, and for people using SoftLayer, cost is one of the big ones. You don't need to buy the big, expensive storage arrays; it's enough to get commodity hardware — I wouldn't say desktop drives, because obviously you get enterprise SSDs at least for your journals —
but it's still a lot cheaper than buying from the big storage vendors, even our own. And keep in mind that SoftLayer is a very new acquisition. For the Ceph hardware they are using Supermicro, and Supermicro is not going to give you the super fancy, very expensive kit; that's not why you buy Supermicro.
So that's what we're doing on SoftLayer. I think it's pretty exciting that you can get this with a 30-minute SLA and everything: if your drive breaks, if our network gear breaks, whatnot, it's going to be replaced within thirty minutes. That's what we've done with it so far. Now let's have a look at the future, because I think this is the interesting part: the part where we think Ceph is currently lacking and IBM might be able to help.
One thing you want to keep an eye on: Ceph uses just one, two or three authentication keys to access your Ceph pools — your RBD images, your RADOS objects, directly at the block level — and that's not very security-friendly. If you look at it, what's going to happen if someone gets hold of that key?
Ideally, what you'd like is different: I spawn a VM, and that VM uses my own Ceph key to access my own Ceph storage. That's currently not there.
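Today the closest you get is per-service or per-tenant cephx keys with restricted capabilities; a hedged sketch of what a narrowly scoped key looks like — the client name and pool are illustrative:

```
# a key that can only touch a single pool, rather than the whole cluster
ceph auth get-or-create client.vm-alice \
    mon 'allow r' \
    osd 'allow rwx pool=alice-volumes'
```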
So this is planned as one of the next steps on the security side. OpenStack has six-month releases, and they just released the newest version, Juno, in October 2014, so the next version is going to arrive in six months. That has been the OpenStack part: the first two bullet points were what OpenStack is missing. Now let me talk about what Ceph is missing. Imagine you have multiple data centers — at SoftLayer we have a lot of data centers — and, ideally, for disaster recovery,
what you want is: okay, I write my data here, but I have a copy — well, I would say a copy, but that's very expensive — in the other data center. If you look at NAS vendors and enterprise storage vendors, they will offer you metro clustering and the like, but they often won't tell you that it comes at a cost: it's going to limit your I/O performance if you're doing synchronous replication over high-latency links.
So what we'd like to add to, or see in, Ceph is the ability to do asynchronous replication on selected operations: write the local copy in the local data center first, then do an async write of the other copies to the other data centers later. That's currently not there. What you can currently do with the CRUSH map is set your data centers up and then do a cross-datacenter snapshot, and because that's a copy-on-write operation inside Ceph,
it happens without blocking any of your local I/O, and it nicely brings the data over to the other data center. For a lot of purposes that's enough — you could write a script that snapshots your data every hour or every ten minutes or whatnot, something like the sketch below — but for a smoother experience what you'd like is to just say: okay, replicate this over there, and have it happen automatically.
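A minimal sketch of the kind of script that's meant here — image, snapshot and host names are placeholders: take a new snapshot, then ship only the delta since the previous one to the remote cluster.

```
# ship the changes between two snapshots of an RBD image to the DR site
PREV=backup-20141020
NEW=backup-20141021
rbd snap create volumes/vm-disk@$NEW
rbd export-diff --from-snap $PREV volumes/vm-disk@$NEW - \
    | ssh dr-site rbd import-diff - volumes/vm-disk
```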
The other thing that comes up here is: if you have multiple Ceph clusters, how do you manage them today? The ceph tool doesn't support multiple data centers, and there's no way to actually give Ceph the semantics of a data center. It knows the syntax — you can put it in the CRUSH map — but it doesn't have the semantics.
What you would usually want, or would like to have, is that the ceph command-line tool actually understands the semantic meaning of a data center, so you can say: okay, I have three monitors in this data center and three monitors in that data center, and manage your cross-datacenter linkage with the same command-line tool. This is something we feel is needed in Ceph.
The next Ceph Developer Summit is very close, and we'd like to actually write the blueprint and get all of this in there. So maybe after the next release we could see this in Ceph upstream. At least, I would very much like to see this upstream, because I think it's the last piece that's missing. If you look at the changelog, everything is improving: there's a ton of change going into erasure coding and different types of erasure codes,
A
There's
a
ton
of
change
going
into
SSD
performance
array
this,
but
there's
I,
haven't
seen
any
single
comment
that
deals
with
multi
DC
ryderz
or
royal
air
strike.
Okay.
So
that's
that's
the
plan
right
it
again
as
it's
not
something
we're
committed
to
at
this
point
yet,
but
we
hope
to
see
it
one
day.
Alright,
so
that's
what
we
do!
Currency.