From YouTube: Ceph Day Melbourne Roundtable Q&A
Testing and comparing the different erasure-coding algorithms, we tested Cauchy and Reed-Solomon. One thing that we found, at least for flash (this is experience with InfiniFlash; I don't know if it holds for all other flash as well), was that the N+2 layout was far outperforming the rest.

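In Ceph, both algorithms are techniques of the jerasure erasure-code plugin (reed_sol_van and cauchy_good). A minimal sketch of how such a comparison could be set up; the profile and pool names, the k=4/m=2 layout, and the PG counts are illustrative assumptions, not details from the talk, and the commands need a running cluster with an admin keyring:

```shell
# Two erasure-code profiles that differ only in technique, so the same
# k+m layout can be benchmarked with each algorithm.
ceph osd erasure-code-profile set ec-rs plugin=jerasure k=4 m=2 technique=reed_sol_van
ceph osd erasure-code-profile set ec-cauchy plugin=jerasure k=4 m=2 technique=cauchy_good

# One pool per profile; point a benchmark (e.g. rados bench) at each.
ceph osd pool create bench-rs 128 128 erasure ec-rs
ceph osd pool create bench-cauchy 128 128 erasure ec-cauchy
```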
To start with that: have there been a couple of people that have started doing geo-replicated single Ceph clusters, or WAN-scale Ceph, I guess, is the best way I'd put it?

Unfortunately, no one has been doing it over the true public internet. All of the examples I know of are like Deutsche Telekom does this: they have a single Ceph cluster that's replicated from northern Germany to southern Germany. It's a single Ceph cluster, but they control all the fiber in between.

So it's almost cheating. The MIMOS guys, who are the R&D wing of the Malaysian government, have a three-site Ceph cluster. I think the longest stretch is 400 kilometers between sites, and they don't actually own all of that fiber. I think they published some of those results and shared them on ceph-users, so maybe you can ask them there. But they seem to be getting relatively decent performance out of that cluster; I think they did some crazy magic with...

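For a sense of why distance matters here: even on dedicated fiber, the speed of light puts a floor under the round-trip time that every strongly consistent, synchronously replicated write must pay. A back-of-the-envelope sketch (the fiber propagation factor is an assumption, not a figure from the talk):

```python
# Lower bound on network round-trip time for a synchronously
# replicated write over a long fiber run.
# Assumption: light in fiber travels at roughly 2/3 of c.
C_VACUUM_KM_S = 299_792        # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3           # typical refractive-index penalty

def min_rtt_ms(distance_km: float) -> float:
    """Physics-only lower bound on round-trip time (ms) over fiber."""
    one_way_s = distance_km / (C_VACUUM_KM_S * FIBER_FACTOR)
    return one_way_s * 2 * 1000

# The MIMOS cluster's longest stretch: ~400 km between sites.
print(round(min_rtt_ms(400), 1))  # ~4 ms added to every replicated write
```

Four extra milliseconds per write acknowledgment is tolerable for many object workloads but painful for latency-sensitive block I/O, which is why workload selection comes up below.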
The hope was to start looking at some ways to introduce eventual consistency beyond your minimum strongly consistent replication level, so that you could have actual WAN-scale Ceph that would serve multiple sites well, even though, obviously, you're not going to be writing those copies in real time. But in terms of actual performance of people doing that, I would start with ceph-users; the people that are doing that have the actual numbers, I think.

They're doing that; in fact, they were doing that even with NetApp systems, I mean at MetroCluster distances. From what we heard from them, they are selecting the workloads that are not too latency-sensitive and just doing that; for the other key workloads, they're not replicating across sites.

If you do need to, I would pop into the next CDS (Ceph Developer Summit). I can almost guarantee that there will be another WAN-scale Ceph discussion, as I know the MIMOS guys were working on it: there's a dark fiber network they plug into, trying to get Ceph to operate at WAN scales, and they were taking a number of different kinds of experimental approaches.

There hasn't been a huge amount of push for it, just because most of the applications utilizing Ceph have their own security layer above it, so they don't have to worry about that as much. There's less multi-tenancy in Ceph pools and more just Ceph providing the storage for a particular application. But we are starting to get more requests for that, so I'm sure you'll see more.

Yep. If you're deploying, say, a Ceph cluster and you want to do volumes (Cinder), object, and CephFS, how does the single cluster handle that workload, or should it be split into separate ones? I'm pretty new to Ceph; I'm just wondering, since volumes, object, and file will look like different workloads, how does it handle that?

Pretty well.

So obviously you can't specifically tune for multiple different workloads; you can tune for "I'm going to get best effort for all things." But in terms of the actual performance of the cluster itself, that's one of the nice parts about the Ceph architecture: there are no bottlenecks. All of the actual data path is client to a specific OSD, or more specifically to multiple OSDs, so you're parallelizing a lot of that workload. So it's not like, you know...

...all of your object is over here and all of your file is over there, or everything is on one server; you're parallelizing pretty much everything. Even if you look at your block device: it's a thinly provisioned block device that sits across multiple OSDs. So an RBD block device, rather than sitting on one physical server, is split across multiple hosts, and of course, the more hosts you have in a Ceph cluster, the more parallelism you get.

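As an illustration of that striping: an RBD image is chunked into RADOS objects (4 MiB by default), and each object is placed independently by CRUSH, so one image's I/O fans out across many OSDs in parallel. The helper below is a sketch for intuition, not a Ceph API:

```python
import math

def rbd_object_count(image_size_bytes: int,
                     object_size_bytes: int = 4 * 2**20) -> int:
    """Number of 4 MiB RADOS objects backing a fully written RBD image."""
    return math.ceil(image_size_bytes / object_size_bytes)

# A 10 GiB volume maps to 2560 independently placed objects.
print(rbd_object_count(10 * 2**30))  # 2560
```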
Well, I guess that diagram that I showed, the physical and logical layers of one of our clusters, has the RADOS Gateway and CephFS on it. CephFS has a separate metadata pool, so you can tie that to different hardware. It's just like with any file system: if you've got the ability to separate out the metadata, then you usually put it on faster storage for IOPS, yeah.

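A sketch of that separation using modern Ceph syntax; the pool names, PG counts, and the ssd device class are illustrative assumptions, and the commands need a running cluster:

```shell
# Pin the CephFS metadata pool to SSDs via a CRUSH rule, while the
# data pool stays on the default hierarchy.
ceph osd crush rule create-replicated ssd-rule default host ssd
ceph osd pool create cephfs_data 256
ceph osd pool create cephfs_metadata 64
ceph osd pool set cephfs_metadata crush_rule ssd-rule
ceph fs new myfs cephfs_metadata cephfs_data
```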
Also, with the RADOS Gateway you've got different pools for, I guess, the management side of RADOS Gateway: the users and the bucket indexes and the metadata, all that stuff. That's the stuff that you want to be fast, so even if your backend is erasure-coded, you still go and put that stuff on replicated pools.

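A sketch of that split; the pool names follow the default-zone naming convention, while the PG counts and the k/m layout are assumptions:

```shell
# Bulk object data on erasure-coded storage; small, hot bucket-index
# and metadata pools stay replicated for low-latency access.
ceph osd erasure-code-profile set rgw-ec plugin=jerasure k=4 m=2
ceph osd pool create default.rgw.buckets.data 256 256 erasure rgw-ec
ceph osd pool create default.rgw.buckets.index 64 64 replicated
```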
Actually, by you using the word Calamari, that probably tells me that we're not doing a good enough job: Calamari is now the API, and Romana is the actual graphical GUI. So they've split those two things, and they've been focusing a lot on Calamari, the API, because there are a number of different monitoring and management tools out there that are doing what Calamari used to do part of. So they really wanted to enable the vast community of people that was developing these tools...

A
Rather
than
trying
to
do
it
all
ourselves
with
the
one
or
two
one
and
a
half
guys
that
that
we
had
working
on
calamari
romana
is
still
being
developed.
It's
still
in
the
Red,
Hat's,
f,
storage
or
whatever
they
call
it
out.
That
has
been
growing
by
leaps
and
bounds.
The
hopes
is
more.
The
next
versions.
We
will
be
able
to
do
things
like
you
know,
install
new,
OS,
DS
or
spin
up
a
set
cluster
using
nothing
but
the
GUI.
VSM, Inkscope: there's a number of them out there, all open source as far as I know, that have started to use the Calamari API. So more and more people are saying, "I already have a management dashboard; I just want to plug this thing into it." Now, with this architecture, you can plug it into whatever your existing dashboard tool is, or, if you want something off the shelf, you can use Romana or VSM or Inkscope or any of the ones that are available.

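For a sense of what "plugging into the Calamari API" looks like, a hedged sketch: the host, credentials, and even the exact endpoint path are assumptions taken from the Calamari REST API docs and may differ between versions.

```shell
# Poll Calamari's REST API (v2) for cluster state, the way an
# external dashboard would.
curl -s -u admin:admin http://calamari-host/api/v2/cluster
```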
Romana is open source as well; it's all upstream. It's got its own thing, and you can search for Calamari: calamari.readthedocs has all the information on everything. But yeah, it's all open source. That was something Sage and I did two weeks after the acquisition, which was to take Calamari and throw it out the door, yeah.

Mean
in
terms
of
strictly
monitoring
I
think
romana
is
probably
still
the
best
they've
done
a
lot
of
good
improvements
on
large
closure.
First,
you
know
visualization
type
stuff
in
terms
of
management.
I
think
vsm
is
probably
out
in
front
of
the
fact.
That's
the
Intel
one
you
can
actually
deploy
a
cluster.
You
can
add
and
remove
au
s
DS.
You
can
do
actual
physical
maintenance
on
your
SEF
cluster
through
the
GUI,
so
there
I
think
they
focused
on
that.
Whereas
romana
started
off
it's
more
just
up
here,
monitoring
tool,
there's.
Yeah, that's one of my favorite parts. In the early days of Ceph, I would go out there and tell people about all the coolness of Ceph, and invariably I would get people who came up to me afterwards, hardcore old-school sysadmin and storage-admin people, and they'd say, "All right, how many terabytes per storage head do I need to think about?" You've got it backwards: one or two Ceph admins can handle hundreds of petabytes' worth of Ceph storage. They just never believed it. Any other questions?