From YouTube: Ceph Orchestrator Meeting 2022-10-18
Description
Join us weekly for the Ceph Orchestrator meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute/
What is Ceph: https://ceph.io/en/discover/
A
So essentially, the idea is to use CBT to do comparisons between different Zipper backends.
A
It's interesting, though. I'm not sure.
B
So we can see whether... Quincy's all right. Quincy, not Pacific. So yeah, I'm not entirely sure that all of that has been merged upstream yet, and I don't know how to run Motr anyway. I don't know if you even need special hardware for it.
D
Okay, okay! Yes, as far as I'm aware, I think it's all open source. There may be some proprietary bits in there, I don't know. But we actually lost a grant to them back in the very, very early Inktank days. It was like 40 million dollars, I think, through Lawrence Livermore and a couple other places, and I thought the whole purpose of it was to fund an open source, next-generation Lustre replacement.
D
So on the CBT side of this, even if the RGW side isn't ready yet, what I was kind of thinking is: for ages I've wanted to basically finish the abstraction of CBT clusters not being just Ceph. I kind of started in that direction and never really finished it, because Gluster was sort of the first thing I was going to target, and it's kind of now just on life support. But just being able to stand up DAOS would maybe be the first step, and then from there building out the kind of client endpoints in CBT to be able to do different things with it. And even if RGW isn't ready, we could start out with something like IO500, because I've wanted to get that integrated into CBT too.
B
So yeah, I mean, I guess it kind of depends on the workload you're running. Sort of basic workloads should work, as far as I know, but there are definitely things that are not yet implemented.
D
Sure, sure. Like basic gets, puts, deletes, bucket listing?
B
Those should all work. But, for example, I'm not positive they've implemented multipart yet. Okay, and you know, certainly any admin stuff you will have to do outside of RGW: creating users, changing ownership, any kind of things like that.
D
If it's not, we can fix it. But right, yeah, the goal would be to make it so that... well, yeah, that's a little tricky, I guess. But in any event, I'm sure we could figure out a way to make that all kind of reasonable, at the very least. Yep.
B
But I think we're fairly close to having good coverage for a lot of this stuff. The stuff that's mostly missing is stuff related to extended clusters: you know, zones and zone groups and multisite and lifecycle, and things unrelated to actually reading and writing and storing data.
D
Yeah, I don't remember if DAOS actually requires Optane or not. It might. We do have Optane nodes, though. There are nodes that have Optane in them; they're not the DIMMs, just the, you know, standard block devices, but maybe that's good enough. I don't know.
D
It appears that Lenovo, at the very least, has DAOS stuff on the IO500. I don't know if IBM works with them at all, or if that's just completely separate now, but I thought that was kind of interesting; they're actually reasonably high up. Let's see... this thing is Lenovo.
D
It makes it look really, really fast. I don't think I've ever seen any benchmarks of how they do once they turn replication on, which they apparently do have now, but in an unreplicated mode it appears to be really, really quick.
D
I suspect that if you run it in one of the modes where it runs really fast, it'd probably be significantly faster than Ceph backends are.
B
[unintelligible]
A
So, looking back at the agenda, it turns out that the DAOS backend was contributed by Seagate.
D
That's really interesting. I wonder what Seagate's involvement in DAOS is these days. I know Intel at one point seemed like they were really targeting their own, you know, Optane high-performance storage, but now that that's kind of on the way out, who knows.
B
I build against it, but I haven't actually run them. Okay, so I know I installed the correct RPMs and stuff to build it, but I haven't actually run it, so... sure.
D
Sure, well, maybe I'll give it a shot, just to see if I can get something stood up. Is Motr open source, or is that proprietary?
B
It is. I think they're approaching a release, would be my guess. I occasionally get tagged, either accidentally or on purpose, on their PRs for their upstream stuff, and there's definitely a lot of stuff going on.
D
No, no, I know, yeah. I wasn't even trying to imply it was a fork; I'm just trying to figure out what the design differences are.
A
Right, especially if both have some kind of Lustre heritage.
C
I had a quick question for Casey regarding the perf counters cache stuff. I was going back through the meeting we had a few weeks ago and thinking about it. Would one way, like the dumb approach, be to have a separate perf counters collection for the labeled perf counter instances? Or, if I change the data structure at all, the metrics get attached to the CephContext the same way the perf counters collection is.
A
If we want to do something more complicated with caching and LRU, then it might be a different class that kind of satisfies a similar interface for dumping to the admin socket. Mm-hmm.
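The two options being discussed could be sketched roughly as follows. This is a hypothetical illustration, not Ceph's actual C++ implementation: labeled perf counters live in their own collection, but expose the same dump interface as the classic collection, so the admin-socket path can handle both uniformly.

```python
# Hypothetical sketch (not actual Ceph code) of labeled perf counters kept
# in a separate collection that satisfies the same dump interface.

class PerfCounters:
    """A named group of counters, optionally carrying labels."""
    def __init__(self, name, labels=None):
        self.name = name
        self.labels = labels or {}   # e.g. {"user": "alice"}
        self.values = {}

    def inc(self, counter, amount=1):
        self.values[counter] = self.values.get(counter, 0) + amount

class PerfCountersCollection:
    """Both the labeled and unlabeled collections satisfy this interface,
    so the 'dump to admin socket' code does not care which it is given."""
    def __init__(self):
        self._groups = []

    def add(self, group):
        self._groups.append(group)

    def dump(self):
        out = {}
        for g in self._groups:
            key = g.name
            if g.labels:  # labeled instances get a distinguishing key
                key += "{" + ",".join(
                    f"{k}={v}" for k, v in sorted(g.labels.items())) + "}"
            out[key] = dict(g.values)
        return out

# One collection for classic counters, a separate one for labeled instances.
plain = PerfCountersCollection()
labeled = PerfCountersCollection()

ops = PerfCounters("rgw_op")
ops.inc("puts")
plain.add(ops)

per_user = PerfCounters("rgw_op", labels={"user": "alice"})
per_user.inc("puts", 3)
labeled.add(per_user)

# The admin-socket dump path walks both collections the same way.
full_dump = {**plain.dump(), **labeled.dump()}
```

The LRU/caching variant mentioned above would just be another class satisfying the same `dump()` interface.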
C
Well, that's it for me.
D
You know, maybe the way I did it with the IO500 was basically just, you know, start taking notes and try to form a procedure for how to deploy things. Do the same thing with Lustre, and then from there we can try to figure out how to actually, you know, automate it in some fashion.
D
Is
that
does
that
seem
reasonable
to
you?
I?
Don't
think
you
guys
there's
any
real
requirement
on
you
guys
to
do
much
other
than
keep
keep
you
know
making
it
work
on
your
side.
A
Yeah, great. Ben, do you know any official contacts for this stuff, or should we just pick the people that opened PRs for these?
D
That's cool. Then I'll just dig around in here and see if I can find any kind of docs on how they typically try to install this stuff. Like with Ceph, I may actually dig underneath their installers and see if there's a more low-level way to do it.
A
Yeah
so
I
mean
we,
we
have
DB
Store
working
well,
we
haven't
really
tried
any
performance
stuff
because
we
don't
exactly
expect
SQL
Lite
to
give
great
performance,
especially
with
parallelism,
but
that
might
potentially
be
an
interesting
Target
as
like
a
cluster
abstraction,
since
it's
deployed
a
little
differently,
yeah
pretty.
D
Sure
sure,
right
now
in
CBT,
we
can't
have
this
concept
of
endpoints,
where
I'm
trying
to
remove
the
direct
connection
between
benchmarks
and
different
clusters
and
kind
of
have
this,
like
almost
abstraction
layer
between
them.
D
That
says
What
that
particular
kind
of
cluster
can
support
for,
for
you
know
accessing
our
accesses
so
like
as
an
example,
this
is
the
the
real
simple
right
now
endpoint
for
rgw
S3,
so
it
might
be
kind
of
fleshing
this
out
and
figuring
out
exactly
what
the
abstraction
looks
like
as
we
move
forward,
but
right
now
this
is
kind
of
the
the
piece
that
ties
the
stuff
cluster
class
to
to
the
well.
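The endpoint idea being described might look something like the following. This is a hypothetical sketch, not actual CBT code; the class and method names are invented for illustration. The point is that a benchmark asks a cluster for endpoints of a given protocol, instead of being wired directly to one cluster implementation.

```python
# Hypothetical sketch (not actual CBT code) of an endpoint abstraction
# sitting between benchmarks and cluster implementations.

class Endpoint:
    """Base class: one access method a cluster exposes to benchmarks."""
    protocol = None  # e.g. "s3", "rbd", "posix"

    def __init__(self, cluster):
        self.cluster = cluster

    def url(self):
        raise NotImplementedError

class S3Endpoint(Endpoint):
    """A very simple S3-style endpoint, like the RGW one mentioned above."""
    protocol = "s3"

    def __init__(self, cluster, host, port=8000):
        super().__init__(cluster)
        self.host, self.port = host, port

    def url(self):
        return f"http://{self.host}:{self.port}"

class Cluster:
    """A cluster declares which endpoints it can stand up; a benchmark asks
    for a protocol and gets endpoints back without knowing whether the
    cluster underneath is Ceph, DAOS, or something else."""
    def __init__(self, name):
        self.name = name
        self.endpoints = []

    def add_endpoint(self, ep):
        self.endpoints.append(ep)

    def endpoints_for(self, protocol):
        return [e for e in self.endpoints if e.protocol == protocol]

ceph = Cluster("ceph")
ceph.add_endpoint(S3Endpoint(ceph, "node1"))

# A benchmark only asks: "give me your s3 endpoints".
urls = [e.url() for e in ceph.endpoints_for("s3")]
```

A DAOS or Motr cluster class could then plug in its own endpoint types without the benchmarks changing.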
D
The
only
one
that
I
really
have
going
right
now
is
the
the
hot
sauce
Benchmark.
But
theoretically
we
actually
have
still
might
be
able
to
run
cause
bench
and
get
put,
which
was
a
thing
that
Mark
Seeger
wrote
like
now
five
or
six
years
ago,.
B
Who did you say it was, cow patch? Okay.
D
And so, like, this is my thought on this: it's not a super time-critical thing. You know, there's lots of other stuff that'll probably interleave, so I'm hoping that we can make forward progress on it, but I don't know that it will necessarily be, like, you know, high priority to finish this fall or something.
A
Okay, yeah, that's great. I mean, this whole thing is kind of a very long-term project, and a lot of things, like how we test and benchmark other backends in the Ceph kind of ecosystem, are still being decided. But yeah, I think this would be a great help.
D
Sure
sure,
and
and
for
me
my
goal
with
this
is
is
not
just
rgw2
it's
you
know
if
we
can
actually
run
deos
or
run
motor
or
whatever
else,
and
if
they
got
like
a
block
compatibility
layer.
My
My
Hope
Is
that
we
can
actually
use
that
and
directly
compare
with
RBD
versus
one
of
these
other
things
you
know
are
we
behind?
Are
we
significantly
behind?
We
want
to
know
yep.
D
Yep
so
yeah,
and
even
like
IO
500,
that's
kind
of
the
goal
with
it,
but
every
everyone
runs
it
so
differently,
like
you
know,
different
numbers
of
clients
and
and
different
setups,
and
all
these
other
things
that
different
Hardware,
exactly
it's
you
know
like
they've,
got
the
10
node
challenge,
but
what
does
ten
knows
mean
right
like
when
you're
on
Fast
devices,
you
know:
are
these
10
nodes
full
of
opt-in
dims
or
are
they
10
minutes?
Full
of
you
know,
qlc
Flash.
D
So yeah, it'd be neat. There'll be some work for sure, but there's a lot that we could do there, so we'll see how it goes.
D
I had a slightly different question, Adam. Did you see the email I sent out this morning with the RBD stuff? Cool, cool. Would you mind... I'm thinking about writing basically that whole thing up as a blog article, and I just wanted to mention your work on asio. Would you mind if I kind of just mentioned your name and what you were doing on it?
B
Not at all. And your mail actually gives me hope that we will get that RGW performance too, once we get asio into it.
D
Yeah, you know, I didn't ever really look too closely at client performance changes after you merged that, but I don't think we've ever before gotten like 130,000 IOPS through just a single librbd instance like that. Yeah, it's looking like a pretty impressive improvement with that code.
D
Yeah, I don't know if I put you on the email, Casey. I just included Adam and Matt, but I can add you too. Basically, IBM Acadia: they were seeing really, really poor QEMU/KVM performance and they're getting nervous. So earlier this week I went through and actually wrote up instructions for, like, tying in librbd and, you know, looking at different performance aspects of it, and we can do like 123,000 read IOPS through one QEMU/KVM instance, which is not bad.
D
Using
Libra
BD
1.12,
which
I
don't
remember
what
version
that
is,
is
that
Nautilus?
Something
like
that?
The
max
you're
heading
was
about
83.
A
Nice. I think there's some more potential for deeper integration with asio, like all the way down to the messenger, but I think that would be a lot more complicated.
D
Yeah
I
saw
some
of
the
discussion
that
was
happening
on
the
the
original
PR
that
that
Jason
had
made
for
utilizing.
Seo
and
I
I
saw
some
of
that
yeah
that'd
be
really
cool.
A
Yeah, and I'm glad that RBD has made use of it. RGW still hasn't been able to convert; that's something still on the horizon.
A
Yeah, hopefully, once we get out from under this multisite resharding stuff, we'll be able to tackle that.
D
Well,
yeah,
let
me
know
how
it's
going
with,
if
you
guys
can
move
on
to
the
SEO,
because
it
certainly
looks
interesting.
D
And
yeah
I
mean
we've
got.
We've
got
enough.
Backend
OSD
throughput
at
this
point
that
you
know
on
a
cluster
of
like
the
the
size
that
we
tested
here
was
just
I,
think
five
nodes
and
30
osds
on
nvme.
It's
fast
enough
that
that
we
can
actually
yeah
make
use
of
of
of
these
mvme
drives.
At
this
point,
they're
they're
getting
worked
fairly
hard.
D
So
yeah
client-side
is
is
definitely
is
interesting
right
now
in
terms
of
what
we
can
do
to
to
really.
You
know
make
use
of
all
that
back-end
throughput,
that
we've
got.