From YouTube: Ceph RGW Refactoring Meeting 2022-11-02
Description
Join us every Wednesday for the Ceph RGW Refactoring meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contrib...
What is Ceph: https://ceph.io/en/discover/
B
Yeah, thanks, Jason. So let me put the link in the Etherpad too, in case you don't have it. That's the PR that I initially opened a couple of days ago, and the idea for that one is just to make multi-object delete delete the individual objects concurrently. Right now I have it spinning off every single individual object within the request onto a new coroutine and trying to do them all in parallel, but as you just commented on it, we do need some kind of limit for the async, for the number of coroutines that we spawn off. So I did want to talk about that a little bit, but it looks like, based on your comments, you had a similar idea to what I was thinking: probably add a new configuration parameter to limit the number that are spun off at any given time for this particular request type. That way users can either fall back to doing them all synchronously, as it is now, by setting it to zero or one, or have up to whatever asynchronous level they want.
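A minimal sketch of the bounded-concurrency idea, in Python asyncio rather than RGW's actual coroutine machinery; `MAX_CONCURRENT_DELETES` stands in for the proposed (and not yet named) configuration parameter, and `delete_object` is a hypothetical stand-in for the per-object delete:

```python
import asyncio

# Hypothetical knob standing in for the proposed RGW config parameter;
# the real name and default were not settled in this discussion.
MAX_CONCURRENT_DELETES = 8

async def delete_object(key: str) -> str:
    # Stand-in for the per-object delete RGW would run on its own coroutine.
    await asyncio.sleep(0)  # simulate I/O
    return key

async def multi_object_delete(keys, limit=MAX_CONCURRENT_DELETES):
    # limit <= 1 falls back to the current fully synchronous behavior.
    if limit <= 1:
        return [await delete_object(k) for k in keys]

    sem = asyncio.Semaphore(limit)  # at most `limit` deletes in flight

    async def bounded(k):
        async with sem:
            return await delete_object(k)

    return await asyncio.gather(*(bounded(k) for k in keys))

deleted = asyncio.run(multi_object_delete([f"obj-{i}" for i in range(1000)]))
```

The semaphore keeps all thousand deletes queued while only `limit` of them run at once, which is the fallback-to-zero-or-one behavior described above.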
A
Yeah, I don't necessarily know that it needs its own config variable, but I guess that would be consistent with what we've done elsewhere. So whether you want to make it configurable or not, I think, is up to you.
B
Okay. It seems like it would be nice for it to be configurable, from my side, because we'll probably want to play with it when we start using it for quality-of-service purposes, which kind of leads me into the next thing.
B
So our clusters in production use spinning disks for the data pool, and we use erasure coding with eight plus three. With all the write amplification from the erasure coding, and with the PR as it is right now, I'm allowing up to a thousand at a time, since there can be up to a thousand individual objects within a multi-object delete at once.

It basically induces a huge number of IOPS on the cluster, and we're opening ourselves up to just allowing our cluster to be slaughtered by these multi-object delete requests. The underlying thing is that we need to be able to get to a few thousand deletes per second for multiple tenants at once. Basically, we're not going to be able to get there as long as the hard drives need to be involved in every single delete, and right now that looks basically necessary.
B
The other thing that I'd like to change, and would like to discuss, is the idea of having another configuration option to prevent RGW from storing any data in the head object, so that the head object holds only the metadata. That way the part of the delete operation that has to be done synchronously, and that can't be part of the GC chain, can all be on NVMes, because our DB is all on NVMes, and then we can make that scale really nicely.
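A sketch of what the proposed metadata-only head changes, assuming the common layout where up to the first chunk (`rgw_max_chunk_size`, 4 MiB by default) of an object's data lives in the head RADOS object; the `layout()` helper is purely illustrative, not RGW's actual manifest logic:

```python
# Assumed default head/tail chunking; the real value comes from
# rgw_max_chunk_size and the object's manifest.
CHUNK = 4 * 1024 * 1024

def layout(size: int, metadata_only_head: bool):
    """Return (bytes_in_head, number_of_tail_objects) for an object."""
    if metadata_only_head:
        # Head keeps only metadata (on the fast pool); all data goes to tails.
        tails = -(-size // CHUNK) if size else 0  # ceiling division
        return 0, tails
    head = min(size, CHUNK)
    tails = -(-(size - head) // CHUNK) if size > head else 0
    return head, tails

print(layout(1024, False))  # (1024, 0): small object fits entirely in the head
print(layout(1024, True))   # (0, 1): same object now needs one tail object
```

The small-object case makes the trade-off visible: with a metadata-only head, even a 1 KiB object needs a separate tail object, which is one extra rados operation per read or write.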
B
I have a change already that I'm sort of testing at the moment, and it's actually quite easy, since you guys already have some objects that don't have data in the head, like the multipart and the append-type objects. So it's a pretty simple change, or a small change at least, and it seems to work pretty well for us.
A
Yeah, another interaction is with storage classes. I don't know if you've looked into those, but any objects that are uploaded to a non-default storage class don't store any data in the head object.
A
Well, I don't think that's exactly what you want. By default, all of the objects are uploaded to the default storage class, where they would store data in the head. It's only if there's another storage class that's not the default that we put all of the data in its tail objects instead.
A
But the configuration for that stuff lives in the zone placement, and so rather than doing a ceph config variable, which might not be the same on all RGWs, I think the zone placement would be the place to control that.
B
So other than that, I guess I just wanted to make sure that I gut-checked my understanding of why the data was co-located with the head object in the first place. I think it's just because it reduces, or eliminates, one extra rados call, especially for small objects; obviously it could double the overhead if you make two requests versus one. Is that the whole idea, or is there more to it?
B
Okay, well, I think that's all I had in terms of discussion on that. I'll probably open up another PR related to that piece, and also add a commit for the QoS-type stuff on that existing PR, here today or tomorrow, and then we can discuss it more on GitHub.
C
You wanted to talk about something related to the perf counters work I was doing?
A
I don't know. I just wanted to have a sync-up; I don't think it's part of this meeting.
A
Okay, well, I'll wrap up and stop the recording, and you guys can discuss what you want. Just a final couple of reminders: the Ceph Developer Monthly is tonight in the Asia-Pacific time slot, and the virtual Cephalocon starts tomorrow; I think the first talk is tomorrow, and you can see the schedule via the link in the Etherpad. Thanks, everybody.