From YouTube: Ceph RGW Refactoring Meeting 2023-04-26
Description
Join us every Wednesday for the Ceph RGW Refactoring meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute
What is Ceph: https://ceph.io/en/discover/
A
The only topic so far is a discussion about delete races in multi-site. There's a tracker linked that essentially involves races where we delete an object on the source zone, but it ends up being recreated, so objects persist after they're deleted.
B
I believe he made some progress, but I'll ask him to update later in the tracker.
A
Yehuda, do you have any thoughts on this? It's been a while since you wrote it, but...
D
Yeah, I don't quite remember all of it now that you bring it up. I would think that you'd want to remember these for the time window where you know this race can happen, which is...
D
What you have with the full sync is this: maybe the time, or the log ID, where you started the operation... maybe don't sync anything that was created after that. No, I'm not sure.
D
Okay, what I'm thinking is: if the full sync is the problem, sync everything that was created prior to the start of the full sync, and then everything following that would be synced back by the incremental sync. It's probably racy, not a good idea, but that's an option to explore.
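A minimal sketch of the idea floated here, with assumed names (entry and full_sync_should_copy are illustrative, not the actual RGW sync code): full sync would skip anything created after its own start marker, leaving those entries for incremental sync to replay.

```cpp
#include <chrono>

using time_point = std::chrono::system_clock::time_point;

struct entry {
  time_point mtime;  // creation/modification time of the listed object
};

// Decide whether full sync should copy this entry: anything newer than
// the start of full sync will also appear in the incremental logs, so
// copying it here risks racing with a later delete.
bool full_sync_should_copy(const entry& e, time_point full_sync_start) {
  return e.mtime < full_sync_start;
}
```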
D
There's more than two zones.
D
So if we... so we list the objects, we sync an object, and it's just being deleted.
D
Okay, let me take a step back so we understand it correctly. You have zone A and zone B; let's say A syncs from B. A fetches data from B: it fetches an object, and the object was just deleted on B. Like, object O was deleted from B, but we just created it on A.
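A self-contained sketch of that interleaving (the names and the two in-memory "zones" are illustrative stand-ins, not the actual RGW sync code):

```cpp
#include <iostream>
#include <map>
#include <optional>
#include <string>

std::map<std::string, std::string> zone_b = {{"O", "data"}};  // source zone
std::map<std::string, std::string> zone_a;                    // sync target

std::optional<std::string> fetch_remote(const std::string& oid) {
  auto it = zone_b.find(oid);
  if (it == zone_b.end()) return std::nullopt;
  return it->second;
}

int main() {
  // 1. full sync on zone A reads O from zone B
  auto obj = fetch_remote("O");
  // 2. a client deletes O on zone B; the delete replays on A as a no-op,
  //    since A hasn't written O yet
  zone_b.erase("O");
  zone_a.erase("O");
  // 3. the in-flight sync completes and recreates O on zone A
  if (obj) zone_a["O"] = *obj;
  std::cout << "O still exists on A: " << zone_a.count("O") << "\n";  // 1
}
```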
A
It's not on the... the traces, I think we don't store them on the index.
D
Well, it could be that it's only on the log, but at the time, I remember wanting to put it there.
A
Yeah, if it is in the index, then we could extend the list-objects API for sync to include the trace for each entry, and I think we could solve it that way.
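A hypothetical shape for such an extended sync listing entry; the struct and field names here are assumptions, not the actual RGW API:

```cpp
#include <set>
#include <string>

struct sync_list_entry {
  std::string name;                  // object name
  std::string instance;              // version id, if versioned
  std::set<std::string> zone_trace;  // zones that have already seen it
};

// The syncing zone could then skip entries whose trace already names it,
// instead of re-fetching an object that may since have been deleted.
bool should_sync(const sync_list_entry& e, const std::string& my_zone) {
  return e.zone_trace.count(my_zone) == 0;
}
```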
D
Yeah, we shouldn't sync an object that we were the origin of... or, that's not quite true. Actually, maybe you want to, but maybe that would be a problem if you want to recover a zone.
D
For this specific scenario... but again, I'm not 100% sure we keep it on the index. It might be that the protocol has it, but we don't store it.
D
On the index... because I remember missing it somewhere. When I did the second sync provider, I remember... no, I think it was missing.
A
Yeah, I agree with everything there. We're just looking for solutions that don't require us to track these deleted objects forever.
A
So when you were looking into this, you found the zone trace, and I know that's stored in the bilog entries. Do you know, did you see it stored in the bucket index entries also?
E
I didn't. I think it's only in the bilog, but I was just trying to check that now, actually. Okay.
D
I remember the reason I created the short zone ID at the time was so that it's abbreviated, like short, since we were thinking about whether it was going to be too heavy on the indexes. But maybe we ended up not sending it, because it kind of only worked when putting it in the bucket index.
A
Yeah,
the
short
ID
would
have
helped
there,
but
yeah
you
did
add
more
or
per
bucket
replication,
since
the
entries
could
come
from
different
buckets.
I.
Remember.
A
One other thing to mention is that I have an outstanding PR related to the zone trace. It adds it to an attribute on the head object.
A
So that a GET or HEAD on the object can return a replication trace header saying which zones have seen it already. So I think the bucket listing could potentially read all of the head objects in order to provide the trace from that, assuming we merge this PR. But that would make the bucket listings themselves a lot more expensive: currently they just read the index, but they would have to stat every entry to serve that.
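A sketch of what that PR's mechanism might look like; the xattr and header names here are assumptions, not necessarily what the actual PR uses:

```cpp
#include <map>
#include <string>

using string_map = std::map<std::string, std::string>;

const char* TRACE_ATTR   = "user.rgw.replication-trace";  // assumed name
const char* TRACE_HEADER = "x-rgw-replication-trace";     // assumed name

// On write/sync: append the writing zone to the trace stored as an
// xattr on the head object.
void append_zone(string_map& attrs, const std::string& zone) {
  auto& trace = attrs[TRACE_ATTR];
  if (!trace.empty()) trace += ",";
  trace += zone;
}

// On GET/HEAD: surface the stored trace as a response header.
void fill_response(const string_map& attrs, string_map& headers) {
  auto it = attrs.find(TRACE_ATTR);
  if (it != attrs.end()) headers[TRACE_HEADER] = it->second;
}
```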
D
You could continue to do it like that: list everything, but then add a constraint on the fetch, right? Or you could have a listing only for this purpose.
A
And actually, yeah, maybe the listing itself doesn't need to send the traces, but the GET request to fetch the object could. Actually, yeah, we could send a header with our own zone name, and it would return an error if that zone was already in the trace. Yeah.
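A minimal sketch of that conditional fetch; the request field and the choice of 412 as the rejection status are assumptions for illustration:

```cpp
#include <optional>
#include <set>
#include <string>

struct fetch_request {
  std::string object;
  std::optional<std::string> requesting_zone;  // assumed header value
};

constexpr int HTTP_OK = 200;
constexpr int HTTP_PRECONDITION_FAILED = 412;

// The source zone rejects the fetch if the requesting zone already
// appears in the object's stored trace, so the syncing zone never
// recreates an object it has already seen (and possibly deleted).
int handle_fetch(const fetch_request& req,
                 const std::set<std::string>& zone_trace) {
  if (req.requesting_zone && zone_trace.count(*req.requesting_zone)) {
    return HTTP_PRECONDITION_FAILED;
  }
  return HTTP_OK;  // proceed with the normal object read
}
```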
A
Yeah, but both do depend on us storing the trace in the head object's xattrs. So any fix that we have kind of wouldn't solve existing object uploads; it would only apply to new syncs.
A
All right, well, would it help if I typed up kind of a design proposal in the tracker issue? And if you're willing to work on it, then we can follow up from there.
A
All right, Soumya and Shilpa, you guys were also looking into kind of related issues. Is this making sense to you guys? Any input that you have?
C
Hi Casey, I don't have much input, but at least these issues were seen even with incremental sync, so I'm not sure if it's for the exact same reason or a similar one.
A
But I think just fixing the race, and then we can look at the tombstone cache part separately and do more testing afterwards; that would make sense.
A
Any interesting discussions from Cephalocon that anybody wants to share? I see we're missing Dan and Matt, but anybody else that was there?
B
One thing is, for the fairness issue, we're wondering when this PR can be merged, whether there are any pending discussions still there or...
A
From my view, it's good and just needs a teuthology run. Okay, anything else that you think needs to happen?
G
Yeah and no. So I did schedule it... I mean, you did schedule a teuthology run, and there were some valgrind issues. So I don't know if it's related or something else; I have to look through the results. But otherwise I don't see that there's anything more to add to it.
G
Yes, that's the plan.
B
Okay, got it. Yeah, the reason I'm asking is just to see whether there's anything we can help with, but we can talk offline on this, yeah.
A
That's great, yeah. If you have thoughts on extra kind of automated testing that we could do upstream, I think that would be really valuable. Okay.
F
Sorry, yeah, I'm not sure if you have mentioned this before...
F
Oh, is it better now? Yeah, yeah. So basically we've been trying to test this...
F
...fix for the fairness issue, and we found that, yes, it works: it distributes the shards relatively evenly across all the RGW instances. But it seems one RGW instance gets... the lock is never released. It's just like...
A
So I think that is intended in the original design. Each RGW generates a bid for each shard, right, and it's the highest-bidding RGW that takes the lock, yeah, in the original design.
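A toy sketch of that bidding scheme as described here (the random per-shard bids and all names are assumptions, not the actual implementation). It also illustrates why one gateway can hold a shard's lock indefinitely: as long as the bids don't change, the same gateway keeps winning.

```cpp
#include <cstdint>
#include <map>
#include <random>
#include <string>
#include <vector>

// Each gateway draws one random bid per shard.
std::vector<uint32_t> make_bids(size_t num_shards, uint32_t seed) {
  std::mt19937 rng(seed);
  std::vector<uint32_t> bids(num_shards);
  for (auto& b : bids) b = rng();
  return bids;
}

// The gateway with the highest bid for a shard takes (and keeps) its lock.
std::string winner_for_shard(
    const std::map<std::string, std::vector<uint32_t>>& all_bids,
    size_t shard) {
  std::string best;
  uint32_t best_bid = 0;
  for (const auto& [gw, bids] : all_bids) {
    if (best.empty() || bids[shard] > best_bid) {
      best = gw;
      best_bid = bids[shard];
    }
  }
  return best;
}
```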
A
Yes, yeah, the design kind of assumes that there would be an even load between log shards, yeah. And I think, especially in metadata sync, that's probably not always the case; for data sync I would expect it to be a lot more even.