From YouTube: RGW Refactoring Meeting 2023-08-16
Description
Join us every Wednesday for the Ceph RGW Refactoring meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute
What is Ceph: https://ceph.io/en/discover/
B
Hey, thank you. Casey, yesterday I opened up a PR that includes the first draft of the perf counters cache, and it instruments parts of the RGW so that the frontend operation counters have labels on them.
B
I know this is also going to integrate with the work that you and Ali Masarwa are doing, and I know that the folks at Corey's company (I forget what it's called) also wanted to be in the discussions about getting metrics to Prometheus as well.
A
So I did scan it yesterday, and one point of initial feedback is that it's creating several different instances of the perf counters for different ops, like put, get, delete, etc. I have a feeling that we're going to want all of those in the same instance, since we're going to be doing the caching by bucket name or user name, so you wouldn't want some ops, like puts, to be cached or evicted separately from get ops on the same thing.
B
Yeah, that does make sense. The reason I made it that way was just that I didn't know; I wanted to make sure there was flexibility, so that when we add multi-site counters in the future, or if there are other non-frontend counters, there would be an ability to have those as a separate set of counters. I absolutely can put all of the frontend counters in the same instance; that's not an issue. One thing I would say about that:
B
Either combining all of those or sending them individually will send the same thing to Prometheus. I guess one of the reasons I also did it this way is that if a user or a bucket, or a set of them, was particularly write-heavy or read-heavy, we wouldn't have a bunch of zeroed-out counters that get sent out every time, and they're not stored that way either.
B
Some very large number of buckets and users, so that the cross product would be decently high, and we're going to see how that goes with the exporter and how it goes in Prometheus. I've gotten the whole solution to work end to end locally, but for that I had single digits to dozens of users and buckets, nothing too big.
A
Okay, that sounds good. How does the cache eviction work currently?
B
For the cache eviction, there's a config variable you set for the size, and when the size gets larger than that, the counter is evicted from the perf counters collection and from the cache as well. So if, say, user one or bucket one gets evicted, and then it comes back an hour later and gets put in the cache, then its values will be zeroed out.
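As a rough illustration of the eviction behavior described here (the real implementation is C++ inside RGW; the class, method names, and config handling below are hypothetical), a minimal Python sketch might look like:

```python
from collections import OrderedDict

class LabeledCounterCache:
    """Illustrative sketch, not the actual RGW code: counters are keyed by a
    label such as bucket or user name, a config value caps the cache size,
    and an entry that is evicted and later re-inserted starts from zero."""

    def __init__(self, max_size):
        self.max_size = max_size       # analogous to the size config variable
        self.entries = OrderedDict()   # label -> {op: count}, in LRU order

    def inc(self, label, op, value=1):
        if label in self.entries:
            self.entries.move_to_end(label)       # mark most recently used
        else:
            # New (or re-inserted) labels start with zeroed counters.
            self.entries[label] = {}
            if len(self.entries) > self.max_size:
                self.entries.popitem(last=False)  # evict least recently used
        counters = self.entries[label]
        counters[op] = counters.get(op, 0) + value

cache = LabeledCounterCache(max_size=2)
cache.inc("bucket1", "put")
cache.inc("bucket2", "get")
cache.inc("bucket3", "get")   # cache is full, so bucket1 is evicted
cache.inc("bucket1", "put")   # bucket1 comes back zeroed, then counts 1
```

Note that each label holds one dict of all its op counters, matching the earlier point that puts and gets for the same bucket should be cached and evicted together rather than as separate instances.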
B
Yeah, I was hoping you or Adam could take a look. I wanted to make sure that the LRU was thread-safe.
B
If you guys have any comments there, or any ways that I can test that, or any other feedback, that would be great as well.
B
No, I'm hoping to get some of the testing started this week and into next week. One more thing I might add, from when I talked to Mark about this, is about the way the architecture is supposed to be.
B
It's that there's one exporter per RGW, and so I guess the single exporter is always getting the counters from the RGW and then sending them off to Prometheus. I don't know what the upsides or downsides are to that, but I guess it's probably simpler to just have a one-to-one relationship instead of one exporter doing a bunch of different requests.
A
Well, what I remember about ceph-exporter is that it's just using admin sockets locally, so I think you would deploy one exporter per node, whether that node is running one RGW or multiple.
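To illustrate the exporter's role in this architecture, here is a sketch of turning a labeled counter dump into Prometheus text exposition format. The JSON shape, metric names, and label keys below are assumptions for illustration, not the actual output of the RGW admin socket commands:

```python
import json

def to_prometheus(dump_json):
    """Convert a labeled-counter dump (hypothetical shape: metric name ->
    list of {labels, value} entries) into Prometheus exposition lines."""
    lines = []
    for metric, series in json.loads(dump_json).items():
        for entry in series:
            labels = ",".join(
                f'{k}="{v}"' for k, v in sorted(entry["labels"].items()))
            lines.append(f'{metric}{{{labels}}} {entry["value"]}')
    return "\n".join(lines)

# Made-up sample dump for a single bucket/user combination.
sample = json.dumps({
    "rgw_op_put": [{"labels": {"bucket": "bucket1", "user": "user1"},
                    "value": 3}],
    "rgw_op_get": [{"labels": {"bucket": "bucket1", "user": "user1"},
                    "value": 7}],
})
print(to_prometheus(sample))
```

This also shows where the cardinality concern raised later comes from: every distinct bucket/user label combination becomes its own time series in Prometheus.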
B
That makes sense. I'll look into the exporter more, because I'm not sure that I even have the ability to pass multiple admin sockets to it.
B
Actually, yeah, that sounds right. You're right; I have the command open that I was using to run it, and that's correct. So then it would be one per node, okay. Well, that's also something I think getting set up for the testing will be beneficial to learn about, to see how it can handle lots and lots of perf dumps, lots and lots of data being dumped.
A
Yeah, I tend to think the exporter itself will be fine, but it's more about how much we can actually store in Prometheus if we're feeding it tons of different combinations of labels.
A
Thanks, Ali. Next on the agenda is feedback from Tobias on his work to handle CORS OPTIONS requests.
C
Hello. Yeah, I did add a PR to s3-tests to reproduce the issue, and then I have a PR open for RGW that makes those tests pass.
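For context on what a CORS preflight involves, here is a minimal sketch. The header names come from the CORS specification, but the origins, methods, and the helper function are hypothetical, not taken from RGW or the PRs discussed:

```python
# A browser sends an OPTIONS preflight before a cross-origin PUT.
preflight_request = {
    "method": "OPTIONS",
    "headers": {
        "Origin": "https://app.example.com",
        "Access-Control-Request-Method": "PUT",
        "Access-Control-Request-Headers": "content-type",
    },
}

def preflight_response(req, allowed_origins, allowed_methods):
    """Return the CORS response headers a server might send back,
    or None if the preflight should be rejected."""
    origin = req["headers"]["Origin"]
    method = req["headers"]["Access-Control-Request-Method"]
    if origin not in allowed_origins or method not in allowed_methods:
        return None
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": ", ".join(sorted(allowed_methods)),
        "Access-Control-Allow-Headers":
            req["headers"]["Access-Control-Request-Headers"],
    }

resp = preflight_response(preflight_request,
                          allowed_origins={"https://app.example.com"},
                          allowed_methods={"GET", "PUT"})
```

The point at issue in the discussion is where in the request pipeline this kind of check should run, since a preflight OPTIONS request arrives without the usual authenticated payload.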
C
I'm actually unsure if the issue itself would even show up using V2 signatures; I'm not sure. Is there a plan to deprecate V2 either way?
C
I guess I could investigate adding tests that actually try to reproduce it for V2 as well, to see if there's an issue there, because I'm actually not sure.
C
The other alternative I have is, if you find a good place for it, to add a helper function somewhere just to clean it up a bit. I guess we could implement it as a complete auth strategy in the authentication layer, but that seems rather overkill for a couple of lines of code. It would be cleaner, though, you know.
B
Sorry, I think we wanted the ability to very easily toggle between.