From YouTube: Ceph RGW Refactoring Meeting 2023-03-29
Description
Join us every Wednesday for the Ceph RGW Refactoring meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute
What is Ceph: https://ceph.io/en/discover/
B
Yeah, I know in the past when we discussed this, we were looking for use cases, for things to label and send off to Prometheus, and Matt had pointed me at this RFE where a downstream customer would like to be able to view per-RGW-user metrics in Prometheus, specifically the quota allocated to the user and their current usage: the number of objects and bytes that they're using. As I understand it, we already track this stuff, or can get this information.
B
The only issue is, I think we'd still need some kind of store for when, say, the same user's quota is updated over and over again, something along those lines. We need some kind of cache.
C
Doesn't that converge with the work you did for the labeled counters? You know, a temporary local store for information related to a labeled counter. So we're not going to change where the underlying quota information lives.
A
It sounds like if they want a graph of user quota, or user stats versus quota, in Prometheus, then we would have to have a timer of some interval, like every minute: read their stats, read their quota, and send that information to Prometheus. And like Matt said, I don't think that scales, especially if you want small granularity and lots of users.
C
But
what
we
could
do,
based
on
the
work
that
you
guys
already
did
like,
with
the
hit
cache
and
in
the
for
counter
hit
cache
you
could
you
you?
Could
you
could
Define
some
events
like
changing
quota
and
enforcing
quota
on
particular
users
and
keep
them
in
there,
for
whatever
defined
period,
is
a
minute
10
minutes
whatever,
and
you
can
send
the
things
that
we've
currently
that
we've
currently
seen
to
Prometheus.
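A minimal sketch of the kind of time-bounded cache being described, assuming a plain map keyed by a label string; the names here (LabeledCounterCache, touch, snapshot) are hypothetical illustrations, not the actual Ceph labeled-counter code:

```cpp
#include <chrono>
#include <cstdint>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch: accumulate labeled counter events (e.g. quota
// changes or quota enforcement per user) and age out entries that have
// not been updated within the agreed retention window.
class LabeledCounterCache {
  using Clock = std::chrono::steady_clock;
  struct Entry {
    uint64_t value = 0;
    Clock::time_point last_update;
  };
  std::map<std::string, Entry> entries;  // key: label set, e.g. "user=alice"
  std::chrono::seconds ttl;

 public:
  explicit LabeledCounterCache(std::chrono::seconds ttl) : ttl(ttl) {}

  // Record an event for the given label set, refreshing its timestamp.
  void touch(const std::string& labels, uint64_t delta = 1) {
    auto& e = entries[labels];
    e.value += delta;
    e.last_update = Clock::now();
  }

  // Drop entries that haven't been updated within the TTL, then return a
  // snapshot of the rest: the "things we've currently seen" for the exporter.
  std::vector<std::pair<std::string, uint64_t>> snapshot() {
    const auto now = Clock::now();
    std::vector<std::pair<std::string, uint64_t>> out;
    for (auto it = entries.begin(); it != entries.end(); ) {
      if (now - it->second.last_update > ttl) {
        it = entries.erase(it);
      } else {
        out.emplace_back(it->first, it->second.value);
        ++it;
      }
    }
    return out;
  }
};
```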
C
We've
already
we've
already
gotten
the
the
rest
of
the
group
and
the
the
consumers
of
Prometheus
to
agree
that
labeled
counters
can
have
maybe
sparse
and
transient
whether
they
actually
when
they
actually
for
the
first
time.
They
actually
see
such
a
thing.
They
may
all
they
made
they
made
their
heads
may
explode,
but
I'm,
confident
that
Ilia
and
at
least
a
few
other
people
who
don't
understand
what
I'm.
What
we're
talking
about
here.
C
It
could,
or
or
maybe
couldn't,
access
I'm,
not
quite
sure,
I
mean
that
would
be
much
more
expensive,
but
if
not
crazy,
because
we're
accumulating
them
in
the
cache.
So
maybe
maybe
we
would
accumulate
them
into
the
cache
when
something
happens.
That's
relevant
and
drop
out
of
the
cash
whenever
we
following
the
rules,
you
guys
agree
to
like
every
every
few
minutes,
but
we've
done
we
dumped
the
stuff
that
hasn't
been
updated.
So
we
have
like
a
snapshot
of
activity
that
reflects.
C
Similarly,
this
is
this
is
how
I've
come
to
see
some
of
those
things
where
people
are
asking
for
like
one
of
the
hot
when
they
say
when
they
want
to
know
all
the
way.
Well,
information
about
buckets
and
replication.
What
they
really
want
to
know
is
what's
been,
what
what's
the
what?
What
what's
the
system
doing
and
and
what
are
users
doing
and
and
we
could
Define
that
around
quota,
as
you
know,
a
couple
of
ways,
but
but
that's
all
I
needed
not
just
quoted
denials
necessarily
Maybe.
C
But
I
definitely
think
that
you
know
that
they're
keeping
the
other
trying
to
trying
to
Define
it
as
moving
and
moving
the
moving
current
information
and
moving
snapshot
of
what's
Happening,
with
with
with
with
with
a
synchrony
with
everything
with
asynchronous
semantics,
would
be.
The
right
it
would
would
be.
It
would
be,
would
be
useful
that
is
tractable,
I
think
but
like,
but
everything
that
everything
that
involves,
knowing
all
the
buckets
and
knowing
all
the
users
and
knowing
all
this
and
all
of
that.
C
Okay, that's fine. Let me just write that down. I mean, I don't know what it solves exactly, but it reduces a lot of activity: the caching, and the restrictions on the size of the cache and the age of the cache, hopefully reduce the problem to a table of current information that we have at any given moment.
C
I mean, there's been plenty of miscommunication about what Prometheus is doing, but I understood, for the community case, after two years of negotiating and talking with different people, Ilya and me and others, that it's possible to have a counter that represents...
C
You
know
whose
Prometheus
view
is
the
top
ten
Acts,
so
it
could
be
the
top
10
labeled,
the
top
of
the
top
10
values
for
the
for
in
the
for,
for
the
for
the
current
you
know
for
the
product
for
for,
for,
let's,
let's
say
Ops
per
bucket
or
Ops
per
user,
and
and
then
that
that
counter
could
be.
We
could
Define
the
semantics
of
that
counter
coming
out
of
us
and
through
the
exporter,
to
be
either
one
to
be
to
be
to
be
wanted
to
be
the
current
value.
C
That
would
mean
the
current
top
10
at
any
given
time,
and
then
it
would
not
imply
a
historical
view
of
all
the
summarized
information
that
it
ever
got.
If
it
did,
I
mean
clearly
I
think
you're,
correct,
Prometheus
would
fall
over
and
I
haven't
explored.
What
Smith
is
doing
with
it,
but
I've
been
told
that
such
a
counter
can
exist.
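A rough sketch of what selecting that "current top 10" might look like, operating on a snapshot like the one from the hypothetical cache above; top_n is an illustrative name, not an existing API:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch: given a snapshot of labeled counter values, select
// the current top-N label sets (e.g. ops per user) to export. The counter
// then represents "the current top N", not an unbounded historical matrix.
std::vector<std::pair<std::string, uint64_t>>
top_n(std::vector<std::pair<std::string, uint64_t>> snap, std::size_t n) {
  n = std::min(n, snap.size());
  std::partial_sort(snap.begin(), snap.begin() + n, snap.end(),
                    [](const auto& a, const auto& b) {
                      return a.second > b.second;  // descending by value
                    });
  snap.resize(n);  // keep only the current top N
  return snap;
}
```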
C
Or
the
top
10
user
label
combinations
so,
like
you
know
so,
yeah
I
mean
there's
a
different
content
for
a
user.
This
and
you're
usually
doing
this
and
you're
doing
that.
Maybe
but
but
I
think
so
and
I
don't
know
what
so
much
but
I
I
don't
know
anything
about
really
what's
persistent
in
Prometheus
but
to
some
extent
I
I,
I'm
gonna
I
mean
I'm
thinking
that
some
of
these
counters
are
should
not
should
be
treated
by
Prometheus
as
ephemeral,
and
hopefully
it
can
handle
that
I'm
told
that
is
a.
C
I think the labeling is terminology that comes from there. I thought of this as something else initially, as parametric, essentially. But going back to my conversation with Jason Dillaman two years ago, it seemed he was convinced that this is kosher, and gradually it seems like more people agree to it, although no one has actually done it yet, to be fair.
A
So I mean, maybe we don't do any counter stuff for quotas at the moment. Maybe we just try to send our existing op counters with a user label on them, figure out how to limit what Prometheus stores, and investigate it from that standpoint.
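Continuing the hypothetical sketch from earlier, sending existing op counters with a user label could amount to bumping the TTL-bounded cache on each request; on_request and the label format are illustrative assumptions only:

```cpp
#include <string>

// Hypothetical usage: on each request, bump an op counter labeled by user,
// reusing the LabeledCounterCache sketched earlier. Only users active
// within the window remain in the snapshot handed to Prometheus.
void on_request(LabeledCounterCache& ops_cache, const std::string& user) {
  ops_cache.touch("op=put_obj,user=" + user);
}
```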
C
Adding to what you're just saying, I think we're saying the same, or at least a compatible, thing. We should define what the period and the snapshot view of each such labeled counter should be, and how it flows through the system.
C
True, that's true, although the people who work on Prometheus have nodded and waved at this, and I think we just have to cross the Rubicon with it, because it follows from other things that are going on. The same problem effectively arises from the work that Ilya is doing with RBD, by the way: as the set of volumes grows and changes, the matrix of counters grows accordingly.
C
Without
that
obvious
bound,
and
so
if
a
Prometheus
were
accumulating
them
all
and
representing
this
Matrix
forever,
it
melts,
just
as
it
does
without
weather
counters.
We've
been
talking
about
in
RW.
A
Well,
I'm,
not
so
sure,
because
when
we
were
discussing
this
with
Ilia
lately,
he
said.
Scalability
is
not
a
problem
for
RBD,
because
it
has
a
strict
limit
of
like
100
on
the
number
of
images.
B
Seeing how Prometheus falls over, or what it can handle.
C
So
now,
I
think
from
Downstream
perspective,
I
think
we've
agreed
that
that
doesn't
our
W
stuff
there
are
Jimmy,
coming
counters,
aren't
going
into
six
one.
So
it's
more
of
an
upstream
part
that
lands
when
it's
ready
and
then
yeah.
If,
if
the
train
derails
as
it
were,
it
should
happen
very
quickly
and
it
won't
hurt
anyone.
A
And I mean, the cache wouldn't really be tracking the top N of any particular counter. It would just be...
B
Yeah, and I mean, here's the other thing: currently all the counters are stored in the CephContext's PerfCountersCollection, and they're all just dumped into a map or something like that, which, from what I understand, is thread safe. The only thing that collection doesn't have is some kind of get method.
A
I
I
think
the
collection
is
probably
fine,
I
mean
it,
it
manages
them
the
The
Collection
under
a
mutex,
and
so
if
the
cache
was
just
adding
and
removing
things
under
that
lock,
then
I
think
it
would
be
fine.
Okay,
the
cash
would
need
its
own
thread.
Safety
for.
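A minimal sketch of the pattern being agreed on here, a collection whose add, remove, and the missing "get" all take the same mutex; this is a deliberate simplification for illustration, not Ceph's actual PerfCountersCollection API:

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <mutex>
#include <string>

// Hypothetical simplification of a counters collection guarded by a single
// mutex: the cache adds, removes, and looks up entries under the
// collection's own lock instead of providing separate thread safety.
class CounterCollection {
  std::mutex lock;
  std::map<std::string, std::shared_ptr<uint64_t>> counters;

 public:
  void add(const std::string& name) {
    std::lock_guard<std::mutex> l(lock);
    counters.emplace(name, std::make_shared<uint64_t>(0));
  }

  void remove(const std::string& name) {
    std::lock_guard<std::mutex> l(lock);
    counters.erase(name);
  }

  // The "get method" noted as missing: look up a counter under the same
  // mutex that protects insertion and removal.
  std::shared_ptr<uint64_t> get(const std::string& name) {
    std::lock_guard<std::mutex> l(lock);
    auto it = counters.find(name);
    return it == counters.end() ? nullptr : it->second;
  }
};
```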
B
Okay, I can do that.
C
It
sounds
exactly
perfect
to
me:
cool.
B
So
what
happens
in
the
multi-gateway
instance
or
I?
Guess
that's
something
we'll
find
out
on
the
Prometheus
and
like
if,
if
like.
A
Yeah, my understanding is that the counters are all going to the same place in the end, but the daemon name ends up being a label.
C
Given that, yeah, it occurs to me that's true. So that implies they're not being aggregated in the exporter.
C
Maybe. Well, I wish I'd brought this up in the CLT call, because what they should do, maybe you could hint this to somebody, is to do it in two steps and give it a few weeks in between, basically: have the organization require it, which makes GitHub require it.
C
A
C
A
C
A
C
C
B
D
D
Yeah,
okay,
so
this
is
follow-up
to
a
discussion
about
how
to
disable
logging
for
the
Zone.
It
may
not
be
related
to
only
saying
policies
in
general
I
see
that
many
places
we
check
for
the
flat
log
data
to
decide
whether
to
lock
a
particular
operation
or
not
in
data
log
and
for
by
logs.
A
Yes,
I
I
do
think
that
we
should
move
to
the
supports
data
export
and
supports
data
import
things
instead
of
the
The
Zone
wide
Flags,
we
haven't
right.
We've
had
the
logic
that
just
hard
codes,
log
data
in
the
zone
group
for
a
long
time,
and
so
it's
not
really
an
option.
I
think
we
should
eventually
just
remove
it
entirely.
D
Yeah, so I think the sync handler pipes, at least from my understanding, they don't create pipes at all if they don't think it's suitable for a particular direction for one source. But in this case, in the general multi-site case, we will have bidirectional sync enabled by default, right? So to disable it for a particular zone, I think the sync module manager is the only way to know.
A
Sorry,
I
didn't
quite
follow
that
I
mean
most
most
of
the
places
where
we
are
like.
When
we're
writing
an
object,
we
decide
whether
to
write
it
to
the
data
log
or
the
bi
log.
That's
like
a
per
Bucket
Level
decision
right
and
so
I
think
it
makes
sense
to
go
through
the
per
bucket
single
quality
stuff.
D
Because
I
think
that
is
already
there
to
filter
through
the
sync
policy
pipes
if
there
are
any
configured
per
bucket,
but
of
course
that
at
least
in
this
case
since
archive
wouldn't
have
anything
policies
configured,
it
would
pass
that
check
and
then,
after
that,
I
think
the
only
way
to
check
whether
we
need
to
log
a
particular
operation
or
not,
based
on
its
own
tier
type,
I
think
we
can
use
only
this
support
data
export.
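A rough sketch of the decision being described: after the per-bucket sync policy pipes are consulted, fall back to the zone's sync module capability rather than a zone-wide log_data flag. should_log_data_change and its parameters are hypothetical stand-ins for illustration, not the actual RGW functions:

```cpp
// Hypothetical sketch: decide whether to write a change to the
// datalog/bilog. Per-bucket sync policy is checked first; the fallback is
// the zone's sync module tier capability instead of a zone-wide flag.
bool should_log_data_change(bool bucket_policy_allows_sync,
                            bool zone_supports_data_export) {
  if (!bucket_policy_allows_sync) {
    // No configured pipe would ever replicate this bucket.
    return false;
  }
  // Per the discussion: a zone whose sync module does not export data
  // (the archive case above) would skip logging here.
  return zone_supports_data_export;
}
```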
D
I can look into it. I briefly looked into it: the sync policy handler usually creates pipes even in the default multi-site cases, so it could check for the tier type and then make a decision whether to create pipes or not. But the problem was that the sync policy handler was called before the sync module service manager gets initialized, so this supports_data_export was always returning false for all the zones.
A
I
see,
okay,
do
you
think
we'd
be
able
to
change
the
initialization
order
to
fix
that
yeah.
D
Yeah. I mean, I'll update the PR; if it looks fine, okay. Otherwise we then change this log_data check to supports_data_export.
A
All right, hopefully that works, and I think that accomplishes what I was looking for: making sync policy understand the sync modules.