From YouTube: RGW Refactoring Meeting 2023-08-09
Description
Join us every Wednesday for the Ceph RGW Refactoring meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute
What is Ceph: https://ceph.io/en/discover/
A
All right, so the Reef release just went out today. Thanks to everybody for their hard work on that. We're going to start doing some blog posts to announce new stuff, so if you worked on anything new in the Reef release and want to share a blog post about it with the community, let me know. Otherwise, first on the agenda is from Yuval, about notifications. We've got a gist link here.
B
Yeah, I mainly wanted to discuss the next steps and the plan, and I can see that we have other people here that might join the effort. The details are in the link, but just as a quick overview:
B
We want to make persistent notifications better in several senses. First, we want to make the observability better.
B
This is something that Ali did a while back, so now in radosgw-admin you can run a command and ask for the status, or the stats, of a specific persistent queue that is used for notifications. It shows how many entries are in the queue and how much space they occupy. The main thing important to note is that the size we're showing there is not the size of the object. So if you use the stat command to see the size of the object, you would see one number, but the size of the queue is different. The queue is implemented as a ring buffer, and by that I mean the allocated size is the whole object, but the actual size is the difference between the head and the tail of the ring buffer. So it's a different number. If anyone saw those numbers and found them confusing, this is why: what radosgw-admin reports is the size of the queue.
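The distinction between the object's allocated size and the queue's logical size can be sketched like this (a minimal illustration with hypothetical field names, not the actual cls_queue implementation):

```python
def queue_used_bytes(head: int, tail: int, capacity: int) -> int:
    """Logical size of a ring buffer backed by a fixed-size object.

    `capacity` is the allocated size of the backing object (what a
    plain stat would report); `head` is where the next entry is
    written and `tail` is where the next entry is consumed. The
    queue's logical size is the head-tail distance, wrapped around
    the ring, and is usually much smaller than `capacity`.
    """
    return (head - tail) % capacity

# The object always stats at `capacity` bytes (here 1024),
# while the queue may only hold 100 logical bytes:
assert queue_used_bytes(head=300, tail=200, capacity=1024) == 100
# ...and the computation wraps once the head has gone around the ring:
assert queue_used_bytes(head=50, tail=1000, capacity=1024) == 74
```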
B
The next step in this observability task is to use labeled perf counters on top of the radosgw-admin command, because they're useful for other things. For example, in a deployment where it's more difficult to jump on the box and call radosgw-admin, you still want to collect stats, maybe to look at some trends or a time series of the stats, and then using the perf counters would be better. They can go into Prometheus and be presented nicely in Grafana or other places.
B
So this is the next step there. This is one avenue of work that we want for persistent notifications: to make them more debuggable, or, you know, to make it easier to understand what's going on with the different topics. The other avenue is the retries, and replacing the queue with the cls FIFO. With the retries, Ali also did the work, or at least the first step.
B
So now there are two numbers, or rather three numbers. One is that each notification is going to have a TTL: if a notification stays in the queue for too long, then after whatever configured amount of time it is considered obsolete, or not needed anymore, and we're not going to retry it indefinitely. The other one is the number of retries. Of those two numbers, whichever is reached first wins.
B
It will fail sending the notification, and we want to retry it, but if we see that we already tried it, say, 20 times, or the notification...
B
...has expired, then we're not retrying it; we're just deleting it. That would solve the problem of infinite retries, and it would also keep the queue size smaller, because we're not going to keep notifications sitting in the queue for a week or a month. And there's another configuration number there, and this is the spacing between the retries.
B
This is to guarantee that even if you set a small number of retries, say 10 retries, we space them out, so that we give the Kafka server, or whatever system that was down, a chance to come up again, and the retries would be more effective.
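The interplay of the three configurables just described (TTL, maximum retries, and retry spacing) can be sketched as a small decision function; the names and defaults here are hypothetical, not the actual RGW option names:

```python
def retry_decision(age_s, retries, since_last_try_s,
                   ttl_s=86400, max_retries=20, spacing_s=60):
    """Decide what to do with a failed persistent notification.

    Whichever limit is reached first wins: an entry older than its
    TTL, or one that has used up its retry budget, is deleted from
    the queue. Otherwise, retries are spaced at least `spacing_s`
    apart so the down endpoint (e.g. a Kafka broker) has a chance
    to come back up before the next attempt.
    """
    if age_s > ttl_s or retries >= max_retries:
        return "drop"    # obsolete: delete instead of retrying forever
    if since_last_try_s < spacing_s:
        return "wait"    # too soon: keep it, but don't retry yet
    return "retry"

assert retry_decision(age_s=90000, retries=3, since_last_try_s=120) == "drop"
assert retry_decision(age_s=100, retries=20, since_last_try_s=120) == "drop"
assert retry_decision(age_s=100, retries=3, since_last_try_s=10) == "wait"
assert retry_decision(age_s=100, retries=3, since_last_try_s=120) == "retry"
```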
B
So all of this is already done, and the next step there is a couple more fixes, or enhancements. One is to make this configuration per topic. The idea is that different topics may point to different systems, and those have different characteristics, so we might want different parameters for the different systems. And another enhancement that we want is for the case of migration.
B
So the number of retries is not persistent; it is saved in memory, which means that if the object gateway restarts, we restart the retry count from zero, which is not a big deal. The number that guarantees we don't overflow, or, you know, keep things indefinitely in the queue, is the time to live of the notification, and this number is persisted inside the queue.
B
We did not implement migration for this number, which means that queues created in a previous release, when upgraded to a release that does support the TTL, would not apply the TTL to those entries, because they won't have the creation timestamp. So we will not know when they were created.
B
So we want to fix that on the side that pushes into the queue, by adding this timestamp: when a delivery fails, instead of just saying, okay, we're going to retry again, we delete the entry and put the same entry back with a timestamp. So regardless of how much time the entry was already in the queue, eventually everything is going to be cleared up once we set this TTL. Those are the two next steps in the retry feature.
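The re-stamping idea can be pictured like this (a sketch with a hypothetical dict-based entry layout; the real queue is a RADOS object, not a Python list):

```python
import time

def requeue_failed(queue, entry, now=None):
    """On delivery failure, delete the entry and push it back,
    stamping a creation time if it doesn't have one. This makes
    pre-upgrade entries (written before the TTL feature existed)
    subject to the TTL from their first failed retry onward."""
    now = time.time() if now is None else now
    queue.remove(entry)
    stamped = dict(entry)
    stamped.setdefault("created", now)   # only stamp if missing
    queue.append(stamped)
    return stamped

q = [{"payload": "old-entry"}]           # written by a pre-TTL release
e = requeue_failed(q, q[0], now=1000.0)
assert e["created"] == 1000.0            # now eligible for TTL expiry
```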
B
So, first of all, any questions? I mean, those are the most straightforward things; the move to the cls FIFO is more challenging. So if you have any questions, or something wasn't clear about those things, please ask them now, before we move to the FIFO discussion.
A
B
Yeah, I mean, the FIFO is more complex than the queue, so let's have that discussion later on. Everything that I've got so far is the things we've done and the things we still need to do, and they're pretty straightforward, unless you have some concerns or questions about them. So, good, we're going to move to the question of the FIFO. The first question is: why move to the FIFO at all? Well, the first...
B
The most important reason is the size of the queue. So currently I think the...
B
C
B
Yeah, you're right, yeah. And this is okay if the Kafka server is down for, let's say, an hour; it depends on the rate, of course, but something like that. But there could be cases where the server is down for a couple of hours, maybe up to a day, and it could be that we want notifications to survive those longer downtimes, or maybe even shorter downtimes...
B
...in a system that has a much higher rate of updates. So again, for that we need to be able to have a much longer queue. Even with the limiting numbers we have, the retries and the time to live on everything, in a system with a very high rate...
B
Even if the time to live is, let's say, 24 hours, then we want to survive up to 24 hours, which means we would need a much, much bigger queue, and for that we need the cls FIFO. Now, there are a couple of challenges with the cls FIFO that I have thought about, but, you know, there could be others.
B
The first thing, okay, is the question of migration. I guess we can probably fix the migration issue by keeping two queues at the same time, an old queue and a new queue, and serving both, assuming that the old queue, with the time-to-live and retry limitations that we have, would eventually drain away. So I think migration could be fixed like that.
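One way to picture this dual-queue migration idea (a hypothetical consumer loop, not the actual RGW code): during the transition, writes go only to the new queue while the consumer serves both, so the old queue empties under its TTL and retry limits and can then be deleted.

```python
def consume(old_queue, new_queue):
    """Serve both queues during migration: drain the old cls_queue
    first; once it is empty, only the new cls_fifo remains and the
    old queue object can be removed."""
    while old_queue:
        yield ("old", old_queue.pop(0))
    while new_queue:
        yield ("new", new_queue.pop(0))

old, new = ["a", "b"], ["c"]
assert list(consume(old, new)) == [("old", "a"), ("old", "b"), ("new", "c")]
assert not old    # old queue drained; safe to delete
```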
B
The second problem is observability with the cls FIFO. Currently, the numbers that we give when we ask how many entries we have in the queue, or the size of the queue, are accurate. I mean, they're a snapshot, but they're accurate for the time that we give them. I don't think we can have something like that with the FIFO, because it doesn't have any central point...
B
...that knows exactly how many entries there are. I mean, it has a head, but the whole point of the cls FIFO is that you don't want to go and update the head every time you put something in the tail of the queue; you don't want to create that contention. So this is why I think it would be more complex to give the actual number of entries in the queue.
B
One thing we may want to do is have each node in the queue maintain its own number, and if you ask for the entry count of the queue, that would be a more complex and less accurate process: you would have to traverse all the nodes, fetch the numbers from all of them, and accumulate them, something like that. So, Adam, if you have any comment on that; but that could be one approach.
B
So
that's
that's
with
the
observability
the
overall
size
in
byte
I.
Don't
think
that's
a
problem
because
you
just
take
the
size
of
each
node
and
multiplied
by
the
number
of
nodes
except
the
last
one
where
you
actually
have
like
the
the
markers
at
the
beginning
and
end
and
calculate
the
size,
so
I
think
the
size
is
easier.
Number
of
entries
could
be
more
challenging
there.
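The two estimates just described, an accumulated entry count and a node-based size, can be sketched like this (a hypothetical per-node layout; the real cls_fifo stores its entries in RADOS "part" objects):

```python
def fifo_entry_count(nodes):
    """Approximate entry count: each node keeps its own counter, so
    the total requires traversing every node and accumulating - a
    snapshot that can already be stale while writers keep appending."""
    return sum(n["entries"] for n in nodes)

def fifo_size_bytes(nodes, node_size):
    """Approximate size in bytes: every full node contributes
    node_size; only the last node's head/tail markers need to be
    inspected to measure its partially filled extent."""
    if not nodes:
        return 0
    last = nodes[-1]
    return node_size * (len(nodes) - 1) + (last["head"] - last["tail"])

nodes = [{"entries": 10, "head": 1024, "tail": 0},
         {"entries": 4, "head": 300, "tail": 100}]
assert fifo_entry_count(nodes) == 14
assert fifo_size_bytes(nodes, node_size=1024) == 1024 + 200
```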
B
So that's the second kind of problem with the FIFO. There's a question here of whether we want to have some kind of upper bound on the FIFO. Currently, I don't think we have that implemented, so the FIFO can grow indefinitely. But maybe we do want an upper bound on the FIFO.
B
If there's no requirement for the upper bound to be accurate, I think that could be easy: we would just limit the maximum number of nodes in the FIFO, and we can overspill by one node, and that's it. That would give some kind of an upper bound, and if we know that we have already overspilled, then we reject the push request to the FIFO. So I think that could be...
B
...possible to implement. And again, the whole concept: no real accuracy is required for an upper bound; it's just to have some kind of limit. So I don't think it's a big deal if we're not accurate there.
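A sketch of this inexact upper bound (hypothetical class and names): the limit is only consulted when a new node would be allocated, so the FIFO can overspill by at most one node before pushes start being rejected.

```python
class BoundedFifo:
    """Node-based FIFO with a cheap, inexact upper bound: once the
    overspill node (one beyond max_nodes) is full, pushes are
    rejected. The bound is not byte-accurate, but it caps growth."""

    def __init__(self, max_nodes, node_capacity):
        self.max_nodes = max_nodes
        self.node_capacity = node_capacity
        self.nodes = [[]]                # start with one empty node

    def push(self, entry):
        if len(self.nodes[-1]) == self.node_capacity:
            if len(self.nodes) > self.max_nodes:
                # already in the overspill node and it is full
                raise OverflowError("queue full: rejecting push")
            self.nodes.append([])        # one overspill node allowed
        self.nodes[-1].append(entry)

f = BoundedFifo(max_nodes=2, node_capacity=2)
for i in range(6):                       # fills 2 nodes + 1 overspill
    f.push(i)
assert len(f.nodes) == 3                 # overspilled by exactly one
```
Whether a push beyond the limit raises an error to the caller or translates into a rejected client request (as discussed below for PutObject) is a policy choice layered on top.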
C
Sorry, I actually had a doubt in this case. In our case we are anyway doing TTL and a number of retries; in that case, doesn't the upper bound just become an obsolete option? What are we trying to achieve with the upper bound? The whole idea of the upper bound was controlling the size, but then that size is eventually getting controlled by those two parameters, right?
B
But let's say there could be a case where you want to set up a system and you're saying: well, I don't want to use more than, I don't know, two gigabytes for this queue, because of whatever reasons that I have. And, you know, you don't really know the actual rate of the system all the time. So the math of computing your upper bound as a result of your retries and TTL could be correct, but it could also be incorrect, depending on the actual rates of the system.
B
So I feel that this is another piece of configuration that might be useful to make sure that things are not getting out of hand, but, as I said, it doesn't have to be accurate. It could be just...
B
...just some kind of basic mechanism. Sorry, Casey.
A
Yeah, well, correct me if I'm wrong: the PR that added the limits on retries, don't those default to unlimited?
A
And, in my opinion, I think it's important. When we were designing these features, we wanted to have a mode where notifications were reliable and we wouldn't just forget some of them, so I think it is important to preserve that and allow us to keep storing them. And in that case we also want a bound, because we can't just keep consuming forever, right?
B
Right, you're right, yeah. That's a good one, thanks, Casey. So there's a difference between the time to live and the upper bound. The time to live keeps the queue at a feasible size, but whatever is lost is lost. An upper bound keeps the queue at a feasible size, but things are not lost: the original request will be rejected. So your PutObject, or whatever caused the notification, would be rejected back to the client if it tries to create something bigger than the queue size.
D
So is there any plan to introduce a dead letter queue, similar to AWS? Let's say, as a system admin, I set up the Ceph cluster. My clients set up the topics, like the Kafka queues; those may or may not be functional, and that's why we have TTL and retries. But as a system admin, I set up a more highly available Kafka queue myself, and if I'm going to drop an event notification, I can put it in that queue; let's call it a dead letter queue, and send them there.
D
B
D
So let's say a client is expecting an event notification, but for some reason the configured Kafka endpoint is not accessible. Clients generally don't have highly available Kafka setups, but as a system admin, I have a separate queue for the whole Ceph cluster that the system admin maintains, and the clients can find the timed-out or retried event notifications on that separate queue, so that they can see what they missed while their Kafka endpoint was not reachable. Okay.
A
D
B
But in our case, all the queues are Ceph-backed, so they have exactly the same persistency and reliability measures. You can do that; I mean, currently you have full freedom to create whatever topics and queues and notifications you want. So the topic could be created by the application user, or the application developer, as they define the notification on the bucket. They can create their own topic, and if you, as an admin, want to create another topic and point notifications there as well...
B
D
E
B
D
B
Okay, there is one thing that we could consider there that I think could be good, and this is putting the admin, or dead letter, queue on a different pool. Currently the pool is very important, because every time you put an object, you do something with the system, and if you have a notification which is persistent, then you put that into the queue as well.
B
So it's like one-to-one actions, which means it is extremely important that the pool in which the queue lives is as fast as, or faster than, your objects, right? Otherwise you're slowing down your object writes, even though the notifications themselves are pretty small, like a couple hundred bytes; but the actual work, or, you know, the overhead there, could be high.
B
So, because you have limited capacity on those fast devices, it could make sense to put the dead letter queue on a slower device that has more capacity, and you would write into this queue only in the case of failure, so you're not impacting the ongoing performance of the system, even if the media or the device is slower.
D
Yeah, that's only for the dropped messages, or messages about to be dropped, as safekeeping or future reference for the clients, but yeah, that's a separate feature on its own, yeah.
B
A
B
I think we have the same problem even now; I mean, it's not only in the FIFO. Even currently, you can't delete a specific entry from the queue, right? You can only delete up to something; again, that's the current limitation of the ring buffer, but...
B
Well, in theory you shouldn't, but even currently... okay, so the current implementation, I mean before we had TTL and everything: I'm sending the notifications to the server in batches, and whenever I get an error, I'm considering everything up to this error to have failed, even if it didn't. So the whole assumption is that duplicates are okay.
B
Duplicate notifications can always happen, and because of other reasons as well: since you don't have end-to-end acks, it could be that the server acked something but you didn't get the ack back, and you think that something failed, so you retry it.
B
The whole system is based on the fact that duplicates can happen, which means that even in the current implementation, we always truncate the queue, assuming that everything up to that point failed; and if some of the entries in the middle were actually successful, we will just retry them.
A
B
You wouldn't have... okay, so that should be okay. I mean, apparently here we have just one global TTL, but even if we have... well, I wonder how that would work if we change it. Yeah, I guess that should be fine: the TTL is always going to be global for the queue, right? So there shouldn't be an issue with that; even if it's changed, it's going to change globally for everything in the queue, so I don't think that's a problem.
A
All
right,
great
I,
also
have
some
comments
on
migrations.
B
A
I think my suggestion would just be to have the queue type be part of the topic, so existing topics would stay on the old queues, and new topics would default to the new one. And if you wanted to switch, then you would have to find a way to delete it and recreate it, preferably, I guess, with some wait to flush the stuff that was in the old queue.
B
The topic is the queue; I mean, in the system you don't create queues, you create topics. And if I follow what you suggested, then I would need to create a new topic for the new queues, which is fine.
B
It's not a big deal, because usually I have, I don't know, not too many topics, so I'll have to go and create the new topics, and they'll be different, because the type will be concatenated into their name, so they'll be different topics. But the problem is that I now need to go to a thousand buckets and say: oh, by the way, your notifications don't go to this topic, they go to the other topic.
B
Or would that be such a messy solution? I mean, I guess the only thing I'll have to do in the code is that, in the new version, the code that writes into the queue would write to the FIFO, and the code that empties the queue would empty the cls queue and then the FIFO.
B
A
It just got really complicated, and we introduced several bugs there.
D
B
No, I agree, I agree. My problem is not recreating the topics. My problem, and I think Kunal mentioned this sometime before, is that there are many buckets in the system, and going through all of them and doing something is a lot of work.
A
B
Okay, okay. I mean, again, it's really an operational question of how difficult that is to do. We could even provide a script or something.
C
Would it make sense to have a radosgw-admin command to just transfer, or push, all of those old notifications to the new queue, by any chance?
B
C
B
E
A
C
B
I mean, we can maybe fix the problem of, you know, handling this case forever after the upgrade with a global config parameter or command.
B
That might fix this specific issue, but again, I mean, I agree it could be complex to implement.
E
B
Yeah, so to avoid doing that forever, we can just add a configuration parameter that would tell us that we don't care about whatever stays in those old queues, or that all the old queues are done, or whatever, and then we stop doing that. This could be something that prevents us from doing it forever: just another if in the code checking this variable, instead of a RADOS command checking whether the queues are there or not. So that could be...
B
That could be one thing. Casey was also concerned about the actual implementation, which could have bugs and issues, and could be very difficult to debug and figure out what's going on; he suggested it would rather be a manual operation of creating a new topic, and...
B
So yeah, we can make that a per-bucket command, and we can even write a script that goes through all the buckets that have notifications and runs this command on them one by one, and then we don't have the problem of two types of queues working at the same time.
C
But then, what about the users who have this complaint? They would still be using the cyclic queue, right, if they already have some notifications? So they'll never get upgraded to the FIFO unless they recreate a new topic and resubscribe their buckets with the new topic.
B
Maybe we should separate those regardless, right? If we use the same Kafka topic name, then they don't care what kind of queues are inside the RGW; from their perspective, with Kafka, it's the same topic. But currently, I think there's no way to configure what the Kafka topic name is; whatever topic name you gave to your topic is the Kafka topic name, so they're tied together.
B
But I guess that's maybe going to be more complex to operate, but...
E
C
B
I mean, we can give it a try. Also, going back to what Kunal said about the subscribers: they're very asynchronous, right? They could be reading from this topic the next day. You can spin up a Kafka client and say, you know, give me everything from the beginning, and this could be highly asynchronous to whatever we do on our side. So there is a bit of an issue there.
B
So this pretty much sums up the issues that I wanted to present today. I really appreciate the feedback. I'm probably going to update the GitHub gist with...
B
...the comments around migration and around the dead letter queue, and, I mean, feel free to communicate over GitHub or email. Yes? No?
C
I just had one more requirement as part of this notifications work. Do we see notifications as tied to an S3 user, versus being a generic thing? Currently, topics are not associated with any S3 user, even though when you do create-topic, the REST command is sent by an S3 user. But none of the commands that we use, like topic list or notification list, spit out the S3 user that's associated with it. So it would be nicer to have the S3 user who created that topic associated with it, so that whenever you list anything, it's like...
B
C
So on that front, there is also no command which says: okay, this topic is tied to these buckets, or something, right? Currently the topic list is just generic; it lists all the topics and does not give you any mapping to buckets or S3 users. So what I'm trying to propose here is to have some way to tie those three together, probably.
B
Okay, so I think those are two different things. The first is to have, or to remember, the user that created the topic. I think this should be quite easy; we can just store that right in the topic, so that would be very easy to do. I'm not sure how we can do...
B
...let's say, all topics of a user. I don't think we're going to have this mapping, because currently, I mean, it's probably possible, but that would be something pretty new, because currently users don't know anything about topics. So adding the user to the topic is easy, just adding this information; giving the list of topics of a user, that is more difficult.
C
B
Oh, okay, so I guess this should be... yeah, I can just filter the list. If we have the information, I can just filter the list, so this should also be straightforward; it's something that radosgw-admin can do without maintaining a list of anything. Getting the list of buckets of...
E
C
B
I'll have to look into that. Okay, so yeah, I would like to separate this into the easy part and the not-easy part. The easy part is adding users to the topic and filtering the topic list by user; I think that'd be very easy. For doing the work for the buckets, I need to dig a little bit into that and see what needs to change there.
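The "easy part" can be sketched like this (hypothetical dict-based topic metadata; the real topic metadata lives in RADOS objects): store the creating S3 user on the topic at creation time, then filter the full topic list per user on the admin side.

```python
def create_topic(topics, name, owner):
    """Record the S3 user that issued the CreateTopic request as the
    topic's owner, alongside the rest of the topic metadata."""
    topics.append({"name": name, "owner": owner})

def topics_of_user(topics, owner):
    """Filter the full topic list by owner: no new index objects
    needed, just a scan over metadata that is already stored."""
    return [t["name"] for t in topics if t["owner"] == owner]

topics = []
create_topic(topics, "images-uploaded", owner="alice")
create_topic(topics, "logs-rotated", owner="bob")
assert topics_of_user(topics, "alice") == ["images-uploaded"]
```
The reverse mapping, listing all buckets of a topic, is the harder part discussed next, since it would require either scanning bucket metadata or maintaining a new index.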
C
B
I mean, from a functional perspective, yes, it makes sense. You have a Kafka server, with a topic in Kafka that our topic kind of represents, and you want to know which buckets send notifications to it, right? So from a functional perspective, it makes sense.
B
Maybe we can do something with the names of the topics, right? When I create the names, I kind of mangle all kinds of stuff into the name, so maybe based on that... I mean, if this is just writing some very complex logic in radosgw-admin, then it's not a big thing; it could be complex, but not too difficult. It's different if it means storing new things inside RADOS and updating them.
C
B
So, like, let's say I want to show a topic and give all the buckets that are related to it, right? So I think that's it.
C
B
I need to see if that is possible. If you just want to see, per bucket, whether there is a topic associated with the bucket, it might be easy, because I think I use the bucket name to create the object where I store all that stuff. So maybe there's a way to do that easily, just in radosgw-admin, without inventing new RADOS objects in the system, which I want to avoid.
C
Actually, my bad, I didn't put it correctly. So yeah, for the bucket complexity, what I was trying to propose was two options, right? Either we have a scenario where we could say: okay, please spit out all the buckets for this particular topic. If that's more complex, then how about just enriching the bucket stats to spit out the topics that are associated with the bucket? Either of them kind of solves the problem, but whatever is easier makes sense; that's what I was trying to comment.
B
Right, right, yeah. So I think the first one could be complex. The second one, I think, could be something done completely on the radosgw-admin side, which to me, even if it needs some coding, is not a big deal, because it's not storing and maintaining information inside some object. Right, okay, great. So there are a couple of things here that we can do, and some of them are quite easy and straightforward.
B
So I think that would also be good to add to the list. Thanks; that's all for now.