From YouTube: Ceph Tech Talk: Persistent Bucket Notifications
Description
Ceph Tech Talk Schedule: https://ceph.io/ceph-tech-talks/
Ceph Tech Talk Playlist: https://www.youtube.com/playlist?list=PLrBUGiINAakM36YJiTT0qYepZTVncFDdc
A
Hello, everybody, and welcome to another Ceph Tech Talk. I'm joined today by Yuval, who's going to be walking us through persistent bucket notifications. As I understand it, this provides an additional endpoint for our publish-and-subscribe module, rather, and it's coming in the Pacific release that we're looking forward to coming out this month. So thank you for taking the time to share this new feature with us, and I'll let you take the stage now. Thank you.
B
Thank you, Mike. Okay, so I'm going to go really quickly through bucket notifications in general, because this is really building on that. Bucket notifications are a mechanism to let external systems know what we're doing on the RADOS gateway. So whenever we put an object, delete an object, or copy one — we don't get a notification for fetching or getting an object, or listing a bucket, or something like that, but for any modification we do to an object, you can create a notification.
B
These can currently be sent to HTTP endpoints, AMQP (RabbitMQ) endpoints, and Kafka endpoints, and in the future we're going to add some more endpoint options. So this is something that was introduced in Nautilus and is also in Octopus, and it is working fine, but it has a couple of drawbacks.
B
So, first of all, I want to talk about the reliability of bucket notifications. The way it's done is that notifications are sent synchronously with the op. So whenever you put an object into a bucket, then after you successfully put the object into the bucket, but before we reply to the client that the object was successfully put, we also create a notification — which is basically a lot of metadata on whatever was done there — and send it to whatever endpoint was configured.
B
So if you configured your Kafka message broker as your endpoint, notifications are going to be sent there, and only when the broker acks — which means it's saying, "I now own this message, I have my own reliability and persistency mechanisms" — only then do we reply back to the client saying, yes, your put-object operation was successful.
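The synchronous flow just described can be sketched like this (a toy model; `rados_put` and `send_notification` are illustrative stand-ins for the real operations, not actual Ceph APIs):

```python
def put_object_with_sync_notification(rados_put, send_notification):
    """Synchronous (non-persistent) flow: the reply to the S3 client is
    only produced after the endpoint acked the notification, so a slow
    or dead endpoint stalls the whole operation (simplified model)."""
    if not rados_put():
        return "put failed"
    if not send_notification():       # blocks until ack or timeout
        # the object was already written -- nothing can be rolled back
        return "put succeeded, notification failed"
    return "put succeeded, notification delivered"
```

Note the middle case: the object change cannot be undone, which is exactly the atomicity gap discussed below.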
B
This gives reliability in the sense that if the RGW crashed in the middle of the process, then we're not going to send back the ack — or rather, the notification would not be sent — but the client also didn't get a successful completion of whatever they did, which means they're going to retry or do whatever they want, but they would know that the notification was not sent.
B
So it gives a certain level of reliability. What it doesn't do is handle the case where the endpoint has issues. Let's say the Kafka cluster is down, or maybe connectivity to the Kafka cluster is not reliable. Then we don't have a way to solve that. We would log an error, but we can't even reply with an error to the client, because we don't guarantee atomicity — we cannot roll back whatever we did to the object.
B
Let's say we successfully deleted an object, and then we try to send a notification to Kafka saying the object was deleted, but this failed for some reason. We cannot reply that the operation failed, because the object was deleted — only the notification part of it failed.
B
Another problem — and this is really something that actually happened in the field and was the trigger for adding persistent notifications — is: what if the endpoint is down, or is very slow?
B
So if you send something and you immediately get a reply back saying, you know, the endpoint is down — the Kafka cluster is down, the RabbitMQ server is down — then everything is pretty much fine. But if you don't get a reply and you're kind of waiting for some amount of time until a timeout expires, and only then do you know that things didn't go through, then because of this synchronicity of everything with the operation, that's going to really slow down the RGW and can actually bring it to a complete halt.
B
What you'd want instead is: you send something but you're not waiting for the reply — you reply immediately back to the client and make sure that the message arrives at the endpoint later on. Now, just doing that is actually possible even with the existing model: in the synchronous-notification model you have an option to say it's really fire-and-forget.
B
"I don't really care so much whether the notification is going to reach its destination or not, and I'm not going to wait for the ack." In this case it will just work — it doesn't matter if the endpoint is down or not, nothing is going to slow you down. You're going to send the notification and, you know, hope things will work, and if it doesn't work you won't know about it. But this doesn't work either, because sometimes the reliability of notifications is really critical.
B
I tried to summarize that here in the table: regular versus persistent notifications, what kinds of failures they support and what kinds of guarantees they give. And, as I said, with synchronous notifications I didn't have atomicity, so I couldn't say: okay, the operation was successful, but the notification failed.
B
"So, you know, I'm going to roll back the operation" — no, that doesn't work. The way I did it, I just sent the notification at the end of the process, after I knew everything was successful, but if the notification failed, there was no way to roll back. So there's no atomicity between the RADOS operations and the notification sending. Persistent notifications were added to tackle that.
B
Persistent notifications use two phases: first you reserve space on a queue, and only later do you commit the notification into it. These two phases give pretty good guarantees, because if you were able to reserve something on the queue, it means you have space on the queue; and if you weren't able to reserve, it means something bad happened on the other side of the queue — there are problems, or disconnects, or things are down, so you can't really push into the queue. So you bail out early, before you did the operation anyway — you bail out and say this operation is a failure.
B
I cannot do the notification, so I'm not even going to try to do the RADOS-operation part of it. So this is why the two-phase commit creates some kind of atomicity. It's not a hundred-percent guarantee, because the queue may not be full, so you reserve something on the queue successfully...
B
...you do the RADOS operations and they're also successful, but when you try to commit, maybe you had a disconnect with all the OSDs, or some other problem — those are more rare — and in this case there's no way to roll back, because you already did your RADOS operations. But this shouldn't happen; I mean, this would be some internal problem with the Ceph cluster. For the cases where you have connectivity problems or some issues with the endpoint, that would be covered by this model of reserving and then committing.
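A minimal in-memory sketch of the reserve/commit flow described above (the real queue is a RADOS-backed object and the details differ; all names here are illustrative):

```python
class NotificationQueue:
    """Toy model of the two-phase-commit notification queue."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.reserved = 0        # slots held by in-flight operations
        self.entries = []        # committed notifications awaiting delivery

    def reserve(self):
        """Phase 1: reserve a slot before the RADOS op; fail fast if full."""
        if self.reserved + len(self.entries) >= self.capacity:
            return False         # queue full -> the S3 op is rejected early
        self.reserved += 1
        return True

    def commit(self, event):
        """Phase 2: the RADOS op succeeded, so turn the reservation
        into an actual queue entry."""
        self.reserved -= 1
        self.entries.append(event)

    def abort(self):
        """The RADOS op failed: release the reservation."""
        self.reserved -= 1


def put_object(queue, event, rados_put):
    """Put an object with a persistent notification: reserve, do the
    RADOS operation, then commit (or abort)."""
    if not queue.reserve():
        return "queue full, operation rejected"
    if not rados_put():
        queue.abort()
        return "RADOS op failed"
    queue.commit(event)
    return "ok"
```

Note how a full queue is detected at the reserve step, before any object data is touched, which is what makes the early bail-out possible.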
B
So this guarantees atomicity but, as I said, everything is asynchronous, which means that when I'm sending a notification, I'm just putting stuff into the queue. Well, maybe another word about this queue: this queue is a RADOS-backed object, so it also survives crashes, power failures and whatever, because it's backed by the RADOS object. So if the RGW crashed, or if all of your RGWs crashed, or your hardware really doesn't work anymore, then everything is still backed in the RADOS object.
B
Maybe a small comment here: even when the RADOS gateway crashes, there could be reservations that were taken on the queue but not committed or cancelled yet. In order for those not to accumulate and eventually fill up your queue, there's a periodic cleanup: reservations that have been there for too long are just going to be deleted.
B
Now, as I said, the whole process is asynchronous, which means that I'm just pushing stuff into the queue, and I need to send it over — especially for the cases where you have disconnect problems with your Kafka cluster, or you have other issues there. And for that, the only way to actually guarantee delivery, or really solve the original problem of the reliability of your endpoint, is to have a retry mechanism, and your RGW...
B
...is the one that reads from the queue and sends the messages. Now those sends are just like the original sends, completely synchronous: you take from the queue, you try to send, and whenever you get the ack you clear that entry from the queue. If you didn't get the ack, you don't clear it from the queue, which means that next time I'm going to go through the queue again, and whatever was not acked I'm going to resend and retry, and so on.
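The retry loop described above can be sketched roughly like this (an in-memory model; `send` is an illustrative stand-in for the actual delivery-and-ack against the endpoint):

```python
def drain_queue(queue, send):
    """One pass of the owning RGW over its queue: try to deliver each
    entry; clear only the ones that were acked, keep the rest so they
    are retried on the next pass (simplified model)."""
    remaining = []
    for event in queue:
        if send(event):              # ack received -> entry is cleared
            continue
        remaining.append(event)      # no ack -> kept for the next pass
    return remaining
```

Repeated passes naturally retry un-acked entries until they succeed, or until the queue fills up and new operations start being rejected.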
B
If I have big problems and I'm not managing to send anything, the queue would have built up and accumulated, and eventually, you know, would fill the entire size of the queue. That would be signaled to the RGWs doing the operations by the fact that this queue is full. So in this case I would just fail the operation immediately, because the queue is full; I need to send a notification...
B
...but I cannot, because I cannot push new notifications — since, on the other hand, I cannot really send them and get acks for them. And therefore the problem trickles back as push-back to the client. But in this case it doesn't push back to the client in a way that would slow the client down or anything.
B
It will just immediately fail any operation that requires notifications to an endpoint or destination whose queue is full. The indication that the queue is full is immediate, and even though I would get errors — which is what I want to get in this case — I won't get the slowdown of the RGW. So this is how this mechanism tries to tackle the problem.
B
Now I'm going to, you know, explain a little bit about the queues and the topics and how they're managed. So in regular bucket notifications we have something called a topic. A topic is really a definition of an endpoint — let's say a certain IP address of a Kafka cluster, or a RabbitMQ cluster, or a web server, or anything.
B
Now, whenever I'm defining a topic, I can say that this topic is a persistent topic. It is a property of the topic itself, so I can have two topics to the same — let's say — cluster, the same endpoint: one of them would be persistent, one non-persistent, the regular one.
B
With the regular one, whenever I get the op, I'm going to synchronously send the notification to this Kafka cluster. The persistent one behaves a little differently. When I'm creating the topic and I'm marking it as persistent, the first thing I'm doing is creating the queue — this is the RADOS object, the two-phase-commit queue that I'm later going to use in order to send notifications to this endpoint via the persistent topic.
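As a rough illustration, persistent topics are created through the RGW's SNS-compatible API with a `persistent` attribute. The helper below only builds the request parameters — the attribute names follow the Ceph documentation ("push-endpoint", "persistent"), but verify them against your release, and the host names here are made up:

```python
def persistent_topic_request(name, endpoint_uri):
    """Build CreateTopic parameters for a persistent topic whose
    notifications are delivered to the given endpoint (e.g. a Kafka
    broker). Creating such a topic also creates the backing queue."""
    return {
        "Name": name,
        "Attributes": {
            "push-endpoint": endpoint_uri,  # e.g. "kafka://broker:9092"
            "persistent": "true",           # marks the topic persistent
        },
    }

# With boto3 this would be passed to an SNS client pointed at the RGW
# (hypothetical endpoint URL):
#   sns = boto3.client("sns", endpoint_url="http://rgw-host:8000", ...)
#   sns.create_topic(**persistent_topic_request("mytopic",
#                                               "kafka://broker:9092"))
```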
B
Now, it is important to understand that you're going to have a queue per topic, and this is critical for the case where, let's say, you have multiple endpoints: some of them can be good, some of them can be bad, some of them can be down, some of them can be slow, and you don't want the slow ones or the bad ones to impact the other ones. So it could be that one of your Kafka entry points or brokers is down, but the other one is good.
B
So I'm creating this topic and, as a result, this queue. And what's going to happen is, whenever I have an operation on a bucket that has a notification configuration that ties to this queue, I'm going to push the notification onto the queue. The RGW getting the operation — this is the only thing it's going to do, just push stuff into the queue. Now I need somebody to pop stuff from the queue and send it over.
B
Consumers should only have to handle duplications and reordering when problems happen; they shouldn't be required to handle them on a regular basis. This is why we want to guarantee that, in regular operation of the system, there's no reordering and no duplication. So for that purpose you need a single RGW at any certain point in time that pulls stuff from the queue and sends it over. We're using object locks for that.
B
So whenever a new topic is created, arbitrarily one of the RADOS gateways is going to see that — because there's a list of topics in an object — and try to take ownership of this topic.
B
It will do that using a lock on an object, and if it is successful, then it becomes the owner of this topic: it will kind of own the queue, start to read from it, and send to the endpoint. If it failed to take the lock, it means that somebody else owns this queue, which is good, because somebody else handles it. Now, the lock over the queue has to be renewed, and this is for the case where RGWs go down.
B
So if one of the RGWs owned a couple of queues and served them — meaning it emptied them and sent the notifications over — this RGW can crash and go down, and then we need to make sure that, after a while, the lock expires. So some other RGW that tries to take the lock would all of a sudden succeed, because the lock that the crashed RGW held expired, and then the new RGW would become the new owner of the queue and would start to empty it and send messages over.
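A simplified model of the expiring-lock ownership described above (the real implementation locks a RADOS object; this toy version just tracks an owner and an expiry time, and the RGW names are made up):

```python
class QueueLock:
    """Toy model of queue ownership via an expiring lock."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.owner = None
        self.expires_at = 0.0

    def try_acquire(self, rgw_id, now):
        """An RGW tries to take ownership; succeeds if the lock is free
        or the previous owner's lock has expired (e.g. it crashed)."""
        if self.owner is None or now >= self.expires_at:
            self.owner = rgw_id
            self.expires_at = now + self.timeout
            return True
        return False

    def renew(self, rgw_id, now):
        """The current owner periodically renews; anyone else (or an
        expired owner) cannot."""
        if self.owner != rgw_id or now >= self.expires_at:
            return False
        self.expires_at = now + self.timeout
        return True
```

If the owner stops renewing — because it crashed — any other RGW's next acquisition attempt after the timeout succeeds and takes over the queue.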
B
So this is how the mechanism works: once you own the queue, you renew your lock, you know, periodically, and if you don't renew it, it means that you're down, and somebody else can take the lock and start serving this queue. This is how we guarantee that there will be no reordering or duplications on a regular basis and, on the other hand, that there will always be an RGW that manages a queue and serves it.
B
When you delete a topic — because you don't need this topic anymore, or you decommissioned the Kafka cluster and you want to take the topic down — it also deletes the queue, and there will be no new notifications being sent for this one.
B
What else... yeah, I mean, there are a couple of commands that can help you figure out which RGW owns which queue. This could be helpful for the purpose of debugging, like, a large system that has lots of RGWs. The kind of gotcha that exists there is that, currently, the identification of the RGW that owns the queue is part of the output of a rados command.
B
The owner of a queue is identified by either the RADOS client ID or the address identifier, and you don't have a good way to get those except by looking at the log file — the RGW log file. So we probably need to improve on that, but for now, the only way to kind of associate a certain queue with the RGW that serves it and sends the notifications from it is by running this command and then matching that against whatever you see in the RGW log.
B
So this is pretty much all I wanted to talk about when it comes to persistent notifications — mainly, you know, to give some hints about debugging and how the mechanism works, in case somebody is using them and things are not working, or not working as they expect them to.
C
Sure, I have a quick question. Near the end you had mentioned two locking timer periods: one was 90 seconds and one was 30 seconds. Is...
B
I think it's hard-coded, I mean.
B
So, if I own a queue, I'm going to renew my ownership every 30 seconds, and 90 seconds is the timeout for the lock to expire — which means that if I'm down, then at most 90 seconds after I'm down another RGW is going to pick up this queue and start to own it and, you know, serve it. So what would be the concern — why would we need to change that? I mean, there could be good reasons; I'm just asking.
B
I also add, you know, a little bit of jitter to those timeouts, to make sure there aren't too many race conditions at exactly the same time — if both RGWs started at the same time, I don't want their cycles to be completely in sync. So it's not exactly 90 seconds; I added some jitter there to make sure they don't really match.
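The renewal cycle with jitter might look roughly like this (the 30/90-second values are the hard-coded ones mentioned above; the jitter amount here is an illustrative assumption, not the actual value used):

```python
import random

RENEW_INTERVAL = 30   # seconds between lock renewals (hard-coded today)
LOCK_TIMEOUT = 90     # seconds until an unrenewed lock expires

def next_renewal_delay(jitter_fraction=0.1):
    """Delay until the next lock renewal, with a little random jitter
    so that RGWs started at the same moment don't stay in lockstep."""
    jitter = RENEW_INTERVAL * jitter_fraction
    return RENEW_INTERVAL + random.uniform(-jitter, jitter)
```

The jitter only perturbs the renewal cadence; the 90-second expiry still bounds how long a dead RGW's queues stay unowned.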
B
Could that be a use case too? Yeah, could be. It could be, I mean, depending on the rates — and especially if there are, like, only two RGWs, this could be a valid concern too. Because, I mean, what can happen is that if the rates are really high, then during those 90 seconds the queue can fill up, and then all of a sudden you will start to get errors — and, you know, you just waited 90 seconds for no good reason, because you could have picked up this queue in 10 seconds.
B
Maybe. The reason I chose those numbers is that I don't want, like, false positives — because, you know, one RGW is really busy doing something for a couple of seconds and didn't have the chance to renew; then I don't want it to lose the ownership for no good reason. But 90 seconds could be decreased, I think. I mean, you can submit a feature request in the tracker for making this timeout configurable, or even for making the hard-coded numbers smaller.
A
So maybe you could speak on future development in terms of this feature. You mentioned that currently, to see a queue's ownership, you have to look it up by running this particular command. What is the plan, if any, for Quincy?
B
Yeah, yeah — I'd be happy to talk about some future stuff with bucket notifications. So, first of all, yes: having had to debug that myself kind of really proved to me that we need a little bit of tooling here. You know, trying to match something by scanning a log file really doesn't work so well, so having a command that really tells you which one owns what is going to be great, and that's going to be added shortly.
B
We're also thinking — actually this is kind of almost done and in a ready state — that we're going to add more endpoint types. As part of the 2020 GSoC project that we were part of, there was a student who added AWS endpoints: the student added a Lambda endpoint and an SNS endpoint. SNS is like Kafka, only it's an AWS service, and Lambda is, like, the serverless-function service of AWS.
B
So instead of sending a notification to Kafka or to RabbitMQ, as in an on-premises case, you can send a notification to an AWS SNS that would do whatever it wants with it, or you can even trigger an AWS Lambda with it. So this is almost ready — hopefully pretty soon.
B
So that's another, and we're also talking about some other options we might be adding. We're supporting AMQP — the RabbitMQ version of AMQP, which is the more common one — but there's also an AMQP 1.0 version that is supported by ActiveMQ (so both the Apache project and Red Hat products support that), and we're going to add this as another endpoint there. So those are a couple of directions that we have there.
B
I mean, there are always fixes and enhancements to our implementation of the communication with Kafka and other things, but overall this is where it's currently headed. Another thing that is kind of on the back burner, but might also be in Quincy: as I said, notifications are sent when an object is put or deleted or copied, and so on.
B
We also want to send a notification when an object is deleted due to a lifecycle policy, but this is currently not supported. This would be a very nice enhancement, in the sense that, you know, if somebody is interested in deletions of objects, a lifecycle deletion is also a deletion of an object. So this is another use case, and this is also going to be added, hopefully in the near future, as a notification trigger.
A
And thanks for mentioning the Google Summer of Code — I did put a link into the chat in case people want to take a look at that. I noticed as well that you have those two projects: the next-level Advanced Message Queuing Protocol project, as well as the NATS one. So that's pretty fun.
B
Just one word about that: the NATS one is an example of how we might want to add notifications for endpoints that are not that common — I mean, like, one-offs. So the idea there is that, you know, you have your NATS; it's like a Kafka, it's a cloud-native, like a CNCF project, whatever, but it's not very commonly used, so I wouldn't go and put that in our code base. But, you know, maybe somebody really wants to send notifications to it.
B
I mean, so we want to enable that via, like, a scripting mechanism that is more of a kind of ad-hoc integration point than writing it into the system, putting it in our code base, and so on. So that would be an example of something like that.
A
And — I apologize for my ignorance with this feature in particular — but is this something that might get surfaced up to be configurable, like in the Ceph dashboard, as an operator of a Ceph cluster?
B
So, there is work to add bucket notifications to Rook.
B
It should be — I'm not 100% sure where it stands right now. I was involved in the design process, and we're going to add that to the CRDs. So whenever you define a bucket, you can also define — I mean, you'll be able to define — topics and then tie the topics to the buckets via notifications, and that would be part of Rook. So yes, currently defining a notification can sometimes be a little bit cumbersome.
B
I mean, you have to run a couple of commands — although you can use the standard tooling for that: the AWS CLI tool and boto3 both support the topics and everything. So it's not like you have to craft your own REST client for that; you can use a standard tool. But still, writing a YAML and pushing that into an OpenShift or Kubernetes cluster is easier, so there's the design for adding that into the bucket or bucket claim — I mean, there are a couple of things there.
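For illustration, tying a topic to a bucket goes through the standard S3 notification-configuration API. The helper below only builds the payload — the notification ID, topic ARN, and endpoint URL are hypothetical examples, and the event names are the standard S3 ones:

```python
def notification_configuration(notification_id, topic_arn):
    """Build an S3 PutBucketNotificationConfiguration payload that ties
    a (possibly persistent) topic to a bucket. The topic ARN is the one
    returned when the topic was created."""
    return {
        "TopicConfigurations": [
            {
                "Id": notification_id,
                "TopicArn": topic_arn,
                # notify on object creation and removal
                "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"],
            }
        ]
    }

# boto3 usage against an RGW endpoint (sketch; names are assumptions):
#   s3 = boto3.client("s3", endpoint_url="http://rgw-host:8000", ...)
#   s3.put_bucket_notification_configuration(
#       Bucket="mybucket",
#       NotificationConfiguration=notification_configuration(
#           "notif1", "arn:aws:sns:default::mytopic"))
```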
A
So for the operators tuning in: for enabling this in Pacific, what steps exactly do they have to take? You did mention the CRDs inside of Rook, but I don't know if that's actually necessary.
B
I mean, if this is just you running a Ceph cluster, then everything is supported via standard interfaces, so you can use the AWS CLI. You can't really use s3cmd, because the topic commands and such are not S3 commands — those are SNS commands; those are standard AWS tool-set commands, but they're not S3 commands. So you need the AWS CLI or boto3 — both; it's pretty much the same thing — and we support them; we work with them.
B
Here we have explanations about how to actually do that, specifically for boto3 and the AWS CLI. We even extended and changed the interface a little bit — we have more than the standard AWS support — but we also provide a file, a JSON file, which is an extension, and if you put this file in place, then you're automatically going to get all of our extensions to the standard tool.
B
You don't need to change one line of code in Python or in the tool; using this file you can actually get all of our extensions and modifications to the interface. So it shouldn't be too bad — I mean, you don't need to craft your own messages or a curl-based client or whatever. You just use the AWS tool, or boto3 if you use Python, and things should be just working, so it shouldn't be too bad.
A
Well, so if people wanted to get involved and join you for future development in Quincy, how may they get in contact with you? I'm assuming on IRC and the Ceph mailing lists, but...
B
A
All right, all right — well, great! Thank you so much for taking the time. I understand you're taking this especially after hours; you're in your own evening now, so I hope you do enjoy the rest of your evening and have a relaxing time. And I thank everybody as well for joining us and tuning in for this neat feature inside of Pacific. I would also like to remind people that we have our Ceph user survey, which is ending April 2nd, so please fill it out.