Description
Every month the Ceph Developer Community meets to discuss one aspect of the Ceph code and spread knowledge of how it works and why it works that way.
This month we're joined by Yuval Lifshitz on RGW Bucket Notifications with AMQP/Kafka.
Schedule: https://tracker.ceph.com/projects/ceph/wiki/Code_Walkthroughs
Playlist: https://www.youtube.com/watch?v=nVjYVmqNClM&list=PLrBUGiINAakN87iSX3gXOXSU3EB8Y1JLd&t=0s
A: Welcome, everybody, to this code walkthrough. Mike Perez here. I'm joined by Yuval, who's going to be speaking about Rados Gateway bucket notifications, specifically with AMQP and Kafka. Did I get that right? Yuval, will you please take it away?
B: Sure, thanks, Mike. I'll try to go through the code and explain how things are working. I'm going to share my screen. One second.
B: I would appreciate it, guys, if you tell me whether you can see the screen, and whether the font is large enough and clear enough to look at the code.
B: Okay, just a couple of words about the feature itself: the feature is about sending notifications on actions happening on objects. So whenever an object is created or deleted, we need to send a notification to a pre-configured endpoint that waits for those notifications.
B: That's really the feature in a nutshell, but I won't spend too much time on it, because I want to get into the code. A couple of things before that: the feature has two main modes. One mode is push mode, and this is pretty much what I've described, which means that when something happens, we push the notification out to the endpoint. The other is pull mode, where events are stored inside Ceph and pulled from the outside.
B: We plan to deprecate that mode in the future, so I won't touch it in the code walkthrough, but I'm still mentioning it because you're going to see a bunch of code that I'll ask you to ignore and that I'm not going to explain; this is the reason. Another, similar disclaimer: when we first started the feature, the API for configuring it was not AWS compliant, and that API is also deprecated.
B: Now, there are two main flows in the code. One is the flow where you provision or configure the bucket notifications: what the endpoints are, which buckets need notifications on them, filters on those notifications, and so on. The other flow is when you actually perform the actions, like put object or delete object; if a notification was previously configured, I'm going to show the flow of what happens then.
B: So this is just to map the different things that we're touching here.
B: I'll start from the flow that describes the way things are configured, and these are the files that manage this part: rgw_rest_pubsub and rgw_rest_pubsub_common. About the reason we have both "pubsub" and "pubsub_common": one thing is that "pubsub" here means bucket notifications; later on, when I talk about pub/sub, that is specific to the thing I want you guys to ignore.
B: That's the pull mode, but for historic reasons the file names say "pubsub". So rgw_rest_pubsub and rgw_rest_pubsub_common are the files that handle the configuration, and the "common" versus "pubsub" split is that whatever is common is shared between the compliant and the non-compliant definitions of the API; this is why we have these two files. Just to see what's in there: there's something called a topic.
B: There are APIs here to create a topic and to delete a topic. Now, there's also something called subscriptions. I'm going to skip all the APIs that have to do with subscriptions, because subscriptions are exactly the real pub/sub mode, where events are stored inside Ceph and pulled from the outside, and this is something we want to deprecate. So I'm not going to look into these. The next set of important APIs is the notifications.
B: Now, a notification is what glues a topic, which, as I said, is the definition of an endpoint, to a bucket. So the notification glues those things together, and on top of that it also provides a specific filter, rather than just saying "whenever I get any change, I want to send to a specific, let's say, Kafka broker."
B: It also defines fields saying, you know, I'm only going to send the creations, or the deletions; or there's even more fine-grained filtering that you'll see later on, saying, for example, I want to send creations only for objects that start with "x". All those definitions and configurations are part of the notifications, and we have a set of APIs here with which we can create a notification.
B: There are maybe a couple of things there that we need to look at. For example, all the configurations are stored as system objects. So when you say a topic, it's really a system object holding that topic, and a notification is a system object holding all the notifications, that is, the glue between the topics and the specific buckets. All that stuff is implemented in another set of files, and I'm going to show them in a second.
B: You'll see that it uses something called "create topic" on an object, which we'll see later on, and that is what actually creates the system objects and everything.
B: This is the file that deals with the compliant interfaces. When it comes to writing the system object, it doesn't matter whether it is the compliant or the non-compliant API; but when we talk about how to extract the parameters, the get_params functions, it is different, and we'll just go through the compliant API. So this is pretty much the structure of those AWS commands: they are POST commands, but the parameters are not encoded in the URL.
B: The parameters are encoded in the body of the message and extracted from there; all the different parameters are extracted here.
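As a rough illustration of that extraction (the form-encoded body, function name, and parameter names here are assumptions for the sketch, not RGW's actual get_params code):

```python
from urllib.parse import parse_qs

def get_params(body: str) -> dict:
    # AWS-style POST bodies are form-encoded: Action=CreateTopic&Name=mytopic&...
    parsed = parse_qs(body)
    # parse_qs returns lists of values; flatten single-valued parameters
    return {k: v[0] if len(v) == 1 else v for k, v in parsed.items()}

params = get_params("Action=CreateTopic&Name=mytopic&push-endpoint=kafka://broker:9092")
```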
There are a couple of special cases. For example, when you define an endpoint, let's say a Kafka endpoint, whoever configures it may require that the endpoint use some secure communication, or use a password in order to log in, and we want to prevent anything like that from leaking.
B: So we don't want to allow any configuration of secrets over a non-secure interface. We check whether there are any secrets in the message, and if there are, we make sure the request came over a secure socket; otherwise we just say this is incorrect, don't do that, and we don't configure anything over an insecure socket.
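A minimal sketch of that check, assuming the secret takes the form of credentials embedded in the endpoint URI (the function name and parsing are illustrative, not the actual RGW code):

```python
from urllib.parse import urlparse

def validate_topic_config(push_endpoint: str, is_secure_transport: bool) -> None:
    # Reject a topic whose endpoint URI carries credentials (a secret)
    # when the configuration request did not arrive over a secure socket.
    parsed = urlparse(push_endpoint)
    has_secret = parsed.password is not None
    if has_secret and not is_secure_transport:
        raise ValueError("refusing to configure secrets over an insecure connection")

validate_topic_config("kafka://user:pass@broker:9092", is_secure_transport=True)  # accepted
```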
There's another set of special cases here, and this is for persistent notifications. That's still a different type of notification, but we'll touch on it later. And this is pretty much it: what you'll see is that in this file, with the compliant APIs, the get_params is different, but the actual action of writing the system object holding the configuration is the same. That's why it's in the _common file. So this is just to map those things.
B: The next thing we're going to look at is what we called the pubsub part: the actual structures and interfaces that store the information configured over REST. rgw_pubsub.h is where the structures are defined, and in those structures there's quite a bit of legacy from the pull mode, the pub/sub mode. So I'll try to put notes here and say, you know, don't look at that, this is not important.
B: For now we can just go through the different objects we're storing. The key filter is what I said: this is the data structure that holds whatever does the further filtering. The basic filtering is based on type, like: I just want to get notifications on object creations, or object deletions, or object copies, or multipart uploads, and stuff like that.
B: That's the event type, but then there's further filtering based on the object name: prefix, suffix, and regular expressions, and this is held here as the filter.
B: And there's another set of filters based on tags and attributes, which you can also use. Those are like key-map filters, saying: I only want to get notifications if the object has a tag named x with value y, or only if there's an attribute with a certain name that has a certain value.
B: So that's another set of filters, and overall you can have a key filter, which is based on the object name (prefix, suffix, or regex); you can have the attribute filter, that is, the metadata filter; and you can have a tag filter. You can combine those filters, and they are stored here in this object.
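A sketch of how those layered filters might combine (illustrative logic only; the real matching lives in the C++ functions shown later in the walkthrough):

```python
import re

def match_key(key: str, prefix: str = "", suffix: str = "", regex: str = "") -> bool:
    # an object key matches only if it passes every configured rule
    if prefix and not key.startswith(prefix):
        return False
    if suffix and not key.endswith(suffix):
        return False
    if regex and not re.search(regex, key):
        return False
    return True

def match_map(configured: dict, actual: dict) -> bool:
    # tag/metadata filters: every configured key must exist with the exact value
    return all(actual.get(k) == v for k, v in configured.items())
```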
B: This is the API syntax, how those filters are sent. By the way, those filters are not all supported by AWS: I think AWS has just the prefix and suffix rules, if I'm not mistaken. The regex, metadata, and tag filters are our extensions. Now, there's something called notifications, which you saw before.
B: You know, the REST API had a create notification, and after getting all the parameters from the REST message, you need to actually call the function that stores this notification inside the system object. This is the notification structure: there's an id for the notification, which is useful when you later want to delete a specific notification, and you have events. This is what I said is the basic level of filtering: put, delete, copy, and so on and so forth.
B: As I said, a notification is an association between a topic and a bucket. So this is the name of the topic this notification is associated with, and this is the extra level of filters: the prefix, the regex, the suffix, or the tag and metadata filters.
B: So these are the data structures. Another important data structure we have here is the event that we're sending. Overall, there is the notification event, and the information we're sending is mainly metadata on the object that something happened to. The structure is standard, taken from the AWS structure of messages. There's information here that combines information about the transaction with information about the change itself.
B: So, you know, where we got this request from, and also information about what actually happened: the bucket, the object, and all kinds of things like that. This is the information that we're sending.
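As a hedged illustration, a payload in the AWS S3 event-record shape the speaker describes might look roughly like this (the field values are made up; the exact fields RGW fills in are defined in the structure on screen):

```python
import json

# Hypothetical event record in the AWS S3 notification shape:
# transaction info (eventSource, eventName) plus object info (bucket, key).
event = {
    "Records": [{
        "eventVersion": "2.2",
        "eventSource": "ceph:s3",
        "eventName": "ObjectCreated:Put",
        "s3": {
            "bucket": {"name": "mybucket"},
            "object": {"key": "photos/cat.jpg", "size": 1024},
        },
    }]
}
payload = json.dumps(event)  # a JSON string like this is what gets pushed to the endpoint
```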
B: This is the data being held here. In general, we don't need to store it; I mean, as I said, in pub/sub mode we store those events inside RADOS objects, but in the case we're talking about, we're just sending them over. This is why the bufferlist encoding here is actually not very useful.
B: What is more useful for our case is that we have JSON encoding, and this is what is used when sending those messages to whatever endpoint is on the other side. This is the old event, so we're not going to look at that.
B: Now, this is the definition of the endpoint we want to send to. The bucket name and oid prefix are only for the pub/sub case, where the endpoint is actually a bucket in which we store the information, so you can just ignore those. What is more interesting for us is who we're sending the messages to: we have the push endpoint, the endpoint arguments, and the topic ARN.
B: We also have two flags here that characterize the topic. One is whether it stores a secret, because, as I said, this is important: not only do we not want to allow anyone to configure a topic that has a user and password in it if it's not over SSL, we don't want to allow them to fetch those later on either.
B: So if somebody wants to fetch a topic, or get the list of topics, over HTTP, and we know those topics have stored secrets, we're not going to reply to them. This is why we store that flag there. And "persistent notification" is something special that we also need to save here as information.
B: This is the definition of a notification, so it includes the id of the notification as well. And those are the non-compliant ones.
B: Okay, so we've seen the REST structures and the data structures, and now I want to have a look at what we do with them. As I said, we want to store those objects as system objects.
B: In the header files we have all the bufferlist encoding and decoding, which is used to store the configuration in the system objects. Here we also have lots of XML encoding, because when somebody wants to get those configurations back, we need to encode them in XML, because that is the REST API.
B: So we have lots of functions here that do the XML encoding for those things, and there should also be JSON encoding for the bucket notification structure, because that is encoded in JSON when we send it over.
B: We have functions here that do the matching: this is the matching logic that says whether a specific value matches the filters that were defined.
B: Okay, now we get to the actual writing and management of those system objects. We have one object, a global object, that holds all the topics. It is global, but we append the tenant name to its name, so it's actually global per tenant: per tenant, we have one global object holding the configurations of all topics.
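That per-tenant scheme can be sketched like this (the object-name format and class shape are assumptions made for illustration, not the RADOS object naming RGW actually uses):

```python
class TopicStore:
    # One "global" topics object per tenant, keyed by a name that embeds
    # the tenant, mirroring the scheme described above.
    def __init__(self):
        self._objects = {}

    def _obj_name(self, tenant: str) -> str:
        return f"pubsub.{tenant}"  # hypothetical naming

    def write_topics(self, tenant: str, topics: dict) -> None:
        self._objects[self._obj_name(tenant)] = dict(topics)

    def read_topics(self, tenant: str) -> dict:
        return dict(self._objects.get(self._obj_name(tenant), {}))
```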
B: So we have read-topics and write-topics functions to get at this information. Now, the way we manage notifications, and this is a little bit confusing: a notification is what ties the topics to the bucket, but instead of calling them notifications, we call them topics, only stored per bucket; so those are "bucket topics".
B: So whenever you see the term "bucket topics" here in the code, this is actually notifications. If we jump for a second to the definition: it is a list of topics with filters, and it is stored per bucket, so it's an object per bucket.
B: Those are bucket topics. As I said, for the topics themselves we have one global object holding the topics for the entire system, but for the notifications we have an object per bucket, because it's the association of topics to a specific bucket; but its configuration also has those filters we mentioned before. This is why, when you see the structure here, it has the topic name string.
B: It has all the filters. And if you want to read those topics: we have a read topics here, but that read topics reads all the topics from the global object, while this one reads all the topics from the bucket-specific topics object, so it just gets you the notifications of this specific bucket. Similarly, the write one writes a topic to a specific bucket, which is, in other words, creating a notification.
B: What else? Okay, so this one is probably the most complex function here, and it is the one that creates a notification. When you create a notification, you actually associate a topic with a bucket.
B: It takes the event, which is the first layer of filtering of this notification, and then the optional filter that has all the special stuff regarding key names, metadata, tags, and so on. The first thing it does is verify that the topic indeed exists, because if it doesn't exist, you cannot create the notification. Then it reads the topics from the bucket, that is, the existing notifications of the bucket.
B: It adds the new entry to this list of notifications of the bucket, that's the process of adding a notification, and writes them back. And if we got the special filter, we also put the filter in. Similarly, there's a remove function that takes a specific notification out of the bucket.
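The create flow just described, verify the topic, read the bucket's existing notifications, add the new one, write back, can be sketched as follows (names and dict shapes are illustrative, not the C++ signatures):

```python
def create_notification(topic_store, bucket_notifications, notif_id, topic_name, events, filt=None):
    # verify the referenced topic exists before creating the notification
    if topic_name not in topic_store:
        raise KeyError(f"topic '{topic_name}' does not exist")
    entry = {"topic": topic_name, "events": list(events)}
    if filt is not None:
        entry["filter"] = filt            # the optional extra-level filter
    updated = dict(bucket_notifications)  # "read" the existing notifications
    updated[notif_id] = entry
    return updated                        # "write back" the per-bucket object

def remove_notification(bucket_notifications, notif_id):
    # take a specific notification out of the bucket
    updated = dict(bucket_notifications)
    updated.pop(notif_id, None)
    return updated
```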
B: And you can also remove all notifications of a bucket. This is very useful, especially when you delete the bucket: you don't want those notifications to remain, and you don't want to do a complete cleanup of everything one by one, so this is an easy way to remove all notifications of the bucket.
B
Bucket
here's
the
function
to
create
the
topic.
B: You can also remove a topic. Now, these are the subscriptions, so we're not going to look into that: those subscriptions are what we said is the pull mode, the pub/sub mode, so we're just going to ignore them.
B: Okay, so this really summarizes the part about how to configure topics and notifications; this is really the configuration flow, and it is done beforehand. That's it. So now, if you have questions about this part, or something wasn't clear...
B: Maybe this is a good time to ask them, because the next part is going to be about the process of actually sending notifications, what happens when the gateway is running and handling ops, so that's going to be a slightly different thing. I'll be happy to see any questions now.
A: I don't see any in chat; I'll keep a lookout.
B: There was a question about whether it's possible to configure this from the console. The REST APIs I've shown before, sure, you can send those REST messages, but handcrafting those messages is usually very difficult and not recommended; you can use something like the AWS CLI command instead.
B: It's the AWS SNS API, so s3cmd, if you're used to that tool, won't really help you with most of the things needed here to do this configuration. But again, crafting those messages by hand can be quite difficult, and it's probably better to use those tools.
B: We don't have radosgw-admin commands to configure those topics and notifications. We do have radosgw-admin commands to fetch the configurations, just to make it easy to see what's going on, but you cannot configure them through that tool.
B: Why we need these two functions for notifications I'll explain when we talk about persistent notifications; in theory, you just need one function to send those messages out. You'll see those functions in many places, because we want notifications when many things happen. For example, in RGWPutObj::execute, which is where all object uploads, or most of them, happen.
B: We want to get notifications there in some cases, so I'll show one simple flow, let's say starting from here, and see how things work. For now, it won't be 100% clear why we need the reserve and commit commands, but when I talk about persistent notifications, that should become clearer.
B: Here, "publish reserve" means that we make a reservation for a notification. We're not sending it yet, but we're giving it all the information we need in order to create the notification. What kind of information? Well, we need to tell the notification mechanism where we're coming from, because only the calling function knows that, and we also need to provide all kinds of information about the object and the request, because, as I said, when we send the notification over, the filtering needs to know it.
B: It gets all the information we need about the object and the transaction. The first thing we do is get the topics: as we said before, when we did the REST provisioning, we stored the topics, the actual notifications, in an object.
B: These are the bucket topics. The bucket topics, as we said, is a different name for notifications, so those are the topics together with their filters for this specific bucket. And we know which bucket we're talking about, because this is information we got from the message.
B: So you see we have this bucket here, we know what the bucket is, and now we read the topics. Now, maybe there's nothing there, and that's fine; but if we did find topics associated with this bucket, we need to go further and figure out whether we actually need to send notifications for them. So we go through all those configurations of those bucket topics, and, if you remember what you've seen before, those have filters, and they also have the name of the topics associated with them. We need that because later on, if we want to send somewhere, we need to know where to send to. But we'll start with the filtering.
B: Here there's the first kind of question: do we have persistent notifications? Persistent notifications are asynchronous notifications that are not sent immediately; they're sent later on, so we're going to skip that for now. And if this is not a persistent notification, then this whole process of reserve and commit is not very useful: the only thing we do here is check, if we found that this notification is relevant, that it passed the matching for the filter.
B: Then we store it inside this data structure, saying: later on, when you commit, just send it. If we're sending asynchronously there's a whole other thing to do here, but we're going to skip that for now.
B: So for the synchronous, regular notifications, we loop through all the notifications, the bucket topics, here, and check whether they match, which means: if the parameters of the request, and the tags, names, events, and all that stuff, match the filter that was configured, then this one is good. If it doesn't match, we just go to the next notification; maybe that one matches. If multiple match, we keep multiple of them, because that can happen for a single op.
B: Say, for one upload, you have different filters, and those filters send to different endpoints; this is valid, this is okay. So in the regular, non-persistent notifications, we just put those configurations in the list.
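The matching loop of publish reserve might be sketched like this (a simplified stand-in, not the actual C++; only event type and a prefix/suffix key filter are modeled):

```python
def publish_reserve(bucket_topics, event_name, key):
    # go through every configured notification; keep each one whose event type
    # and key filter match, since one op can fan out to several endpoints
    pending = []
    for notif in bucket_topics:
        if event_name not in notif["events"]:
            continue
        f = notif.get("filter", {})
        if f.get("prefix") and not key.startswith(f["prefix"]):
            continue
        if f.get("suffix") and not key.endswith(f["suffix"]):
            continue
        pending.append(notif["topic"])
    return pending  # later, publish commit sends to each of these
```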
B: If we go back to the ops: later on, this reservation object is going to hold all the notifications that matched the filter. What happens here is the actual op, doing the actual work; and after the actual work ends, somewhere down here, after we've finished doing the upload or whatever we actually did with the op...
B: Then we do the publish commit. Here it may seem a bit odd that we split those two functions apart, because the first one went through all the topics, read the configurations of this bucket, went through all the filters, checked which filters fit, and stored those in a list here; and then, when we do the actual commit, we actually send it.
B: And then, again, if it's persistent, we don't actually do the sending, because somebody else does the sending, so we skip that for now. But if it's not persistent, we actually want to send it. So the first thing we do is create an endpoint: we've stored everything that was relevant in the list, we have the configuration of the endpoint, and we create the endpoint. Now, from our perspective, we don't care what kind of endpoint it is.
B: It just has a function called "send to completion", that's it. But the actual creation can create different kinds of endpoints, and this really depends on the configuration: the endpoint could be an AMQP broker, or a Kafka broker, or an HTTP endpoint, and in the future we can have more and more endpoints. So this is a different subsystem that we have for endpoints, and a different file that handles them. But let's say that this factory function created the endpoint for us.
B: Once we have the endpoint, the only thing we need to do is send the notification to the endpoint. So whatever mechanism we have there for sending is going to be used here.
B: This is asynchronous sending in the sense that it uses the yield mechanism to make sure we're not blocking here until we get the answer. I mean, we are blocking this specific coroutine: inside the ops, assuming you're using the Beast frontend, every op has its own coroutine, and this is synchronous inside that coroutine, but we're not blocking other coroutines that could be running and doing other things at the same time.
B: So inside this "send to completion async", it means that, with whatever mechanism we have, let's say Kafka, we send to Kafka and wait here until we get the answer from the Kafka broker that the notification is safe and sound inside Kafka; but we're not going to block other ops running in parallel and doing other things. That is what "send to completion async" means.
B: It does mean that we're synchronous here, in this specific coroutine, and this is one of the reasons we had to add the persistent, asynchronous notifications. For example, if the Kafka broker is down, we're going to wait here until we get a timeout, and that could take some time; it means that, even though other requests can go on freely, this specific request is really going to wait a long time here.
B: So this is why we added persistent notifications; I'm going to explain those later. But overall, the basic flow here is that we have the publish reserve function, and the main thing it does, for non-persistent notifications, is go through all the notifications, the bucket topics, get their filters, see whether those filters match, and if they match, put everything in this list. Later on, when we're done with the actual op, writing the object or deleting it or whatever, we call the publish commit, and publish commit goes through everything that was stored here and, again, in the non-persistent case, creates those endpoints and sends the notifications in an asynchronous way, or rather synchronous-per-coroutine way.
B: Now, I'm not sure I'll have time to look at everything, so I'd prefer to have a little look at the persistent notifications, because that's a whole different mechanism.
B: I don't know if you remember, but at some point we did the configuration of the topic. If we look at that: this is the function that gets the create topic from the REST message, and one of the parameters of creating a topic is whether this topic is persistent or not.
B: To learn more about persistent notifications, I have a blog post. It's not really about the code; it's about how the persistent notification mechanism works and all kinds of things about it. I'm just going to paste it here, and if you want, you can have a look. But if somebody defined the endpoint as persistent, it means that the coroutine that initiates the sending, the publish commit command, is going to return immediately.
B: It's not going to wait for the actual ack or nack answer from the Kafka broker, or AMQP broker, or anything; it returns immediately, which is better for the clients, because they're not going to wait in case the broker is down. It's also better because we have a mechanism for retries: since this is done asynchronously, there's a mechanism there that does retries. And the last thing that is good, and this kind of explains why we need the reserve and commit, is that we get some kind of atomicity.
B: Look at the case of non-persistent notifications: let's say we fail now. All the lines before here uploaded the object; the object is already in the system, everything worked perfectly fine. But what if we fail here? If we fail here, it's too late for rollback.
B: We can't say "oh, we failed the entire action", because the object is already in Ceph; so we can't say we failed, but the notification is not sent. On the other hand, that's not good either, because if somebody configured bucket notifications, it means they want notifications there. So it's a bit of a problem, and it's not really solvable if this is done in one action. But if we split it into a reserve and a commit action, we get better guarantees for the user.
B: If the reserve action fails, which means that no notification is going to go out, because, for example, we use a queue there and the queue is full or whatever, then we fail the entire action. This is good, because it happens before the actual object was put into Ceph: if somebody wants notifications and they're not going to get them, we fail the action, so whoever tries to upload an object gets an error back.
B: But if the reserve was successful, it means that even if Kafka is down, or there's some other issue or error, we have stored this notification inside the queue, and we're going to do our best, sometime in the future, to actually send it. The fact that we did this commit means that we're going to retry the message; even if things are down, even if there's currently a problem, it is still going to be sent, so we're not lying to the user when we reply with a 200.
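A toy model of those reserve/commit/abort guarantees (the capacity handling and names are assumptions for the sketch; the real implementation is a cls queue inside Ceph):

```python
import collections

class PersistentQueue:
    # Two-phase sketch: reserve before the op, commit after it succeeds,
    # abort if the op itself fails.
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.reserved = 0
        self.queue = collections.deque()

    def reserve(self) -> bool:
        if self.reserved + len(self.queue) >= self.capacity:
            return False          # full queue fails the op before the object is written
        self.reserved += 1
        return True

    def commit(self, event) -> None:
        self.reserved -= 1
        self.queue.append(event)  # a later worker retries delivery from here

    def abort(self) -> None:
        self.reserved -= 1        # op failed; release the reserved slot
```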
B: So this is why we have this split between the reserve and the commit. If you want to look further into the details of how this works: here, again in the rgw_notify file, we now look at the if cases that we skipped before. For example, in the reserve case, if the configuration of the endpoint says it's persistent, what we actually do is reserve a spot on a cls queue; we have a special queue.
B: And when we do the publish commit function, we look into this if saying: what if this is a persistent endpoint? Then, instead of creating the endpoint and calling send to completion, as we've seen here, we're not going to do that. We know we made a reservation on the queue, and now we can commit this reservation.
B: I mean, we have a couple of checks here, and there's some issue with the size, but overall what we do here is commit the reservation, which means we write the notification into the queue.
B: Now we've just written to the queue; we didn't send to Kafka or anything here, and this is why we have a whole other mechanism that takes things from the queue and sends them over; we'll look there in a second. There's also an abort. The reason we may use an abort is, let's say the action that we did in the op failed for some other reason.
B
So it could be that, you know, RADOS failed to put the object. In this case we want to make sure that we do proper cleanup of the reservation, and this is why we abort the reservation.
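The reserve/commit/abort lifecycle described above can be sketched as follows. This is a conceptual model in Python, not the actual rgw_notify C++ code; the class and method names are illustrative assumptions.

```python
class PersistentNotificationQueue:
    """Conceptual model of the persistent-notification queue:
    reserve space first, then either commit the notification
    (making it durable for later delivery) or abort to clean up."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.reserved = 0   # space held by open reservations
        self.entries = []   # committed notifications awaiting delivery

    def reserve(self, size):
        # Fail the op up front if the queue cannot hold the notification:
        # the client gets an error instead of a silently dropped event.
        if self.reserved + len(self.entries) + size > self.capacity:
            raise RuntimeError("queue full: rejecting the op")
        self.reserved += size
        return size  # reservation handle (just the size in this sketch)

    def commit(self, reservation, notification):
        # The op (e.g. the object PUT) succeeded: make the notification
        # durable. Delivery to Kafka/AMQP happens later, from the queue,
        # so replying 200 to the client is not a lie.
        self.reserved -= reservation
        self.entries.append(notification)

    def abort(self, reservation):
        # The op failed (e.g. RADOS could not write the object):
        # release the reserved space without enqueueing anything.
        self.reserved -= reservation


q = PersistentNotificationQueue(capacity=10)
r = q.reserve(1)
q.commit(r, {"event": "ObjectCreated:Put", "key": "photo.jpg"})
r2 = q.reserve(1)
q.abort(r2)  # the underlying op failed, so no notification is stored
```

The point of the split is that the reservation happens before the op, while the commit or abort happens after, depending on whether the op succeeded.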
B
Since we've run out of time, I'll have a quick glance at an example of Kafka and AMQP.
B
Just to show you around the files a little bit, and then I'm going to leave some time for questions. The actual mechanism of sending those notifications I won't be able to cover now.
B
The Kafka and AMQP ones have very simple APIs, if you look at the headers. They have an init and a shutdown, because they have global managers, and I just want to init them and shut them down from the main function. So if you look at rgw main, you would see the init and shutdown of Kafka and AMQP. What is more interesting here is this publish-with-confirm, and there are a couple of other functions here that you can use.
B
So if we look at the Kafka one, we'll start with the publisher.
B
So this is the function that is being called, and what this function does is put things into a queue. What it puts into the queue is the message, which is the notification, this JSON structure that we need to send, and the callback. The callback is what releases the yield from the coroutine in the op's context.
B
So this finishes quite quickly, because it's actually asynchronous: it just puts things into a queue, and there is the internal thread of the Kafka manager that reads from the queue, sends the messages over the Kafka library, and then receives answers. It's all asynchronous; the sends and receives are bulked, and then it releases those callbacks, which lets the op coroutine continue with its work. And it's pretty much the same for Kafka and AMQP.
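The asynchronous publish path just described can be modeled like this: the caller enqueues a (message, callback) pair and returns immediately, while an internal manager thread drains the queue, delivers each message, and fires the callback on acknowledgement. This is a hedged Python sketch of the mechanism, not the real librdkafka-based code; the names are made up for illustration.

```python
import queue
import threading


class AsyncPublisher:
    """Sketch of the Kafka/AMQP manager thread: publish_with_confirm()
    only enqueues; the worker thread delivers and fires callbacks."""

    def __init__(self, send_fn):
        self.send_fn = send_fn  # stands in for the broker client library
        self.q = queue.Queue()
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def publish_with_confirm(self, message, callback):
        # Fast path: just enqueue and return, so the calling coroutine
        # can yield instead of blocking on the broker round-trip.
        self.q.put((message, callback))

    def _run(self):
        while True:
            message, callback = self.q.get()
            if message is None:               # shutdown sentinel
                break
            status = self.send_fn(message)    # deliver via the library
            callback(status)                  # release the waiting op

    def stop(self):
        self.q.put((None, None))
        self.worker.join()


acks = []
pub = AsyncPublisher(send_fn=lambda msg: "ok")
done = threading.Event()
pub.publish_with_confirm('{"event": "ObjectCreated:Put"}',
                         lambda status: (acks.append(status), done.set()))
done.wait(timeout=5)  # the op would yield here until the callback fires
pub.stop()
```

The queue is what decouples the request-handling coroutine from the broker's latency.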
B
The same kind of concept: when you do the publish-with-confirm, you put things into the queue, and on the other side of the queue there's a thread that empties the queue, sends messages, waits for answers, gets the answers, unlocks those callbacks, and so on and so forth. There's also some connection management: you can have multiple Kafka brokers, and they are managed here. So every time there's a definition of a new connection...
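The connection management mentioned above (create a connection once, then reuse it for later sends) can be sketched as a cache keyed by broker address. This is a conceptual Python model; the names are illustrative, not the actual manager code.

```python
class ConnectionPool:
    """Sketch of broker-connection reuse: one connection per broker
    URL, created on first use and reused for later publishes."""

    def __init__(self, connect_fn):
        self.connect_fn = connect_fn  # stands in for the real connect
        self.connections = {}
        self.created = 0              # counts actual connection setups

    def get(self, broker_url):
        # Reuse an existing connection when we already have one...
        if broker_url not in self.connections:
            # ...otherwise create it once and cache it.
            self.connections[broker_url] = self.connect_fn(broker_url)
            self.created += 1
        return self.connections[broker_url]


pool = ConnectionPool(connect_fn=lambda url: {"url": url})
c1 = pool.get("kafka://broker1:9092")
c2 = pool.get("kafka://broker1:9092")  # same object, no reconnect
c3 = pool.get("kafka://broker2:9092")  # second broker gets its own
```

This is why sending to the same endpoint repeatedly doesn't pay the connection-setup cost each time.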
B
You create the connection, and then if you just send something over that connection, you reuse it; you don't need to recreate it, and so on. I'm sorry I didn't go through everything, but I want to give five minutes for questions, if you have any.
B
So there's this file called pubsub push, and this is what creates those endpoints. If you remember, inside the case where we're not using persistent notifications, there was something that creates the endpoints, and it has all kinds of endpoint types. This file really configures them: there's an HTTP endpoint here, and this is where send-to-completion is defined.
B
There is the AMQP endpoint, and there is also the Kafka endpoint. So those endpoints are defined here, and here, at the end, we have this factory that creates them. This is the create function that creates the endpoint. Now, it doesn't really create the connection; all of that is done inside the Kafka and AMQP (or whatever) managers.
B
But this one creates the endpoint type and returns it, and if anyone wants, in the future, to add a new type of endpoint, this is the place where you glue them in.
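The factory just described (dispatch on the endpoint's scheme and return the matching wrapper type) might look like this in sketch form. The class names here are made up for illustration; the real code is C++ in the pubsub push file.

```python
class HTTPEndpoint:
    def send_to_completion(self, event):
        return f"http delivered {event}"


class AMQPEndpoint:
    def send_to_completion(self, event):
        return f"amqp delivered {event}"


class KafkaEndpoint:
    def send_to_completion(self, event):
        return f"kafka delivered {event}"


def create_endpoint(uri):
    """Factory: pick the endpoint wrapper from the URI scheme.
    A new endpoint type would be glued in by adding an entry here."""
    scheme = uri.split("://", 1)[0]
    types = {
        "http": HTTPEndpoint,
        "amqp": AMQPEndpoint,
        "kafka": KafkaEndpoint,
    }
    if scheme not in types:
        raise ValueError(f"unknown endpoint scheme: {scheme}")
    return types[scheme]()


ep = create_endpoint("kafka://broker:9092")
```

Note that, as in the real code, creating the endpoint object here says nothing about connections; those live in the managers.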
B
So you write your endpoint in whatever mechanism you want, and then you define here a wrapper for the endpoint that you have. The main thing you need here is this send-to-completion async function. This is the function that, in the case of Kafka, does the publish-with-confirm to Kafka, and also does the yielding and waiting on the callback that we've seen before, the one Kafka releases when it gets the ack from the other side.
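The send-to-completion behavior just described can be sketched as: publish asynchronously, then wait (in RGW, yield the coroutine) until the ack callback fires. A hedged Python model, using a threading event to stand in for the coroutine yield; all names are illustrative.

```python
import threading


def send_to_completion(publish_with_confirm, message, timeout=30):
    """Publish a notification and wait for the broker ack.
    In RGW this wait is a coroutine yield; an Event models it here."""
    acked = threading.Event()
    result = {}

    def on_ack(status):
        # Fired by the manager thread once the broker acknowledges.
        result["status"] = status
        acked.set()

    publish_with_confirm(message, on_ack)
    if not acked.wait(timeout):
        raise TimeoutError("no ack from broker")
    return result["status"]


# A fake publisher that acks immediately, standing in for the manager:
def fake_publish(message, callback):
    callback("ok")


status = send_to_completion(fake_publish, '{"event": "ObjectCreated:Put"}')
```

The key point is that the op only continues after the ack callback has fired, which is what ties the endpoint wrapper back to the asynchronous manager.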
B
So this is where those things are glued together. Yeah, so that's pretty much it. If you have any specific questions or you want more details, don't hesitate: please send an email and I'll try to get back to you.