From YouTube: Using Async API schema to define event driven architecture with AWS SNS - Ayush Goyal, Postman
Description
AsyncAPI Conference 2021 - Day 3
18th November 2021
In this talk, Ayush covers a real-world scenario in which he used the AsyncAPI schema to define an event-driven architecture that used AWS SNS as the principal component of a pipeline for communication between two different teams at Postman. He covers the end-to-end development workflow: defining the contract between producers and consumers using the AsyncAPI schema, which allows both parties to work on their side of the changes independently, and afterwards incorporating SNS in the pipeline to allow multiple consumers to subscribe to the events from a single producer.
So let's start by defining what an event is. An event is any significant occurrence or change in the state of a system, hardware or software. I'm adding hardware here because the changes might include anything from a separate IoT device to a change in a sensor value, things like that. You might have heard about event notifications in different contexts before; an event notification is actually quite different from an event. That's because an event notification is a message sent by one part of the system to notify another part of the system that an event has actually taken place.
So it just acts as a message, a notification, for an event. As the name suggests, an event can carry in-depth contextual information about what has actually happened, or it can just carry an identifier, in which case you have to call the necessary APIs to get more contextual data regarding the event. Now let's come to what an event-driven architecture actually is: it is an architecture which uses events to trigger and communicate between decoupled services, and it is quite common in modern applications built with microservices.
An event-driven architecture has three basic components. First is the event producer: the part where the event actually originates. Then we have the event consumer: the part of the system that has to perform certain actions on the occurrence of the event and needs to be notified of new events. The third is the event router: the connecting link between the event producer and the event consumer. In this architecture pattern, the producer and consumer are completely separated from each other and communicate only through the event router.
For example, take a simple system: a bitcoin price-change notifier that should send an email notification only if the price goes above a certain limit. In this simple system we can have a cron job which fetches the current bitcoin price every few minutes using the necessary APIs and sends it to the router; that will be our event producer. Then we have another service which sends out email notifications. This second service has only a single job: to send email notifications wherever we have configured it to. This will be our event consumer. Finally, we define our router to publish the price-change event to the email notifier.
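The pattern just described can be sketched in miniature. This is an illustrative in-process toy, not real SNS code; the names (`EventRouter`, `email_notifier`, the `price_change` event type) are made up for the sketch.

```python
# Minimal sketch of producer -> router -> consumer decoupling.
# All names here are illustrative; a real system would use SNS as the router.

class EventRouter:
    """Connecting link between decoupled producers and consumers."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, event_type, handler):
        self.subscribers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        # Fan the event out to every consumer subscribed to this type.
        for handler in self.subscribers.get(event_type, []):
            handler(payload)

sent = []

def email_notifier(event):
    # Consumer: its single job is to send the notification.
    if event["price"] > event["limit"]:
        sent.append(f"BTC is at {event['price']}")

router = EventRouter()
router.subscribe("price_change", email_notifier)

# Producer: a cron job would call this every few minutes.
router.publish("price_change", {"price": 48000, "limit": 40000})
router.publish("price_change", {"price": 30000, "limit": 40000})
```

Note that the producer knows nothing about the notifier; adding a second consumer is just another `subscribe` call, which is the property the rest of the talk builds on.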
Now let's talk about the different benefits of using an event-driven architecture. First and foremost is parallel processing. Say that on the occurrence of an event you wish to perform a large number of heavy actions; with an event-driven architecture you can execute them in parallel without any hassle. One can add separate consumers for the actions to be performed, and each will get triggered individually on receiving the event. This also ensures that the failure of one action will not affect another, which might happen if you were using a traditional synchronous system.
Second is versatility in choosing the technology stack. Since producers and consumers are completely decoupled in an event-driven architecture, you can use different software stacks for each, and the same holds true for individual consumers too. One can have a producer which uses, say, Python to generate the events, while a consumer might be written in Node.js for optimization purposes. This is pretty useful for the efficient working of complex systems too, where, say, one part could be written in a high-level language for better abstractions and library support, and another part, the consumer side, could be written in a low-level language for better performance and optimization.
The third is hassle-free cross-team dependency. To explain it, let's take an example. Say in a company there are three teams: A, B and C. They are already using an event-driven architecture to communicate, in which A publishes an event and B and C consume it and perform some actions. Now say a new use case comes up for a team D, in which it needs to perform some actions on the events which A is producing. Since they are already using an event-driven architecture, no work is required from team A; team D just needs to subscribe to the events on the event router, and it can start processing them effortlessly. Apart from this, an event-driven architecture also allows producers and consumers to scale independently.
So if, say, the producer is getting a lot of traffic and a consumer is triggered on only a small subset of events, we can scale the producer aggressively while running a smaller number of instances for the consumer. Now let's talk about what SNS actually is. SNS stands for Simple Notification Service. It is an AWS service which acts as an event router to provide message delivery from producers to consumers. Because of this, an SNS topic also groups a set of producers and consumers together for easier maintenance.
Let's see how this actually works. Here we have a producer which sends an event to SNS, and SNS can then forward this event to various different services. If your whole infrastructure is in AWS, it can easily send the events to Lambda, Kinesis Data Firehose, SQS, or HTTPS APIs which you have. Apart from this, it can also send push notifications, email notifications and even SMS directly through SNS, without using any other external service.
Now, I mentioned that SNS allows you to filter messages, so let's take a look at how message filtering works on SNS. When a producer publishes a message to SNS, along with the payload it can attach some key-value pairs to the message, which will be used for filtering. A point to note here is that you cannot directly use the payload data for filtering, because SNS also allows you to encrypt the payload data for better security.
As you can see, the producer adds some attributes, and the consumer, while adding the subscription, specifies for which values of which attributes a particular event should be sent to it. This filtering logic is completely boolean: a message either matches the filter policy or it doesn't. When it matches, SNS delivers the event; if it doesn't, SNS just skips that consumer and checks the next one.
For a particular message to match: if in the filter policy you have added, say, three different attributes, all of them should match. It's not enough for one or two of them to match; if you added three attributes to a subscription's filter policy and only one or two match while the third does not, SNS will not deliver the message to that particular consumer.
Also, one thing to remember is that matching is exact, without any case folding or other string normalization, so you have to keep that in mind while adding the filter policy. Number matching is also done on the string representation itself, so 300 is not equal to 300.00.
This is a very important gotcha if you are working with SNS. Also, the value of each key in the filter policy, which we define at the consumer's subscription level, is an array containing one or more values; the policy matches if any of the values in the array matches the value of the corresponding message attribute.
So if, in the filter policy for attribute A, you set the array value to [1, 2, 3] and the message attribute has the value 2, it will match. But if the value of the message attribute is itself an array, then the filter policy matches if the intersection of the policy array and the message attribute array is non-empty. This is how subscription filtering works.
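The matching rules just described can be sketched as a small function. This is an illustration of the semantics, not the actual SNS implementation, and the attribute names and values are made up for the example.

```python
# Sketch of SNS attribute-based filter-policy matching as described:
# every policy key must match; a key matches if the message attribute
# value (or, for an array-valued attribute, any of its elements) appears
# in the policy's value array. Comparison is exact string comparison:
# no case folding, and numbers compare via their string representation.

def matches(filter_policy, message_attributes):
    for key, allowed in filter_policy.items():
        value = message_attributes.get(key)
        if value is None:
            return False  # all policy keys must be satisfied
        values = value if isinstance(value, list) else [value]
        # Match = non-empty intersection of policy array and attribute values.
        if not any(str(v) in [str(a) for a in allowed] for v in values):
            return False
    return True

policy = {"event_type": ["monitor", "collection"],
          "event_action": ["monitor_run_finished", "collection_updated"]}

# All keys match -> delivered.
assert matches(policy, {"event_type": "monitor",
                        "event_action": "monitor_run_finished"})
# One key fails -> not delivered, even though the other key matches.
assert not matches(policy, {"event_type": "monitor",
                            "event_action": "monitor_updated"})
# Array-valued attribute: intersection is non-empty -> delivered.
assert matches(policy, {"event_type": ["collection", "api"],
                        "event_action": "collection_updated"})
# String-representation comparison: "300" is not equal to "300.00".
assert not matches({"amount": ["300"]}, {"amount": "300.00"})
```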
Let's look at a diagram to see how this actually happens. (Ignore the ingester here for now.) Say we have two producers, Monitor and Collection. They are two separate producers: Monitor sends events A and B, and Collection sends event C. All these events have message attributes attached; the attribute keys are event type and event action, with three different values across the three events. We have two separate consumers: SQS 1 is our first consumer, SQS 2 the second.
For SQS 1 we have defined the filter policy like this: the event type can be monitor or collection, and the event action can be monitor run finished or collection updated. Similarly, for SQS 2, the second consumer, we have the event type defined as monitor or collection, and the action can be monitor updated or collection updated.
When these three events reach SNS, let's see individually what happens. Say event A arrives at SNS first. SNS will first try to match it against the filter policy of the first consumer. The event type, as you can see, is monitor, and monitor is present in the policy array, so it matches. The event action is monitor run finished, which is also present here, so it matches too. So SNS will send event A to SQS 1.
Now it checks against SQS 2: the event type matches, but the event action is not present in that policy array, hence it will not send this event there. In the case of event B, the action is monitor updated, which matches the policy of SQS 1, so it will send event B to the first consumer; when it checks the second consumer, it matches again, so it sends a copy of event B to SQS 2.
So the producer sends a single event, and it gets fanned out at the consumer level by SNS. Similarly, in the case of event C, the filter policy at SQS 1 doesn't match but the one at SQS 2 does, so SNS sends event C only to SQS 2.
We would then process those events and trigger the relevant integrations. As the producers can be large in number in our case, being different teams at Postman, we wanted the service to be completely decoupled, requiring minimal changes on their side, for faster adoption and less maintenance in the future.
In the future we might need to add more producers to the service, hence we wanted it to be easy and fast for new teams to start publishing their events; it shouldn't require a lot of development work just to enable the publishing of events.
Lastly, there could be other teams which might want to perform some actions on some of the events we are receiving, so to prevent them from reinventing the wheel, we wanted it to be easy for them to start consuming the events we are using right now.
Now I'd like to describe the different approaches we considered for the solution, and then the final approach we decided to go with. This was approach one. Here, the ingester is just a small service which performs some validation checks and adds an abstraction layer for communicating with SNS. All the different teams send their events to the ingester; the ingester validates the event payload, adds the corresponding message attributes, and then publishes the events to SNS. At SNS we have a different queue for each team, and at the end of each queue sits a router, which essentially works as a switch, with the consumers connecting to the router.
The router is connected to the queue, and whenever an event arrives at the router, it checks which consumers want to listen to that particular type of event and then fans it out to those consumers at that level. The technology stack I initially thought of for the router was a Lambda of sorts. But there were a lot of issues with this whole design.
First of all, the problems with the router: it needs constant updates. For any new consumer added, or any existing consumer removed, the router needs to be updated, and there would be an added cost and maintenance overhead for the routers too. So that was one problem. Another is that a consumer has to know which particular queue and router it has to subscribe to. And then there is adding a completely new domain. Say we have four domains right now: collection, monitor, API update and team activity.
Adding a new domain would require a lot of infrastructure changes; we would have to add a new queue and router connected to SNS, which is very much a hassle. So in the end we finalized on this: event producers publish events to the ingester, the ingester publishes those events to SNS, and after that the whole infrastructure is owned by the consumer. If a consumer wants to add a queue, they can add an SQS queue and then their consumption logic.
Or they can add a direct HTTP API subscription to SNS, and the filtering part is completely owned by SNS itself. This provides more autonomy to the consumers, as they can define how they want to ingest the events; and, as I mentioned, since the filtering is done at the SNS level, it removed all the maintenance overheads we had.
So this is the workflow we finalized on for our use case. Now I would like to discuss how AsyncAPI helped us. One of the first and foremost things AsyncAPI added to this whole process was easy review of the payload and the event structure. There was a separate team which would be generating the events, while our team would be consuming those events, so the coordination needed to be precise here.
AsyncAPI helped with that: it allowed the producer and the consumer, in our case, to start working in parallel without any dependency on each other.
The AsyncAPI schema also provides a one-stop repository for future teams, so that they can check the event payload whenever they want to create or add a new subscription, allowing faster adoption of this particular schema.
This is just a depiction of what could have happened if the AsyncAPI schema was not present. The producer side might have been working on their side of the changes, we might have been working on ours, and when it was finally time to merge both, something like this could have happened. AsyncAPI helps us avoid this kind of situation.
Now I would like to describe some essential components of the AsyncAPI schema which we used. First is servers. Actually, before servers there is an info block, which is mandatory for the schema; it contains basic information like the title, the description, and the version of the API the schema describes. Servers allow you to define the different entry or consumption points for the events that will be used for communication.
Here, "production" is just a reference or identifier for the server. Then we have the URL of the server, and the protocol: we used HTTPS, but you can use any supported protocol here, like HTTP, AMQP, MQTT, WebSockets, anything you like. The description just states what this server is: whether it runs in production or is just for testing on stage, that kind of thing. Then we have security: what kind of authentication it will be using.
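As a sketch, an info and servers section along these lines might look like the following. The title, URL and description are illustrative placeholders, not the actual schema from the talk:

```yaml
asyncapi: '2.2.0'
info:
  title: Events Ingestion Service      # illustrative title
  version: '1.0.0'
  description: Ingests events from producer teams and publishes them to SNS.
servers:
  production:                          # reference/identifier for the server
    url: ingester.example.com          # illustrative URL
    protocol: https                    # could also be amqp, mqtt, ws, ...
    description: Production entry point for publishing events.
    security:
      - basicAuth: []                  # identifier resolved under components
```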
The basicAuth here is just an identifier which I used in my schema; how the security itself is defined comes next. As you can see, inside components we have securitySchemes, and here we define what type of authentication we want. basicAuth, as I mentioned, is an identifier; it could be anything, even abc, as long as it matches what is referenced on the previous slide. Here we used type userPassword.
So for me the authentication scheme is user-password authentication, but you can use various other supported authentication schemes, like OAuth2 or API key, just to name a few; there are a lot of supported auth schemes, which you can check out on the documentation page. The description is self-explanatory: it defines, say, how to get the auth credentials, where to look for them, or why this authentication is needed, things like that.
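A sketch of what such a securitySchemes component might look like (the identifier and description text are illustrative):

```yaml
components:
  securitySchemes:
    basicAuth:                # identifier; could be any name, even "abc"
      type: userPassword      # other options include oauth2, apiKey, ...
      description: >
        Credentials are issued per team; contact the platform owners
        to obtain them. (Illustrative description text.)
```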
After that, we have channels and schemas. First, let's talk about the channel object.
It holds the relative path to an individual channel and its operations; the path here is relative to the server. Channels are also known as topics, routing keys, event types or paths. This is what a basic channel object looks like. Here the operation is subscribe; if it's a producer, it could be publish.
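A minimal channel object along the lines described might look like this (the channel and message names are illustrative):

```yaml
channels:
  monitor:                            # path, relative to the server URL
    description: Events emitted by the monitor domain.   # illustrative
    subscribe:                        # would be publish on the producer side
      message:
        $ref: '#/components/messages/monitorEvent'
```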
The message schema is defined in a separate component so that it's more modular. Then we have the schema object. The schema object allows you to define the input and output data types: it can be an object, a primitive, an array, things like that. It can be used directly through a reference object, like we are doing here in the channel object.
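A sketch of how a message and its payload schema might be defined under components and then referenced from a channel (the names and fields are illustrative):

```yaml
components:
  messages:
    monitorEvent:                     # illustrative message name
      payload:
        $ref: '#/components/schemas/monitorEventPayload'
  schemas:
    monitorEventPayload:
      type: object                    # could also be a primitive, array, ...
      properties:
        eventType:
          type: string
        eventAction:
          type: string
```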
We just add the path to the object, and it will be used directly whenever you are, say, generating the HTML or Markdown for a particular schema. So that's it from my side; this was the end of the talk. Thanks a lot for listening. These are my handles on different platforms, if you want to connect.