From YouTube: AsyncAPI + Spring Cloud Stream = Event-Driven Microservices Made Easy - Giri Venkatesan, Solace
Description
AsyncAPI Conference 2021 - Day 3
18th November 2021
The AsyncAPI specification allows you to define your Event-Driven apps. The Spring Cloud Stream framework allows you to implement Event-Driven microservices. In this talk, we will bring the two together and show how developers can go from Design to Code using AsyncAPI documents and the AsyncAPI generator that can generate code using Spring Cloud Stream Binders. At runtime, the Binders enable connection to external middleware like RabbitMQ, Kafka, Solace PubSub+ to name a few. This talk will include a hands-on demo using Java, Spring Cloud Stream, and the AsyncAPI java-spring-cloud-stream-template.
With the advent of open standards and modern development frameworks, building complex systems is becoming easier by the day. In this session, we will explore how AsyncAPI and Spring Cloud Stream make the development of event-driven microservices easy and developer-friendly. Hi, my name is Giri Venkatesan. Throughout this session, I will take a real-world scenario as a use case and explore how its requirements are met by the use of AsyncAPI and Spring Cloud Stream.
Let's consider the smart city, a contemporary concept that we are all familiar with. A smart city is a framework that utilizes information and communication technologies to develop, deploy and promote sustainable practices to meet urbanization challenges. It requires smart solutions to manage the needs around energy, mobility and transportation, health and education, and the various other aspects that make up a smart city.
As we speak, we have numerous successful, functioning smart cities around the world: New York City, Singapore, Amsterdam, Helsinki, Zurich and many more. A key enabler for the success of a smart city is open data.
Let's talk about open data a little bit. Historically, governments, enterprises and individuals have held their data close, sharing as little as possible with others. True impact can be realized only when all the participants in an ecosystem share data and information, leading to the ability to make informed decisions jointly and in real time, and a smart city definitely needs open data.
Open data is, in a sense, redefining the entire digital transformation across the ecosystem. For our session, we'll be examining the needs of a specific subset of a mythical smart town where APIs are a way of life, to demonstrate AsyncAPI and Spring Cloud Stream and the impact they have on event-driven architecture (EDA).
Though the smart-town requirements are exhaustive, for this session we'll be looking at a subset involving temperature monitoring and management. This involves receiving and processing temperature readings from various sensors deployed across the town: parks, public facilities, vehicles and so on. We'll also be building services to simulate the temperature sensor readings and to generate operational and aggregate alerts as events. With that said, let's analyze the possible interactions between the services and applications. In this context, the controller interacts with the energy management system to effect temperature changes.
Obviously, that requires a synchronous request/response interaction based on REST APIs. The rest of the interactions can be event-based, meaning asynchronous, and can use pub/sub operations mediated through an intermediary, which is what we see here: an event broker. In this case, events are published around temperature readings and various alerts; services and applications create their subscriptions and consume these events when the broker delivers them. The event broker mediates the interaction between the publishers and subscribers, and offers decoupling and subscription-based event routing.
So this is what makes up an event-driven design for a particular problem around temperature monitoring and management. Okay, here's a fun question for you: APIs are only asynchronous in EDA. Is that a fact, or is it a myth? You have 10 seconds, so please give it some thought and let's see if we get it right.
All right, it is a myth. The simple reason is that a digital transformation journey is nothing but the adoption of event-driven architecture on top of existing architectures. This could include a harmonious interaction between microservices and applications in an ecosystem, so it can be asynchronous as well; it can be both synchronous and asynchronous.
Let's talk about AsyncAPI. AsyncAPI is an open-source initiative that seeks to improve the current state of event-driven architectures. The goal is to make working with EDAs as easy as it is to work with REST APIs, from documentation to code generation, and from discovery to event management. Most of the processes that you apply to your REST APIs nowadays are applicable to event-driven asynchronous APIs too. The AsyncAPI specification settles the base for a greater and better tooling ecosystem for EDAs. So that's the foundation, the purpose of AsyncAPI.
All right, let's look broadly at APIs and API specifications. API is nothing but "application programming interface". APIs make it easier to exchange information between applications: they tell you what kind of information is to be passed as a request, what to expect as a response, and so on. API consumers, at any point in time, don't need to know any part of the implementation; it's completely hidden from them. This is true for both synchronous and asynchronous APIs. Operation-wise, in a synchronous context you will be invoking the APIs, whereas in an asynchronous context you will be either publishing or subscribing to events. Only the operational terminology changes, but they are still APIs.
In essence, building APIs can be tough and time-consuming, and the absence of standards and tools can make it even more difficult. Here come the specifications: we have the OpenAPI specification for synchronous APIs (a.k.a. REST APIs) and the AsyncAPI specification for asynchronous APIs, in this case event APIs. Extending this further, API standards and specifications help developers with tools to generate documentation and code, as well as to access a repository for lookup at runtime.
Let's see how asynchronous APIs draw a parallel to synchronous APIs. Both of them deserve a portal: in the synchronous case we call it an API management portal, and in the asynchronous case we should call it an event management portal, because the interaction is completely based on events being published and subscribed. Both kinds of APIs need a centralized portal for managing them. It starts with cataloging, versioning and maintaining the APIs: essentially a public repository that can be queried at runtime.
A specification is also required: we have OpenAPI for synchronous APIs and AsyncAPI for asynchronous APIs. And when it comes to tooling, Swagger is a tool for generating code from the specification for REST APIs, and the AsyncAPI generator is a tool that can be used to generate code for asynchronous APIs.
When you look at the generated code, here's where the difference comes in. A REST API is a simple client/server interaction over a single protocol, HTTP, so Swagger generates the client stubs and server stubs from the specification. AsyncAPI, on the other hand, generates code for widely used frameworks like Spring, Node.js, Python and so on, and it also supports various protocols: AMQP, MQTT, WebSockets and more. The simple reason for the multi-protocol support is that the underlying broker offering the publish/subscribe capability might support, or even require support for, various protocols, so that automatically gets bubbled up and supported at the code level.
All right, now another question. The statement goes like this: I will have to redesign my REST-heavy architecture from scratch to adopt EDA.
Taking a shot at it: it is a myth. The simple reason is that REST-heavy interaction and asynchronous interaction can coexist. Nothing has to be dropped; nothing has to be adopted from scratch. A digital transformation journey is basically adopting a new architecture with its legacy infrastructure still in place.
The enabler for that is AsyncAPI. All right, on the right-hand side, let's explore a real-world scenario, an e-commerce application, and see how the synchronous and asynchronous natures of interaction come into play. The applications we are looking at are an order management system, warehouse and product catalog management, and marketing operations. These already exist in the infrastructure and have already exposed REST APIs for applications and services to interact with. Now, every single interaction might end up generating an event. For example, a new order placed on the order management system could publish an event called "received", and then, when the order is processed, it could publish an event called "processed", and so on.
When you look at this picture, these products could be internal or external. A delivery service provider needs to know when an order is shipped or processed so that it can pick the order up and initiate the shipping. Similarly, price comparison could be an external product that needs to operate only on the products promoted to "ready to sell" status, so that "ready to sell" event is published from the product catalog. So you can see that here.
We'll have a quick demo of AsyncAPI. For the demo, we will check out the standard tutorial/demo on the AsyncAPI website. It's about a streetlight system that publishes environmental lighting values to a broker. The technology we'll be using is Node.js for the code and Mosquitto as the broker providing the pub/sub service. It simply publishes the event, basically fire-and-forget; there's no other processing involved, for simplicity's sake. And here is the AsyncAPI specification file.
You can assume it's pretty much hand-coded. This section clearly indicates the AsyncAPI version, followed by more details on the application. The server construct indicates how to make a connection to the broker: the Mosquitto broker that can be located at this URL, using MQTT as the protocol, and so on. The channels are nothing but a collection of topics that this application can publish or subscribe to; here is a channel name, which is nothing but a topic name. An event can be published to this topic with a message whose structure can be like this: the message payload is of object type and contains properties like id, lumens and sentAt, with their attributes, descriptions and so on. One thing to note: operationId is an attribute that gives a hint to the code generator about what to call the function that publishes this event.
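For reference, here is an abridged sketch of that Streetlights specification, reconstructed from the public AsyncAPI tutorial; the exact version numbers and URLs in the talk's demo may differ:

```yaml
asyncapi: '2.0.0'
info:
  title: Streetlights API
  version: '1.0.0'
servers:
  mosquitto:
    url: mqtt://test.mosquitto.org
    protocol: mqtt
channels:
  light/measured:                    # channel name, i.e. the topic name
    publish:
      operationId: onLightMeasured   # hint for the generated function name
      message:
        payload:
          type: object
          properties:
            id:
              type: integer
              description: Id of the streetlight.
            lumens:
              type: integer
              description: Light intensity measured in lumens.
            sentAt:
              type: string
              format: date-time
              description: Date and time when the message was sent.
```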
So once the AsyncAPI spec is ready, we can put it through the standard process. I'm just going to gloss over certain steps, like installing the AsyncAPI code generator, creating the AsyncAPI spec as a file, and then running the code generator, which is nothing but `ag`, the tool name, followed by the YAML file and then the template, e.g. `ag asyncapi.yaml <template> -o <output-dir>`. The template can vary depending on what output you want: in the case of Node.js it's the Node.js template, in the case of Spring it will be the Spring template, and so on. Then there are the output directory and a few specifications around the server type and the like.
This will generate the full-fledged code for the Node.js framework, and it's available for us. At that point, you can simply start the generated code. Of course, if you want to implement your custom business logic for event processing, you can go ahead and do that; otherwise, the standard functionality will log the messages received by the application. So you start the application and continue to publish, and you should see events being exchanged: subscriber and publisher in action. Let's go and check it out; I'm going to switch context here. I have three windows already open and ready to go. This is the command that was used to generate the Node.js project code base, which is already done, as I mentioned. Here is the code; I'm going to simply start the application.
Okay, it made a connection to the MQTT broker (sorry, the Mosquitto broker) and subscribed to the topic light/measured. In this window, I'm going to publish an event on the topic light/measured, with the specified host information and the message payload in the form of JSON. Let me just publish it, and you can see the published event is received here. Similarly, I can also create another MQTT subscriber on the same topic.
Now on to Spring Cloud Stream. It provides components that abstract the communication with the many message brokers out there away from the code. It uses features of Spring Boot, Spring Cloud Function, Spring Integration and Spring Messaging, making it a part of the broader Spring stack. One of the key benefits of using Spring Cloud Stream is that, instead of having to learn the messaging APIs of different broker systems, developers just have to understand the communication model that Spring Cloud Stream supports.
There are three supported models, but the support for each of them varies from binder to binder. The first one is publish/subscribe, the traditional one: subscribers are independent from each other and receive events in order. Then there are consumer groups, basically a fan-out and load-balancing option across multiple consumers.
To sum it up, Spring Cloud Stream is a framework for building highly scalable event-driven microservices that are connected to a messaging system, making use of the publish/subscribe infrastructure. To see more on that: the other key component of Spring Cloud Stream is Spring Cloud Function. Spring Cloud Function basically promotes the implementation of business logic as Java functions.
By doing so, we decouple the development lifecycle of business logic from the specifics of the runtime broker, so that the code can run as a web endpoint, a stream processor or a task, and so on. It also allows us to enable Spring Boot features such as auto-configuration, dependency injection, metrics, etc. So let's look at the message exchange contract expressed as Java functions.
We have three functions. The first one is Supplier: nothing but an event source; in messaging terms, it would be a producer or publisher of events. The next one is Consumer: that will be an event sink; it would be a subscriber or consumer of events. And the third one is somewhat unique: a Function, which is nothing but a processor.
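As a plain-Java sketch (outside of Spring; the event payloads here are made up for illustration), the three contracts map directly onto the standard `java.util.function` interfaces that Spring Cloud Function builds on:

```java
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;

public class ExchangeContracts {
    // Supplier: an event source, i.e. a producer/publisher of events
    static Supplier<String> temperatureSource = () -> "{\"temperature\": 21.5}";

    // Function: a processor that consumes one event and produces another
    static Function<String, String> toAlert = reading -> "ALERT for " + reading;

    // Consumer: an event sink, i.e. a subscriber/consumer of events
    static Consumer<String> alertSink = System.out::println;

    public static void main(String[] args) {
        // Wire the three together the way a binder would at runtime
        alertSink.accept(toAlert.apply(temperatureSource.get()));
    }
}
```

In a real Spring Cloud Stream application, these would be registered as beans, and the framework, not your code, wires them to broker destinations.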
That's the methodology for implementing your messaging requirements. Now let's look at the key components of Spring Cloud Stream. A binder is really the one that makes a difference and makes this framework useful. Binders provide an abstraction layer between your code and the messaging system through which the events are flowing. There are many binders; a few popular ones are the binders for RabbitMQ, Apache Kafka, Amazon Kinesis, Google Pub/Sub, Solace PubSub+, Azure Event Hubs and so on.
Last, the message: the data structure used to communicate through the bindings, between your code and your message broker. It's basically the mechanism by which information is exchanged via a channel between the server and the application. It must contain a payload and can optionally also contain headers.
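A minimal plain-Java sketch of that structure (an illustrative stand-in, not Spring's actual `org.springframework.messaging.Message` type):

```java
import java.util.Map;

public class MessageSketch {
    // Illustrative message shape: a required payload plus optional headers.
    record SimpleMessage<T>(T payload, Map<String, Object> headers) {}

    public static void main(String[] args) {
        SimpleMessage<String> msg = new SimpleMessage<>(
                "{\"temperature\": 21.5}",                  // serialized payload
                Map.of("contentType", "application/json")); // optional headers
        System.out.println(msg.payload());
    }
}
```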
The payload contains data as defined by the application, which must be serialized into a format that's understandable by the application and/or the broker: JSON, XML, Avro and binary are the formats. All right, we'll have a small Spring Cloud Stream demo. For it, we'll be using the getting-started tutorial from the VMware Spring Cloud Stream web pages.
What we're going to do is download a Docker image containing the RabbitMQ and Kafka applications and run it, and when the container comes up, we obviously have the message broker, RabbitMQ, ready for use. We'll create a Spring Starter project with dependencies on Spring Cloud Stream and the RabbitMQ binder, and with that we'll create a simple Spring Boot application and start adding a message exchange contract: nothing but the Java functions.
Spring Cloud Stream will interpret this as a subscriber that subscribes to a topic named toUpper, following a certain convention that results in toUpper-in-0 as the topic. So (a) it will create a topic, and (b) it will create a subscriber to that topic. Whenever an event is published to that topic, the underlying message broker will automatically deliver it to that subscriber, which in turn will be consumed by this function.
So technically, if something is published on this topic, we should see an uppercase-converted string getting printed on the console when the Spring Cloud Stream project is up and running. It's a very simple example, but it demonstrates Spring Cloud Stream, Spring Cloud Function and the implementation of the message contract which, in turn, goes through the binder to create the necessary message-broker-specific artifacts, like the topics used, and so on. At runtime, when a message appears, it gets routed to this function seamlessly. In this entire exercise, we'll be focusing on implementing only the business logic.
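The whole business logic of the demo boils down to one function. As a minimal sketch in plain Java (in the actual Spring Boot app it would be a `@Bean` in the application class, and the bean name `toUpper` is what yields the `toUpper-in-0` binding by convention):

```java
import java.util.function.Consumer;
import java.util.function.Function;

public class ToUpperApplication {
    // Transformation step; the bean name "toUpper" drives the binding name.
    static Function<String, String> toUpper = s -> s.toUpperCase();

    // Event sink that prints the converted string, as the demo does.
    static Consumer<String> printer = System.out::println;

    public static void main(String[] args) {
        printer.accept(toUpper.apply("hello broker")); // prints "HELLO BROKER"
    }
}
```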
Okay, so let's get started with downloading the Docker YAML file. There, I am going to download this... got it, and I am going to run it.
All right, it's ready now. Let's go and check the RabbitMQ console, which is available on this particular port. It's ready. Now we go to the Exchanges tab to see if there are any topics. Yes, these are all the existing topics; they are system topics, and there's no user-created topic at this stage. All right, this is good: RabbitMQ is ready. Let's go ahead and create a Spring Cloud Stream project. Let me just make it bigger so that you can see things.
What we're going to do here is create a new Spring Starter project and name it scs-getting-started. We choose the dependencies: obviously we want Spring Cloud Stream, and we also want the binder for RabbitMQ. Having chosen these dependencies, we just create the project.
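The two chosen dependencies correspond roughly to these Maven artifacts (a sketch; versions are managed by the Spring Cloud BOM and omitted here):

```xml
<dependencies>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream</artifactId>
  </dependency>
  <dependency>
    <!-- Pulls in the RabbitMQ binder -->
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
  </dependency>
</dependencies>
```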
Let's open the application main file. We have this main function now; this is the one that's going to be invoked, and at this point we need to implement the Spring Cloud Function to create the consumer. Let's go back and check the documentation of the tutorial: you can see we have done this part, and we just need to implement a consumer. So let's copy this and paste it here. You can see these imports are unresolved; let's just resolve them.
In our case, that subscriber is nothing but the Spring Cloud Function, and it should have received a message. Let's see... there we go: the message got routed to our Spring application based on the subscription it created. So again, going back here, all we did was implement the business logic in this bean function called consumer, and then we followed a naming convention that's suitable for our application.
If we had called it processTemperature or something, it would have created a topic with a different name, and everything underneath. What happens in the message broker is hidden from us as developers.
We would still be looking at it as if we were writing a pure Spring application, using all the features of Spring, like Spring Cloud Function, Spring Cloud Stream, Spring Integration, Spring Messaging and so on. But behind the scenes, Spring Cloud Stream takes care of working with the underlying message broker via the binders made available on the classpath. Next question: EDA can help prevent cascading failures in a microservices architecture. Is it a myth or a fact? Let's wait and see.
All right, it is a fact, for sure. The reason is that in EDA all the interactions are mediated through an intermediate messaging broker, and that creates decoupling, so in the real world, a failure in one service cannot cascade into another service, and so on.
EDA is pretty useful and has found a good footing in various industries, ranging from capital markets to retail banking, manufacturing and so on: wherever there is a need for real-time messaging. As a takeaway when you work with EDA and open standards, here's a small subset: open standards benefit customers and businesses by preventing vendor lock-in, and the whole community-driven model has its own advantages, as the best minds come together to create the standards and build the ecosystem.
Let's revisit the smart town that we checked out in the early part of the session. In this picture, we can clearly see there is REST-based interaction and event-based interaction, both coexisting to bring about a solution for temperature monitoring and management. Let's dive a little bit deeper into each of these microservices.
The first one is a data simulator. In a demo setup, we need a sensor publishing data, so we have a program that publishes temperature readings periodically. It's a microservice that uses the Spring Cloud Stream Supplier message exchange contract to publish temperature readings. One thing to be noted here: all these microservices are implemented using the Solace Cloud PubSub+ broker as the messaging broker.
So many of the terminologies and conventions that you see here are specific to Solace, but pretty much common across messaging brokers as well. What you see here is the topic name on which the temperature reading will be published. The topic name itself carries contextual information like the city, latitude, longitude and the temperature. This comes in handy when you want to create a subscriber based on wildcards and so on.
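A hedged sketch of that idea (the topic hierarchy below is hypothetical; the talk does not spell out the exact Solace topic format it uses):

```java
import java.util.Locale;

public class TopicBuilder {
    // Hypothetical topic hierarchy for illustration only: the topic string
    // itself carries the city, coordinates and reading as context.
    static String temperatureTopic(String city, double lat, double lon, double temp) {
        return String.format(Locale.ROOT,
                "smarttown/temperature/created/v1/%s/%.4f/%.4f/%.1f",
                city, lat, lon, temp);
    }

    public static void main(String[] args) {
        // A subscriber could then use a wildcard such as
        // "smarttown/temperature/created/v1/london/>" to receive every
        // reading for one city, regardless of coordinates or value.
        System.out.println(temperatureTopic("london", 51.5072, -0.1276, 21.5));
    }
}
```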
The next one is a data collector: a microservice that uses the Spring Cloud Stream processor message exchange contract to subscribe to temperature events and publish operational alert events, and you can see the topic on which they are being published. It, too, is a microservice built using Spring Cloud Stream. The third one is the IoT data aggregator. This is a microservice that uses the Spring Cloud Stream processor exchange pattern, where it subscribes to the operational alert events and publishes an aggregate alert.
In terms of processing, it accumulates the operational alerts generated over a period of time, say 30 seconds or a minute, then aggregates the temperature values and publishes a single event.
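A minimal sketch of that aggregation step (the POJO names and fields here are assumptions for illustration; the real service also deals with time windows, priorities and alert types):

```java
import java.util.List;

public class AlertAggregator {
    // Hypothetical stand-ins for the talk's alert events.
    record OperationalAlert(String city, double temperature) {}
    record AggregateAlert(String city, double averageTemperature, int count) {}

    // Collapse the alerts collected during one time window (e.g. 30 s)
    // into a single aggregate event, as the aggregator service does.
    static AggregateAlert aggregate(String city, List<OperationalAlert> window) {
        double avg = window.stream()
                .mapToDouble(OperationalAlert::temperature)
                .average()
                .orElse(Double.NaN);
        return new AggregateAlert(city, avg, window.size());
    }

    public static void main(String[] args) {
        List<OperationalAlert> window = List.of(
                new OperationalAlert("london", 40.0),
                new OperationalAlert("london", 44.0));
        System.out.println(aggregate("london", window));
    }
}
```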
These all operate on the Solace Cloud PubSub+ broker. For the smart town demo, we'll be running pre-built microservices that use the Solace PubSub+ broker and cloud as the messaging provider.
The Solace PubSub+ broker offers an Event Portal that allows developers and architects to collaborate on designing applications, events and schemas; to catalog them; to publish high-value events as event APIs; and to download the AsyncAPI specifications of these published events. At this point, all the tools that AsyncAPI has to offer can be put to use, in terms of validation, code generation and so on.
From the AsyncAPI document, you can generate a Spring Cloud Stream project that uses the Solace PubSub+ binder, producing a complete, ready-to-use microservice application that simply requires the business logic implementation. The Designer is a tool that helps architects design the entire EDA, starting with an application domain, which contains one or more applications and acts like a namespace; each of these applications in turn contains information about the events that it can publish or subscribe to.
A
The
underlying
schema
that
a
message
payload
is
expected
to
carry
at
runtime
is
also
specified
here.
Without
writing
a
single
line
of
code.
This
designer
tool
helps
the
architects
to
design
the
entire
eda
all
the
way
down
to
the
schema
level.
Once
this
is
done,
the
catalog
feature
kind
of
lists
out
the
published,
dispose
all
available
events
and
their
associated
applications
with
their
topics
right.
This
makes
it
searchable
and
queryable,
and
so
on
and
last
but
not
least,
is
the
event
api
product.
A
It
is
nothing
but
the
identified
high
value
events
in
the
ada
can
be
exposed
for
external
access
via
async
api.
You
pick
and
choose
the
events
from
the
88
that
you
want
to
expose
with
the
appropriate
permissions
for
publish
or
subscribe
and
make
it
available
as
a
downloadable.
Async
api
specification
and
with
that
the
async
api
takes
over
and
start
building
code
using
the
tools
start
building
application
and
straight
away.
You
just
have
to
implement
the
business
logic.
Let's quickly review the smart town microservices design. We have three microservices: a data simulator, a collector and an aggregator. The simulator is the one that simulates the sensors by publishing temperature readings; the data collector subscribes to these temperature readings and produces operational alerts; and the aggregator accumulates those operational alerts over a period of time and produces an aggregated alert with an aggregate temperature.
In this case, we have completed the exercise in the Event Portal; let's take a look. We published an Event API Product, which basically exposes these events with a permission set. The Event API Product revolves around these two events, with the permissions as described here, and the AsyncAPI specification is available: you can download it from the portal, or it's made available on a public URL. When you expand it, you get all the necessary details about the constructs that are part of the AsyncAPI specification: channels, servers, messages and schemas. So at this point, let's go straight into the Spring Tool Suite.
In Spring Tool Suite, we have the pre-built microservices, except for the collector; let's review them. When you look at a Spring Cloud Stream project, the runtime behavior is controlled by this application.yml (it could also be application.properties). It contains all the necessary information for Spring Cloud Stream to operate the publish/subscribe activity. You can see here, to begin with, the underlying binder: the type of binder, and then the environment properties for the binder, which let the application make a connection to the message broker and so on; these will be specific to the binder used. Then come the bindings properties: whether each is a publish activity or a subscribe activity (you can tell by whether you see an "in" or "out" in the binding name) and the topic on which the activity takes place. Anything above that is just plain Spring Boot startup parameters.
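A hedged sketch of what such an application.yml can look like for the Solace binder (the binding names, destinations and connection values below are placeholders, not the talk's actual configuration):

```yaml
spring:
  cloud:
    function:
      definition: aggregateTemperature
    stream:
      bindings:
        aggregateTemperature-in-0:      # "in"  = subscribe activity
          destination: operational-alert-topic
        aggregateTemperature-out-0:     # "out" = publish activity
          destination: aggregate-alert-topic
      binders:
        solace-broker:
          type: solace
          environment:
            solace:
              java:
                host: tcps://broker-host:55443   # placeholder
                msgVpn: my-vpn                   # placeholder
                clientUsername: user             # placeholder
                clientPassword: secret           # placeholder
```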
Looking at the simulator's code, we can see that it implements the Supplier (producer) message exchange contract. Whenever it's invoked, it constructs the POJO for the temperature reading, populates the attributes appropriately, and publishes it to the broker. Before publishing, it sets a header property called target destination to the topic name, which is constructed from the city, latitude and longitude values at runtime. Okay, so let's start this application and see where it goes. Sorry, it's already running; I'm going to stop and restart it: cd iot-simulator.
This is going to simulate temperature reading data being published. Yes, it is being published; let's not worry about it, we will figure out a way to verify this. The next one, based on the picture, is the data collector. Let's park that for a moment and call up the aggregator, which is also a completed service; let's review that.
Right, let's take a look at the aggregator microservice and review its application.yml file. You can see the bindings properties clearly call out both input and output: the in and out event names and the topics on which the publish and subscribe activity take place. The function name is aggregateTemperature, and the binder properties are appropriately set, basically to establish the connection to the underlying broker. Now let's look at the application file; just ignore the Flux and Mono used here.
What we are saying here is that we are going to construct a POJO of type AggregateAlert, set it as the payload, and publish a message to the broker. The target destination is going to be the one defined in the application.yml; here it is the aggregate alert "created" (v1) topic, and the payload is going to contain values like the city, priority and type, to be used for analytics. All right, so when this event is published, obviously there will be some subscriber, and in our case, based on the picture, we have a dashboard application.
In our case, we are again going to build a simple web app that subscribes to this event on the broker via the MQTT protocol and just plots the alerts on a Google Map. That one is probably no big deal; we just need to run it and visualize them. The key thing here is the data collector. Let's go back to the AsyncAPI portal, where the Event Portal has published the Event API Product; the corresponding AsyncAPI document is available for download from this location, and it contains all the necessary details.
So with that, let's see if we can generate the code. Here it is: I've already installed the AsyncAPI code generator, and you can review the parameters here: the output folder, and then everything with a -p is a parameter; the view is "provided", the binder is solace, dynamicType is set to header, and then the artifact id, package name and so on. With all these put together, running it generates a complete Spring Cloud Stream project in which you only have to implement the business logic: basically, the bean function.
So, keeping the time constraint in mind, I've already completed that, and I'm going to review it with you here: the alert generator. You can see that in terms of application settings it's pretty much similar to what we saw in the other examples, and it uses the message exchange pattern that we call a processor. Let's look at the application class. When the generator produced the code, it probably wouldn't have had any of these things yet.
The generated function body would have been empty; now we have populated the business logic: basically, to unpack the temperature reading data, check which range the temperature falls in, assign the alert type and severity appropriately based on that, create an OperationalAlert POJO, add it as the payload, and publish it to the specific destination.
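As a sketch of that range check (the thresholds and severity levels here are assumptions for illustration; the talk does not state the actual ranges used):

```java
public class AlertClassifier {
    enum Severity { LOW, MEDIUM, HIGH }

    // Hypothetical thresholds: map a temperature reading to an alert severity,
    // as the collector's business logic does before building the POJO.
    static Severity classify(double temperature) {
        if (temperature >= 40.0) return Severity.HIGH;
        if (temperature >= 30.0) return Severity.MEDIUM;
        return Severity.LOW;
    }

    public static void main(String[] args) {
        System.out.println(classify(42.0)); // prints "HIGH"
    }
}
```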
So now we know that it's already there. Before we run the completed one, let's start the alert aggregator. The expectation is that it should not be able to proceed much further, because it's not going to get any events. Over here we are running a simulator that's simply publishing temperature readings, and over here we are subscribing to operational alerts, which are not yet coming through. Now let's start the completed collector.
It should subscribe to the temperature readings... all right, there we go. Yes, it is getting the temperature reading data and, in turn, is producing operational alerts. There we go: in a period of 10 seconds, it received eight operational alerts, and from those it was able to set the severity as high and the type as high temperature, and publish the aggregate alert event. All right, now this is the window where I can run the web application.
This is going to subscribe to the aggregate alerts and just plot them on a Google Map. Let's see... unfortunately, there's an issue with the Google Maps key, so I could not show it on a live map, but here is a presentational image. When I ran this program with multiple cities publishing their temperature data, you were able to see the aggregated alert data, severity and alert type, all in one place. So, pretty much, what we have done, going back to this picture, is this.
We took care of this one subset of the problem, temperature monitoring and management, and more specifically everything around the microservices, using AsyncAPI and Spring Cloud Stream with Solace as the backing broker. That brings us to the end of this session.