Description
Validating Event Driven Architecture (EDA) with AsyncAPI - Waleed Ashraf, relayr GmbH
Speakers: Waleed Ashraf
Validating Event Driven Architecture (EDA) with AsyncAPI.
https://www.asyncapi.com/ is an open-source initiative to provide a specification for EDA through AsyncAPI. It is based on the OpenAPI Initiative, which also comes under the Linux Foundation.
At relayr GmbH, we started using AsyncAPI for Kafka message validations. This talk is about what we learned from our experience and how you can easily and effectively integrate the specification into your system.
Okay, hi everyone. My name is Waleed Ashraf. I work as a Node.js developer at relayr in Berlin. I'm also a member of the Node.js Foundation and the OpenJS Foundation and have contributed to some open-source projects. So today we are talking about validating event-driven architecture with AsyncAPI.
How many of you already know AsyncAPI? OK, so yeah, not a lot, because it's a relatively new project, but it's very useful for anyone who is using Kafka, RabbitMQ, MQTT, or WebSockets. So how many of you use any of these technologies? Kafka? Yeah. So, the TL;DR: it's Swagger for Kafka; that should give you the idea of what it is. Just keep that definition in mind, we'll talk about it in detail. Let's start.
My name is Waleed Ashraf, and this is how my colleagues pronounce it: "Waleed". And yeah, that's the only joke I have in my slides, so you can laugh as much as you want right now. The rest is pretty boring.
OK, so I work at relayr. We are an IoT company based in Berlin, and we provide customized solutions to clients. We receive data through MQTT or Kafka from their hardware and sensors, we process it on our cloud, and then we show it in fancy dashboards and send them alerts and emails. So, just to give you an idea of what kind of industry we work in, our process is: we have some sensors on machines, then there's a gateway, and then we ingest the data into our cloud. We do processing, analytics, and storage, and then we have dashboards and mobile apps where the client can see what's happening with their devices, etc.
So that puts us in a situation where we have to deal with a lot of asynchronous communication from the devices, which are sending data through MQTT, Kafka, or maybe other event-streaming protocols. So, a little about OpenAPI and Swagger.
Swagger was initiated in 2010. Its schema definition became part of OpenAPI in 2015, because it added so much value and was used by everyone in the industry: it was one of the only ways you could validate what's in your HTTP requests and what you can send in the response. You can define the schema, and the other part was that you could get nice, fancy documentation for all of your HTTP communication. And then we have AsyncAPI. The project started around two years ago. It's being used by Slack, SAP, Salesforce, some companies, but it's not very popular right now.
So the need for this is that the messaging, the async communication we do between microservices, is a contract between two services: one is the publisher and one is the subscriber, and they both need to know what kind of messages one is publishing and the other is subscribing to, so they can validate them. When they receive a message, they can check the body, and on the other side, the producer should know what data it should or should not send.
AsyncAPI also gives you the ability to create documentation which is understandable by humans. It's not just JSON; it's actually nice documentation. Just like with Swagger, you can read about what channels or topics you have and what payloads they have. There's also tooling around how to use the schema definition of your AsyncAPI file for testing and validation: you can test at runtime, in your production environment, whether a Kafka or MQTT message you receive is valid according to the definition you have written. And yeah, it gives you the ability to create nice documentation.
So let me just show you how it looks. You can go to this website: playground.asyncapi.io. You see here I have defined a schema in a YAML file, just like with Swagger, and here I have a nice view of it, just like we see with Swagger files. It has a few attributes: under channels, you can define your topics (for Kafka) or your channels (for MQTT), and the type of the message. I have defined a user event, and for the message you can just use a $ref, just like in Swagger, and define what properties it will have, which of them are required, and what the type is. And here you can see a nice, automatically generated example, and the schema itself: what the keys are and what the restrictions on them are.
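A document along those lines might look like this; a minimal sketch, assuming AsyncAPI 2.0-style syntax, where the channel name `user-events` and the `UserCreated` message with its fields are illustrative, not the exact playground demo:

```yaml
asyncapi: '2.0.0'
info:
  title: User Events Service
  version: '1.0.0'
channels:
  user-events:                 # topic (Kafka) or channel (MQTT)
    publish:
      message:
        $ref: '#/components/messages/UserCreated'
components:
  messages:
    UserCreated:
      name: UserCreated
      payload:
        type: object
        required:              # keys every message must carry
          - id
          - username
        properties:
          id:
            type: string
          username:
            type: string
            format: email      # restriction checked during validation
```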
Let's go back; now you have seen an example. The problem we faced with event-driven architecture was that we had around 30 or 40 microservices written in Node, and all of them were communicating internally through Kafka. We had more than 50 types of messages with different payloads, and not everyone knew what was inside. So whenever you were debugging or creating something, adding some key, you had to ask a developer who had already worked on that type of message, or go check the code yourself and see what's inside.
If you wanted to change it, add a new attribute, or remove one, it was a mess. No one knew everything. We also have some services in Scala, and obviously Scala people are working on them, and the different teams don't communicate much. The backend teams don't communicate much, and even the front-end team doesn't communicate much about what's in the payload, and every time they have to check, they have to look at the code. Or we created markdown files.
We tried to keep them up to date, but the problem with such documentation is that it always gets outdated; no one cares about it. So you have all these problems, and I think that's happening in all microservice architectures where you have multiple services communicating with each other through some of these protocols, and you don't know how to have one source of truth for all the communication between the services.
Some teams are in Munich, some in Berlin, and different people are working on different services; there's no single piece of documentation. We also have the MQTT protocol for the clients' devices, so we need to tell them, in a nicely documented way, what they should send to our cloud, and also validate whether they are sending it the right way or the wrong way.
So previously we were doing something like this: we had a readme markdown file, and someone who knew the codebase would know what's in the payload. For testing, we had schemas hard-coded in the services, where we just wrote down all the properties and matched them using a validation library or something. But this gets outdated, because whenever you want to update it, you have to rewrite the hard-coded message. With AsyncAPI, we created two documents: one for MQTT and one for Kafka messages.
We defined all of our events, just like I showed you: device on/off, temperature, and the internal communications we've created: UserCreated, UserUpdated, UserDeleted, whatever. So it solved our documentation problem, because it gives you a nice view, and it also generates an output file which you can easily share. Let me just show you how it looks in one of our services, which is public.
So this is a bit of an old UI for AsyncAPI, but this is what you get if you want to generate an HTML document out of it. You can see these are the topics you can define on MQTT, and then we have defined all of the payloads and the messages inside them. We can easily share a document like this with our clients, and they'll see what's inside, and we can also validate against this schema inside our services.
So you can see, in a very nice way, it gives you all the messages, the schemas you have used, and the topics you have. So now we come to how to validate and test whether a message is right or not, and which property is missing. We wrote this library, it's open source: asyncapi-validator. It's key-value-based schema validation: you give it the key of your message, you pass the payload, and it will validate it.
You can go to the GitHub repo and just check it out. So I'll just show you how it works. I have required the validator at the top, and then I need to pass it the schema, which is this file. The schema is the same as the one I showed you in the example, for the user event. So I have defined the schema.
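The idea behind the library can be sketched in plain JavaScript; this is a minimal, self-contained illustration of key-based validation (look up a schema by message key, check required properties and a simple email format), not the asyncapi-validator library's actual API, and the `UserCreated` schema here is illustrative:

```javascript
// Sketch of key-based message validation: schemas are keyed by message
// name, and validate(key, payload) checks required properties and a
// simple "email" format rule. The real library builds these schemas
// from your AsyncAPI document instead of a hard-coded table.
const schemas = {
  UserCreated: {
    required: ['id', 'username'],
    properties: {
      id: { type: 'string' },
      username: { type: 'string', format: 'email' },
    },
  },
};

function validate(key, payload) {
  const schema = schemas[key];
  if (!schema) return { valid: false, errors: [`unknown message key: ${key}`] };

  const errors = [];
  // Required-property check, like the "should have required property" error.
  for (const prop of schema.required) {
    if (!(prop in payload)) errors.push(`should have required property '${prop}'`);
  }
  // Type and format checks on the properties that are present.
  for (const [prop, rule] of Object.entries(schema.properties)) {
    const value = payload[prop];
    if (value === undefined) continue;
    if (typeof value !== rule.type) errors.push(`'${prop}' should be ${rule.type}`);
    if (rule.format === 'email' && !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value)) {
      errors.push(`'${prop}' should match format 'email'`);
    }
  }
  return { valid: errors.length === 0, errors };
}

// A username missing the '@' fails the format check, as in the demo:
console.log(validate('UserCreated', { id: '1', username: 'waleedexample.com' }));
// A payload without 'id' fails the required-property check:
console.log(validate('UserCreated', { username: 'waleed@example.com' }));
```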
So it says data.username should match format 'email', and UserCreated is the key. If I go and check the schema (I think it was the username), the format is 'email'. These types are defined as normal JSON Schema definitions, so you can use them just like in a Swagger file, with all the Swagger definitions: type, format, or a pattern. So if I just change it, because I think it's missing an '@', and run it again...
OK, so it's passing. And let's see if there's a required property. The ID is a required property in the message, which is common if you use Kafka or some similar protocol. So let's say one of the messages didn't send an ID and the rest of the payload was okay: it will tell you that this property is required, just like with Swagger definitions, if you use those for validation.
This gave us a really good ability to validate the messages coming from the client side, because we were facing a lot of issues when we didn't know what was inside a message, how to debug it, or how to tell the client what was missing; and it also gives clients nice documentation which they can use. So let's move forward. We started using it in production. We made a private package of our AsyncAPI schemas, our Kafka and MQTT schemas.
We pushed them to a private repo, added the package as a dependency in the services, and then used the validator to validate the messages. We also do it at runtime, in production, for all the messages coming from the client side; but for the internal communication, like Kafka, we only do it when running unit tests or integration tests between the services, not in the live production environment.
So the flow was like this. Previously, we consumed the messages and forwarded them to the relevant service. When we started validating, we validate the message as we consume it, and if the message schema is not valid, we just log the errors; we don't fail the message or send an error back to the client, we just log it and see in the logs if there was something missing.
If it's a valid message, we just forward it to the relevant service somewhere inside the cloud; and even if there's an error and it's not a valid message, we still forward it to the app service anyway. So this was the flow when we started using AsyncAPI, how to get it inside the running production environment: we just validate and log, and the rest of the flow remains the same.
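The consume-validate-log-forward flow described above can be sketched like this; `validate`, `logger`, and `forwardToService` are stand-ins for the real validator call, our logging, and the downstream service, stubbed here so the sketch is self-contained:

```javascript
// Sketch of the flow: every consumed message is validated, validation
// failures are only logged, and the message is forwarded either way,
// so nothing breaks while the schemas are still being tuned.
const forwarded = [];
const logged = [];

// Stub validator: only checks that an 'id' key is present.
function validate(key, payload) {
  return payload.id !== undefined
    ? { valid: true, errors: [] }
    : { valid: false, errors: ["should have required property 'id'"] };
}
const logger = { warn: (msg) => logged.push(msg) };
function forwardToService(message) { forwarded.push(message); }

function onMessage(message) {
  const { valid, errors } = validate(message.key, message.payload);
  if (!valid) {
    // Log only; do not fail the message or send an error back to the client.
    logger.warn(`invalid ${message.key}: ${errors.join(', ')}`);
  }
  forwardToService(message); // forwarded whether valid or not
}

onMessage({ key: 'UserCreated', payload: { id: '1' } });
onMessage({ key: 'UserCreated', payload: {} }); // invalid: logged, still forwarded
```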
However, after going through one or two weeks of this process and seeing all the logs, we saw that sometimes it was our schema that was not right, and sometimes it was the payload the client was sending that was not right. So we made tweaks to the schema definition, and sometimes we communicated back with the client and asked them to change the payload. Once that was done and everything was fine, we stopped forwarding the invalid events.
So the use cases for us were pretty obvious: validating all the communication, and documentation. We also use it for our system tests, and now we've introduced a process: if a developer wants to extend a message, remove a key from an existing message, or wants a different type of channel or topic, you first open a PR in the schema-definition repo, and everyone approves it.
A
The
team
approves
it
and
then
you
start
working
on
actually
making
the
changes
in
the
services,
so
it
gave
us
a
really
nice
way
how
to
implement
or
make
changes
in
the
existing
messages.
Otherwise
it
was
previously
someone
would
be
working
on
the
code
itself
and
then
reviewing
and
people
would
see.
Oh
ok,
this
properties
remove
these
properties
added.
These
are
the
checks
on
it.
So
now
we
first
get
settled
on
the
schema
definition.
A
How
the
messages
will
look
once
that's
approved,
you
start
working
on
the
on
the
code
itself,
so
this
saved
us
a
lot
of
time
in
reviewing
the
PRS
and
actually
developing
new
kind
of
messages.
It's
just
like.
If
you
do
for
the
swagger
first,
you
change
the
swagger
and
introduce
a
new
endpoint,
and
everyone
is
ok
with
it.
Then
you
start
developing
further
on
so
for
the
external
use
case.
it helped us a lot with the clients, because previously we were maintaining our own document to share with them. Now it also gives them a really nice error about which key is not valid and what type of properties it should have, like it should be an email, or it should have a length less than 64, or something. So it was also easy for them to fix things if something was broken.
So when we started working on this, I also started contributing to AsyncAPI, the schema definition. It's getting very widely used in a lot of companies, like Slack, and we just released version 2 of the schema, which has a lot of different properties: you can define custom bindings for protocols, it supports different kinds of schemas, and you can do channel-based validation. We also have a Slack channel for AsyncAPI.
We also have bi-weekly meetings on YouTube, which anyone can join, and it's open source, so you can contribute in any way. We are also working on a lot of tooling around the schema definition right now, like this validator. We are working on generators which will generate code from your defined schema: if you define your schema for different messages, you can generate JavaScript or Java code, code in different languages, from the schema definition.