From YouTube: Communication Design Patterns for Microservices
Description
In this session, we will explain the various communication design patterns for microservices running at scale, including:
- Brief introduction to microservices
- Microservice design patterns
- Communication patterns between microservices using service mesh and API gateways
- Running microservices on the cloud (multi-cloud and hybrid cloud)
- Scaling microservices
A: Hi everybody, my name is Lev Rosenthal and I am with Slalom. I'm a practice area lead for the software engineering practice in the New York City market, and today we'll be talking about communication design patterns for microservices. Before we get to the fun part of this, we'll quickly run through an introduction to microservices and running them in multi-cloud and hybrid cloud environments, and then we'll get into scaling patterns and communication. These are the fine folks who will be presenting for you today; we will introduce ourselves as we go along.
A: So, as far as microservices go: the microservices architecture pattern has been described as representing an application, sometimes a very complex application, as a series of services which are loosely coupled, which are independently deployable, and which might be used by, or built by, smaller teams. There is definitely more to it, but as I said before, we can talk about the details of these basics offline after the presentation. As a company or shop moves from old monolithic applications into microservices, a good, experienced enterprise architect or enterprise architecture group would be on the lookout for certain anti-patterns, which are very easy traps to fall into when running or moving to microservices.
A: Another example of an anti-pattern would be a focus on technology, again, as opposed to business; or getting into a situation where smaller groups go off and build different microservices in the context of the same application or the same offering without architectural or strategic oversight, using different tools, different approaches, and different best practices. That is going to catch up with the developers very, very quickly.
A: One approach to decomposition is purely by business capability, and another is by domain. As an example of domains: if you're running an e-commerce shop and you have a custom-built application, you might separate the application by domains such as shop-and-browse versus checkout versus account management versus user management.
A: That's an example of decomposition. Again, it's good to have an approach which is adhered to and agreed upon by the team. And then, last but not least, test automation: as a company or shop gets to the point of deploying services independently while carrying a pretty complex backlog, creating a test automation framework is of absolute importance.
A: This way you ensure that the application has uptime and good numbers in terms of performance and scalability, and ultimately you keep developers honest. As an example of moving from a monolithic architecture to microservices, we'll quickly run through a somewhat typical path, which we see a lot with our clients.
A: What we usually go for at the very beginning is taking care of the right side of the diagram: the users who use a specific application, the roles, and the needs the software is trying to fulfill. We just have a few examples here. The biggest point here is that the API gateway will take care of access concerns, permission concerns and, very importantly, API contracts.
A: So the idea is, you create the contracts that your users will ultimately expect, and this allows you to go and start having fun on the left side with the applications: you may dockerize applications, start creating separate services, and start creating pipelines. In this context, everything is still on-prem.
A: Then you might be adding cloud offerings, maybe just pure containers, maybe using some serverless offerings. Ultimately you may get to the point where you have multiple cloud providers; it may include on-prem, or maybe not.
A: That's really up to you, obviously, but the most important part here is that the gateway will hold those contracts: no matter how many changes you make on the left side, the consumers will not be affected, and hopefully the only changes they see are fixed problems, better performance, and quicker time to market for new requests.
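The contract idea described above can be sketched in a few lines of Python. This is only an illustration with made-up service and field names, not any real gateway product: the gateway projects whatever a backend returns onto the agreed contract, so the backend can be swapped without consumers noticing.

```python
# Sketch: the gateway owns the public contract (the response shape);
# backend implementations can be swapped without consumers noticing.

def legacy_monolith_accounts(account_id):
    # old implementation inside the monolith
    return {"id": account_id, "name": "Ada", "status": "active"}

def new_accounts_microservice(account_id):
    # new implementation; returns extra internal fields
    return {"id": account_id, "name": "Ada", "status": "active",
            "internal_shard": 7}

class ApiGateway:
    """Routes requests and enforces the response contract."""
    CONTRACT = {"id", "name", "status"}  # fields consumers rely on

    def __init__(self, backend):
        self.backend = backend  # swappable implementation

    def get_account(self, account_id):
        raw = self.backend(account_id)
        # Project onto the contract so backend changes never leak out.
        return {k: raw[k] for k in self.CONTRACT}

gateway = ApiGateway(legacy_monolith_accounts)
before = gateway.get_account(1)
gateway.backend = new_accounts_microservice  # migrate the backend
after = gateway.get_account(1)
assert before == after  # consumers see no difference
```

The same projection idea is why the consumers on the right side stay unaffected while the left side is refactored.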
B: Thanks, Lev, for giving a big overview of microservices. Hi everyone, this is Mona, a solution architect for the New York City market with Slalom. I'm going to dive a little bit deeper into scaling microservices.
B: Lev, if you can go to the next slide. Before getting into the scaling of microservices itself, let's talk a little bit about the scale cube. The scalability cube is composed of the x-axis, the y-axis, and the z-axis. What are those axes? The x-axis is basically the traditional way of scaling: running multiple copies of the services or applications behind a load balancer, across different servers.
B: What is the y-axis? Y-axis scaling is basically breaking the application into multiple components, modules, or services; the general approach of microservices is essentially y-axis scaling. And what is the z-axis? The z-axis combines the approach of the x-axis and the y-axis, but here each copy serves a subset of the data rather than the application as a whole.
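The z-axis description above can be sketched as a tiny routing function. This is a hypothetical example, assuming customer IDs are integers and shards are chosen by a simple modulo rule:

```python
# Sketch of z-axis scaling: every shard runs the same service code,
# but a router sends each request to the copy that owns that slice
# of the data (here, chosen by customer_id modulo the shard count).

NUM_SHARDS = 3

def shard_for(customer_id: int) -> int:
    return customer_id % NUM_SHARDS

# four incoming requests land on the shards owning their data
placement = {cid: shard_for(cid) for cid in (101, 102, 103, 104)}
```

Real systems use consistent hashing or range-based partitioning rather than plain modulo, but the routing idea is the same.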
B: So that's z-axis scaling. Now let's talk a little bit more about monolithic scaling. What is monolithic scaling? In terms of the scalability cube, the x-axis is basically monolithic scaling, where the IT infrastructure works like this: whenever demand spikes for the application, the whole application has to be replicated onto different servers, or different virtual servers.
B: So that's monolithic scaling. Now let's talk about scaling in microservices. Microservices have their own benefits, but they also add a level of complexity to scalability, because microservices are not uniform: when we write microservices for different components, they are not all written in the same programming language. They may be written in different programming languages, loaded onto different hardware, running on different hypervisors, and deployed across different clouds. That's more complex than the simple monolithic case, because we have to scale both up and down and in and out.
B
But
again,
if
we,
if
we
use
the
microservices,
then
if
we
and
we,
if
we,
we
use
the
scalability
cube
of
this
ad
in
the
extent
we
can
scale
it
probably
very
nicely,
because
it
it
provides
the
subset
of
the
data.
So,
for
example,
if
if
we
have
a
different
customer
types,
say,
for
example,
free
versus
a
paid
premium
customer
then
then
for
the
free
instead
of
a
free
customer.
If
the
premium
customer
will
get
the
more
bandwidth
of
the
services.
B: When we talk about scaling microservices, there are three key points; using these three key points, you can scale better. The first key point is environment assessment: you cannot just start scaling the microservices blindly.
B: You need to consider the conditions that are external to the users but that affect the organization, in terms of the scaling limitations of the different clouds, and also do a cost assessment. You have to keep that in mind when you scale up or down. The second key point is making strategic choices about supporting vertical scaling. What is vertical scaling? You are adding more power to your existing machines; when you scale vertically, it's often called scaling up or down.
B: When you refer to vertical scaling, you are adding more CPU, memory, and other resources to the existing server, or replacing one server with a more powerful server. The con of vertical scaling is that when you add more power, there is probably downtime, because you cannot simply give an existing instance more power; you might need to replace the instance itself.
B: So there is downtime involved. The third key point is horizontal scaling. Horizontal scaling is where you are replicating the machines, or the microservices onto machines; that is basically scaling the microservices out or in.
B: So what does that mean? You are provisioning additional servers to meet your needs, often splitting workloads between the servers to limit the number of requests each individual server is getting. That's how you scale horizontally.
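That workload splitting can be sketched with a simple round-robin load balancer over three hypothetical replicas; real load balancers add health checks and weighting, but the distribution idea is the same:

```python
# Sketch: horizontal scaling replicates the service, and a load
# balancer splits requests across the replicas round-robin, so no
# single server absorbs the full load.
from itertools import cycle

replicas = ["server-1", "server-2", "server-3"]
balancer = cycle(replicas)

# six incoming requests get spread evenly, two per replica
assignments = [next(balancer) for _ in range(6)]
```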
In practice, horizontal scaling is usually the best practice because, as I mentioned while talking about vertical scaling, vertical scaling usually involves downtime, so you need to make a strategic choice.
B: When you are running jobs which need more power but can tolerate downtime, you can go with vertical scaling. But when you are running something like B2C services, which need constant availability, and you cannot afford downtime, you have to go with horizontal scaling. Can you go to the next slide, please? So, on to the ways of scaling microservices.
B: Generally, all the clouds provide three ways to scale: manual, scheduled, and automatic. As the name implies, with manual scaling an engineer just needs to click a button, but they have to do it manually, so they have to keep an eye on every instance: what is getting spikes, and where they need to increase the power. That's the manual way. It's good when you have a job that you know about and you have to give your jobs a boost; but in that case, if you forget to scale your instance back down, then you may waste money on cost.
The second way is scheduled. Scheduled scaling is basically for when you know that, say, some big data files that need more power are coming in, and they come in on particular days; then you set up the scaling in and out, or up and down, based on that schedule.
B: That's the scheduled way. Automatic scaling is basically when your compute, database, or storage resources scale automatically based on predefined rules: for example, when metrics like memory or network utilization rates go above or below a threshold, the scaling of your microservices that you have set up kicks in. That's the automatic way to do it. Now I'll hand it over for the design patterns.
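The automatic mode described above boils down to a predefined rule on a metric. Here is a sketch with illustrative thresholds; the real rules would come from your cloud provider's autoscaling configuration, not from code like this:

```python
# Sketch of an "automatic" scaling decision: a predefined rule on a
# metric (here, memory utilization percent) chooses scale-out,
# scale-in, or hold. Thresholds are illustrative only.

def autoscale_decision(memory_pct: float, replicas: int,
                       high: float = 80.0, low: float = 20.0) -> int:
    """Return the new replica count based on a utilization rule."""
    if memory_pct > high:
        return replicas + 1        # above band: scale out
    if memory_pct < low and replicas > 1:
        return replicas - 1        # below band: scale in
    return replicas                # within band: hold

assert autoscale_decision(90.0, 2) == 3
assert autoscale_decision(10.0, 2) == 1
assert autoscale_decision(50.0, 2) == 2
```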
C: All right. So now that you've figured out how to get your microservices, you can run them on-prem or you can have them running in the cloud, but you also need to figure out how you get there. Lev and Mona have discussed moving onto the cloud, and also the scalability of all these designs for moving into the cloud with microservices.
C: But how do you get to the cloud, and what are the design patterns that can actually assist you in getting onto the cloud? We'll go through a couple of them and figure out that process. Lev, do you want to move on to the next slide? Every organization that exists suffers from Conway's law, and this is something you can go and search for all over the internet: every company has their systems set up in the way the organization's communication structure is set up. There are ways to deal with that.
C: So, whether you have an existing application that you plan to move from a monolith to a microservices pattern, or you're starting fresh and want to come onto the microservices ecosystem for a new system you're building, the very first thing you need to identify is your business service decomposition: what the process is going to be, and how your services are going to be structured. So you will want to go ahead and analyze your system.
C: What are the domains that exist in your system? As Lev and Mona mentioned, your billing services or your shipping services, or anything else that exists as part of it. What are the business capabilities you want to entail within each service? What are the boundaries of your applications? When you go ahead and chart these pieces out, you will be able to take a domain-driven approach to what your services are, and you will be able to define the different services that exist in your ecosystem.
C: Once you have a mind map of all these services, all the capabilities, and the domains that exist in your ecosystem, you will be able to draw the lines for where your microservices should reside. Lev, can we move on to the next slide? If you are coming from an existing application, it's not easy to just go ahead and blow away what exists and start fresh. So there are different patterns in the ecosystem to help understand how we would get there.
C: The two main common ones are the strangler pattern and the anti-corruption layer. For the strangler, think of it as a big old oak tree, or, the way Martin Fowler puts it, a fig tree: you have these big trees, and small vines growing around one will slowly strangle the big tree and then take it over completely. So you would have small microservices which come along the way, and you would strangle your monolith piece by piece into smaller pieces.
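A small sketch of a strangler facade, with hypothetical route names: migrated paths go to the new microservice, everything else still falls through to the monolith, and the routing table grows until the monolith is fully strangled.

```python
# Sketch of a strangler facade: a routing layer in front of the
# monolith sends already-migrated routes to new microservices and
# lets everything else fall through to the monolith.

def monolith(path):
    return f"monolith handled {path}"

def checkout_service(path):
    return f"checkout-service handled {path}"

MIGRATED = {"/checkout": checkout_service}  # grows route by route

def strangler_facade(path):
    handler = MIGRATED.get(path, monolith)
    return handler(path)

assert strangler_facade("/checkout") == "checkout-service handled /checkout"
assert strangler_facade("/browse") == "monolith handled /browse"
```

In practice this facade is usually an API gateway or reverse proxy rather than application code, but the routing rule is the same.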
C: At the same time, you can't just push changes into your monolith without disruption coming along the way, so you start creating anti-corruption layers between the two systems, which allow you to go ahead and build them in parallel without compromising any existing designs or technological approaches. Lev, can you move on to the next one?
C: Thank you. Once you come into a microservices ecosystem, you will have different communication styles for how these different services talk to each other. Some of it could be API gateway communication with REST and gRPC. You could have a backend-for-frontend, where you have a unified front end with multiple services in the back coming together and communicating. Some could be backend services talking to each other using messaging, with AMQs, Kafka, RabbitMQ, and all the other bus services that exist out in the ecosystem.
C
Some
of
them
could
be
just
domain
specific
where
you
have
emails
coming
in,
or
you
have
video
streaming
options
coming
through,
where
you
would
go
ahead
and
share
the
information
or
communicate
between
your
different
services,
their
old-school
system
of
file
drops
or
ftps
or
s3
buckets,
or
anything
else,
also
one
of
the
ways
that
your
services
can
communicate
to
each
other.
So
it
could
be
a
service
to
service
based
communication
or
a
service
to
use
a
baseline
communication.
Can
we
move
on
to
the
next
slide
left?
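The messaging style can be sketched with an in-memory broker standing in for Kafka or RabbitMQ. The point of the sketch is only that the publishing service never calls its consumers directly:

```python
# Sketch of message-based, service-to-service communication: the
# producer publishes to a topic on a broker (an in-memory stand-in
# for Kafka / RabbitMQ / AMQ) and never calls the consumer directly.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

broker = Broker()
shipped = []
# shipping service subscribes to order events
broker.subscribe("orders", lambda m: shipped.append(m["order_id"]))

# order service publishes; shipping reacts, fully decoupled
broker.publish("orders", {"order_id": 42})
assert shipped == [42]
```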
C: Thank you. Once you've identified your services, you've got some patterns for how you want to do your communication, and your service boundaries are done, you also want to identify how all the data that comes into your system needs to be managed. There are different patterns that can work to your benefit, depending on how your ecosystem has been built.
C: There is no single solution which works for everything. You could pick an application where you have a database per service: the data for one particular application could be isolated in its own database, which could be for security reasons, for the volume of data, or just from a scalability standpoint, and in some cases for cost constraints, licensing purposes, or legal constraints. You might have a shared database, where you have data coming in from different sources, fanning in and fanning out.
C
You
have
saga
patents,
where
you
have
services
triggering
downstream
services
and
having
the
whole
data
set
to
come
out
as
one
big
pattern.
At
the
end
of
it,
you
have
an
api
composer
which
is
very
common
to
use
on
the
ui
applications.
When
you
have
a
mobile
app
and
you
have
services
streaming
data
from
different
apis
and
different
sources,
then
you
compose
all
of
them
together
to
come
back
to
one
service
on
one
place
and
then,
in
the
end,
the
event
sourcing
point
of
it
for
data
management.
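The API composer can be sketched as one function that fans out to several services and merges the partial answers. The three service functions here are hypothetical stand-ins for real API calls:

```python
# Sketch of the API composition pattern: one composer queries several
# services and merges their partial responses into a single payload
# for the UI (e.g. a mobile app's profile screen).

def account_service(user_id):
    return {"name": "Ada"}

def order_service(user_id):
    return {"orders": [101, 102]}

def billing_service(user_id):
    return {"balance": 12.5}

def compose_profile(user_id):
    result = {}
    for svc in (account_service, order_service, billing_service):
        result.update(svc(user_id))  # fan-in the partial responses
    return result

profile = compose_profile(1)
assert profile == {"name": "Ada", "orders": [101, 102], "balance": 12.5}
```

A real composer would issue the three calls concurrently and handle partial failures; the merge step is the essence of the pattern.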
C: If you have an existing on-prem system, you could think of the old-school style of having a machine per host, which is not a very viable option as you add more and more services and scalability becomes an issue. If you are still on-prem and have VMware, then you can have a service per VM, or come onto the new-school style of using Docker and Kubernetes, allowing auto-scalability and containerization for your whole ecosystem.
C
Aws,
azure
and
gcp
offer
good
options
for
serverless,
so
you
could
go
ahead
and
explore
them,
but
you
don't
have
to
worry
about
the
whole
ecosystem
of
setting
up
your
backup,
your
your
containers
or
your
vm,
or
any
of
the
dependencies
that
come
along
with
it.
Just
write
your
fragment
of
code
for
run
the
serverless
and
you're
hitting
production
within
a
few
minutes
and
getting
your
service
up
and
running.
D: Thank you so much. Good afternoon, everybody. I am a solution principal at Slalom, and I'll be talking about microservices communication. Now that you have understood what microservices are, how you get into the cloud, how you really scale them, and the various design patterns that were just presented: we are in the cloud, and we are running. Lev, can you move to the next slide, please?
So now we are running in the cloud, and our microservices are growing, and they're growing at a crazy rate, because you're coding microservices for every small, business-domain-specific requirement that you have. Now you get something like the picture on the right side, and you'll wonder how these microservices are going to talk to each other, or to the clients.
D: Usually, what people really end up doing, or what we have seen our clients end up doing, is that they put an ELB in front of the microservices, because that's the easiest solution; now you have a load balancer managing that load for you, and you're just communicating. However, there is a lot happening in the background when these microservices are talking, and not just from external clients to your backend services.
D
Also
what
what,
when
the
data
is
being
transferred
between
these
microservices,
so
the
microservices
traffic
is
usually
in
and
out,
which
is
which
is
also
known
as
not
south
and
as
it
is
in
between
as
well
there,
because
the
microservices
are
communicating
with
each
other,
which
is
called
east-west
left.
Can
you
move
to
the
next
slide?
Please
so
for
not
south
traffic,
though
the
ideal.
The
ideal
communication
style,
which
one
also
talked
about
in
one
of
his
design
patterns,
is
api
gateways.
We.
D
What
we
have
been
doing
from
since
the
microservices
concept
has
been
introduced
is
that
we
put
api
gateways
such
as
apigee
or
com
in
the
mix,
and
we
try
to
handle
the
traffic
which
is
coming
from
from
our
external
clients
from
the
client
applications
and
api
gateway
becomes
that
single
entry
point
for
every
micro
service
calls
and
it
is
routing
the
traffic
to
the
specific
micro
micro
services.
D
So
it's
not
just
doing
the
the
api
route,
the
traffic
routing,
but
it
also
takes
care
of
the
security,
the
rate
limitation
and
and
and
and
some
of
the
some
of
the
other
good
stuff
which
which
the
api
gateway
will
offer.
So
can
you
move
to
the
next
slide,
though
yeah
so
now
they're
there,
then,
when
we
handle
this
this
knots,
our
traffic
really
well,
we
have.
D
We
have
now
microservices
communicating
with
you
that
how
do
we
handle
that
and
the
most
famous
most
famous
design
pattern
for
that
is
a
service
mesh,
because
service
mesh
allows
you
to
communicate
between
these
services
by
decoupling
and
offloading
most
of
the
application
network
requirement.
What
how
it
does
it?
It
follows
a
side
card
pattern
for
that.
D
What
is
a
sidecar
pattern,
a
sidecar
patent
is,
is
really
for
inter-service
communication
which
takes
care
of
monitoring
your
security
related
concerns
and
it
abstracts
all
of
those
needs
by
keeping
the
business
logic
separate
from
your
network
related
communication
information.
So
that
way
you
are
able
to
monitor
your
micro
services
better.
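The separation the sidecar gives you can be illustrated in-process: the business function below contains no retry or telemetry code, while a proxy object wrapped around it absorbs a transient failure and counts attempts. A real mesh (Envoy sidecars, for instance) does this with an actual network proxy next to each service instance; this is only a sketch of the idea.

```python
# Sketch of the sidecar idea: network concerns (retries, call
# counting) live in a proxy wrapped around the service, so the
# business logic stays free of them.

class Sidecar:
    def __init__(self, call, retries=2):
        self.call, self.retries = call, retries
        self.attempts = 0  # telemetry a mesh would export

    def __call__(self, *args):
        for _ in range(self.retries + 1):
            self.attempts += 1
            try:
                return self.call(*args)
            except ConnectionError:
                continue  # the sidecar, not the service, retries
        raise ConnectionError("service unavailable")

state = {"n": 0}
def flaky_service(x):          # pure business logic, no retry code
    state["n"] += 1
    if state["n"] == 1:
        raise ConnectionError  # one transient failure
    return x * 2

proxy = Sidecar(flaky_service)
assert proxy(21) == 42
assert proxy.attempts == 2     # the sidecar absorbed the retry
```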
So now, at this point, you might ask the question: why is the service mesh only doing east-west, when it can do a lot of the things which API gateways are doing?
D
So
because
what
happens
is
that
api
gateways
is
really
handling
all
the
api
related
calls
and
and
the
business
functionality
which
you
can
code
in
your
api
service
and-
and
it
has
really
abstracted
away
from
all
the
business
logic
behind,
but
you
can
still
have
the
business
logic
for
specific
for
your
micro
services,
which
which
can
be
there,
but
service
mesh
really
would
handle
all
all
your
network
related
all
your
traffic
limiting
related
infra
requirements
that
it
will
do,
and
these
can
work
together.
D
So
if
you
can
see
that
api
gateway
plus
service
mesh
can
can
provide
you
a
greater
micro
services
ecosystem,
can
you
move
to
the
next
slider
so
yeah?
So
this
is.
This
is
what
we
had,
and
this
is
some
of
the
appendix
that
we
have
put,
and
there
is
a
sample
app
that
that
you
guys
can
go
and
check
out
and
also
some
of
the
reference
links
which
we
which
we
have
put
together.
We
can
we
can
move
to
the
next
slide.
D
This
is
this:
is
it
from
our
site?
Thank
you
so
much
for
attending
this
session
and
we
are
open
for
the
q.
A
now.