Description
Dr. Clement Escoffier - Principal Software Engineer at Red Hat; Ken Finnigan - Senior Principal Engineer at Red Hat
---
JakartaOne Livestream Cloud Native for Java (CN4J) is a one-day virtual conference for developers, engineers and technical business leaders, focused on building enterprise Java on Kubernetes.
This virtual event is a mix of expert talks, demos, and thought-provoking sessions focused on enterprise applications implemented using open source vendor-neutral Jakarta EE and Eclipse MicroProfile specifications on Kubernetes.
Hi everyone, welcome to another JakartaOne Livestream Cloud Native for Java session. I hope you have been enjoying this event as much as we have. For this session we have Dr. Clement Escoffier, and the topic of his presentation is bringing reactive to enterprise developers. So please take it away, Clement.
If you look in the Oxford dictionary, at the URL at the bottom, you will see that reactive is defined as follows: being reactive means showing a response to a stimulus. Okay, we can do something with that, but there is a sub-definition which is much more interesting for us: it means acting in response to a situation rather than creating or controlling it. That means that you are not in control of the flow anymore.
So when you build reactive applications, you are going to receive and send stimuli that can be events, signals, failures, whatever. So what's new? Well, you can already do this with your current Jakarta EE or Java EE technologies, right? You can use EJBs, you can use message-driven beans, you can use the app server to do exactly this. So what's really new? Why do we make so much fuss around reactive?
So the thing is, reactive comes with a different concurrency model, and at the bottom of it is the idea of using non-blocking I/O. Why? Because we need to use the resources that are given to the application more efficiently; we are going to see why in a few minutes. In the Java world, there is one non-blocking I/O toolkit that really dominates: it's Netty. It's awesome, really low level, but yeah, most of the reactive frameworks that exist in the Java world are based on Netty.
On top of that, you have reactive frameworks that will, let's say, make Netty a little bit more bearable for mere humans; again, Netty is really low level, so they abstract it a little bit, and then you have your application code. The thing is, because it uses non-blocking I/O, it reduces the number of threads to almost nothing, so one, two, four threads, and your application code may be executed on these I/O threads.
Reactive is not only about single applications; it's also about distributed systems, applying the definition we saw on the first slide to complete systems, meaning that we are going to exchange events, we are going to exchange stimuli, and our system is going to react to that. At this point you may say: yeah, but that's microservices, and we are doing monoliths. It's actually not about microservices versus monoliths, or applications versus services.
As soon as your application is interacting with something outside of its process boundary, it's a distributed system, and you are in this situation, so call it as you want. There is no consensus about how it should be called, except that, well, it's a distributed system, and there is no better definition of a distributed system than the one from Leslie Lamport, and it can be summarized very simply: it's going to fail. Your system can be a work of genius; it's still going to fail, and failure will be part of the game.
So you need to embrace the fact that failures will be there; they are first-class citizens, and actually, on large systems, failures are more common than you might expect, which is kind of weird. Interestingly enough, everybody wants to build distributed systems, while it's really very challenging and very, very hard. Reactive tries to make them better, more reliable: building reactive systems means building better distributed systems that focus on responsiveness. And why?
Well, it's a little bit Google's fault, in the sense that a few years back there was no problem with doing slow things, or rebooting things in the middle of the night. But now everybody has gotten used to the Google experience: it always works, it's blazingly fast, there is no problem with availability, it's never failing. We need to approach the same kind of user experience, and that user experience means being responsive. To achieve this, in reactive systems, you are going to use messages and apply elasticity and resilience patterns.
You believe that you have implemented loose coupling, but actually, no, you are using strong coupling. It's not about the format you exchange; that's just the first part. It's about uptime, meaning that if you use synchronous interactions with request/reply, you need to have both ends of your interaction alive at the same time, and it's not only the two services, it's also all the infrastructure in between, and so on.
So there is really uptime coupling, which means that if one of your services fails, because the interaction is synchronous, the caller is going to fail, the caller's caller is going to fail, and so on, all the way up to the users; well, unhappy users, because, again, remember the Google experience: it must always work. So you can say: well, a few years ago Netflix re-popularized circuit breakers; and re-popularized is the right word, because they didn't invent them. They have existed since the 60s, and they're very efficient in some cases.
So what's a circuit breaker? A circuit breaker is just an object that monitors the interaction between two services, let's say A calling B. It monitors when A calls B, and according to the number of failures per unit of time, it decides whether it's worth calling B or better to just inject a fallback, or things like that. It comes with retry and auto-recovery mechanisms, so it's a very, very nice thing; the tricky part is that tuning circuit breakers is a nightmare.
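To make the idea concrete, here is a minimal, illustrative sketch in plain Java; it is not taken from any particular library, and a production breaker would add time windows and a half-open recovery state. The idea is simply: count consecutive failures, open the circuit once a threshold is reached, and answer from a fallback instead of calling B.

```java
import java.util.function.Supplier;

public class CircuitBreaker {
    private final int failureThreshold;
    private int consecutiveFailures;
    private boolean open;

    public CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    // Invoke the remote call, or short-circuit to the fallback once too many
    // consecutive failures have been observed.
    public <T> T call(Supplier<T> remote, Supplier<T> fallback) {
        if (open) {
            return fallback.get();          // circuit open: don't even try B
        }
        try {
            T result = remote.get();
            consecutiveFailures = 0;        // a success resets the counter
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                open = true;                // trip the breaker
            }
            return fallback.get();
        }
    }

    public boolean isOpen() { return open; }

    public static void main(String[] args) {
        CircuitBreaker cb = new CircuitBreaker(3);
        for (int i = 0; i < 5; i++) {
            String answer = cb.call(
                () -> { throw new RuntimeException("B is down"); },
                () -> "fallback");
            System.out.println(answer + " (open=" + cb.isOpen() + ")");
        }
    }
}
```

A real breaker, such as the one in MicroProfile Fault Tolerance, also re-closes the circuit after a cool-down (the half-open state) and counts failures per time window rather than consecutively; those windows and thresholds are exactly the knobs that are so painful to tune.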
And the thing is, circuit breakers are not really made for the world of the cloud in which we are living, in the sense that A is calling B, but how many instances of B do I have, and where is B? If I have a single instance of B, okay, I can define my timeout, I can define my latency and the number of failures; okay, that I can define, that I can customize.
But let's imagine that suddenly I don't have one instance of B but five; that changes a lot of things, because if I have five instances of B, then it may react much more quickly, and maybe the instances of B are not in the same data center. Maybe one of the instances is in my data center, but the other instances are in other data centers.
In that case, how can I qualify or quantify the time, the latency and so on? And this topology is going to change dynamically; that's the beauty of the cloud, but unfortunately, in that case, a circuit breaker is not great. So what's the solution? Well, I spoiled it a little bit a few slides ago: it's called messaging. Messaging has existed since the 80s, probably the 70s, and the idea is that instead of calling B, I'm going to send messages to a virtual address, and then I am really, really loosely coupled. When I send messages, I use non-blocking message passing, no problem. The elasticity comes from the fact that the number of consumers behind a virtual address can evolve over time: I can have one, I can add a second and a third, and so on. I can even go back to zero, because most message middleware today implements some kind of durability, which means that even if I don't have any consumers at that moment, no problem: the messages are not lost, and as soon as a consumer arrives, the processing will start.
This is quite powerful. Resilience also comes from the fact that you can have multiple consumers, because if one fails, another one can take over and continue the processing. Also, message middleware comes with retry and acknowledgement policies that let you easily implement any kind of fault tolerance.
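These two properties can be mimicked with a plain in-memory queue. This is only a toy stand-in for a broker (no durability across restarts, and the class, order names and helper method are invented for illustration), but it shows how pending work simply waits when no consumer is attached, and how several competing consumers share one virtual address.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class VirtualAddressDemo {

    // Drain `count` pending orders using `consumers` competing workers.
    static List<String> process(BlockingQueue<String> address, int count, int consumers) {
        List<String> served = new CopyOnWriteArrayList<>();
        ExecutorService pool = Executors.newFixedThreadPool(consumers);
        for (int i = 0; i < consumers; i++) {
            pool.execute(() -> {
                try {
                    // Each worker competes for messages; if one dies, the
                    // others simply keep taking from the same address.
                    while (served.size() < count) {
                        String order = address.poll(100, TimeUnit.MILLISECONDS);
                        if (order != null) served.add("served:" + order);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return served;
    }

    public static void main(String[] args) {
        BlockingQueue<String> orders = new LinkedBlockingQueue<>();
        // No consumer is attached yet: the orders just wait, they are not lost.
        orders.offer("espresso");
        orders.offer("mocha");
        // Later, two consumers come up and share the pending load.
        System.out.println(process(orders, 2, 2));
    }
}
```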
The thing is, when we talk about messaging to regular enterprise developers, they say: oh gosh, JMS again. And there is one thing in common between JMS and CORBA: we still have scars all over our body and brain, and we really don't want to reuse those APIs; they didn't age well. We will see later how, thanks to MicroProfile Reactive Messaging, we make the development of this kind of system really, really easy.
You can google these terms; it's a four-page document that is very well written and explains how to build this kind of system, and the core idea is to use asynchronous message passing. Thanks to this, as we explained on the previous slides, it gives you elasticity and resilience, and so you become responsive, because you can keep going under the load, under the requests, under the messages, even under fluctuating load and even when facing failures.
One key aspect of this is that asynchronous message passing doesn't mean multithreading; it actually means the opposite, and reactive doesn't mean multi-threaded. If your reactive client or framework is actually just a wrapper layer on top of thread pools, you are doing it wrong. Reactive is about never, ever blocking. Why? Because remember what we had at the beginning with this I/O thread; that's the key. So there are multiple benefits to being reactive and never blocking.
This is virtualized for you, but yeah, somewhere there are physical CPUs and memory. Then you have a container host on which you will deploy containers. If some of your containers are using the regular approach, with one thread per request or something like that, then your level of concurrency is limited by the number of threads in your thread pool. The issue is that the number of requests per second keeps increasing, because of mobile, IoT and things like that.
So if you start a server with 100 threads, you will use 100 megabytes of memory just for the thread stacks, which means that the other containers cannot use this memory, because it's used by the first container. A hidden cost of threads is CPU cycles, because the more threads you have, the more time you spend switching between threads. Even if context switching has been heavily optimized over the past years, its cost is still non-zero, and the more threads you have, the more CPU cycles you waste. You can say: yeah, but no problem, we can use Quarkus.
You don't have to go fully reactive. You can mix reactive and imperative: use reactive for the hot paths, the endpoints that are called a lot, and the traditional, imperative approach for everything else, where you need linear code, transactions and things like that. We will see how it can all work together.
The tricky thing now is the first abstraction, CompletionStage. You may know it under another name, CompletableFuture, which is the main implementation, CompletionStage being the interface.
You have thenApply, thenAccept and thenCompose, and some of the methods are suffixed with async. But on CompletionStage, everything is about asynchrony, so why are only those methods async? What about the other ones? And what's the difference between an async method with an executor parameter and one without?
So that means that you need to trigger the computation explicitly: returning a CompletionStage means that the computation has already started; with Uni, that's not the case, and it's a much more event-driven way: onItem, onFailure and things like that. So it's much easier to write, and the API is actually more compelling.
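The eagerness of CompletableFuture, as opposed to the laziness of Mutiny's Uni, can be shown with a small plain-JDK sketch; the class and method names are invented for illustration, and the direct executor just makes the effect deterministic:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

public class EagerDemo {

    // Returns true if the supplier ran even though nobody subscribed to the result.
    static boolean runsBeforeAnySubscriber() {
        AtomicBoolean started = new AtomicBoolean(false);
        // Direct executor: the supplier runs on the calling thread at creation time.
        CompletableFuture.supplyAsync(() -> { started.set(true); return "coffee"; },
                                      Runnable::run);
        // No join(), no thenApply(): nobody asked for the result, yet it already ran.
        return started.get();
    }

    public static void main(String[] args) {
        System.out.println(runsBeforeAnySubscriber()); // prints true
    }
}
```

A lazy type like Uni would instead do nothing until someone subscribes, which is what makes retries and composition easier to reason about.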
How does that work? Well, if you use Quarkus, it's going to look like this: you can have a REST endpoint return a Uni, and this means that this endpoint is going to be asynchronous, which means that we are going to pause the request, wait for the Uni to emit its item, and once it's ready, we will resume the request and write the response.
You can use that with the MicroProfile REST client too, by returning a Uni. That means that it's still an HTTP call, but it's going to be handled completely asynchronously, using non-blocking I/O, and when you get the response back, we can continue the computation. So it's already one step towards reactive; it's not totally perfect, we will see why, but it's already something. Unis are really made for one item or one failure, but sometimes you need more than one: you need streams for this.
In Mutiny we have a second type, and we only have two types: Uni and Multi. A Uni emits one item; a Multi, zero to n items, and it can be unbounded. Both are lazy, using a subscription mechanism, so if you use Reactive Streams, RxJava or Reactor, it's the same idea. On Multi we have a request protocol too; we are going to talk about that when we mention back-pressure. And it has almost the same set of operators, so it comes with a very rich set of operators that focus on expressiveness and not on mathematical concepts.
So this has an impact, and an issue, because if you have a fast producer and a slow consumer, well, that means that very quickly the slow consumer is going to be overwhelmed, and it may die with an out-of-memory error or become very slow. So it doesn't work, and it may well break your complete system. You don't want this.
So in Mutiny we implement the Reactive Streams protocol, which is actually very simple: the slow consumer sends a request to the producer and says, okay, I'm ready, I can handle two items; and the producer can send two items, maybe asynchronously, so the first one, and when the second one is available, the second one. So we never overwhelm the consumer, and thanks to this we can actually self-pace our system very, very nicely.
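This request protocol is also what the JDK's own java.util.concurrent.Flow API implements, so the "I can handle two items" dance can be sketched with the standard library alone; the class and method names here are invented for illustration:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackPressureDemo {

    // A consumer that never has more than two items in flight: it requests
    // two, processes them, then requests the next two.
    static List<Integer> consumeInBatchesOfTwo(List<Integer> items) {
        List<Integer> received = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;
                private int pending;

                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    pending = 2;
                    s.request(2);                // "I'm ready, I can handle two items"
                }
                @Override public void onNext(Integer item) {
                    received.add(item);
                    if (--pending == 0) {        // batch finished: ask for two more
                        pending = 2;
                        subscription.request(2);
                    }
                }
                @Override public void onError(Throwable t) { done.countDown(); }
                @Override public void onComplete()          { done.countDown(); }
            });
            items.forEach(publisher::submit);    // submit() blocks if the buffer is full
        }                                        // closing the publisher signals onComplete
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return received;
    }

    public static void main(String[] args) {
        System.out.println(consumeInBatchesOfTwo(List.of(1, 2, 3, 4, 5)));
        // prints [1, 2, 3, 4, 5]
    }
}
```

Mutiny, Reactor and RxJava hide this subscription plumbing behind operators, but the request(n) signal underneath is the same.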
Another benefit of this is monitoring: it's very easy to detect bottlenecks, because we just monitor how many requests have been made and say, hey, look, here we don't see a lot of requests, so that means this consumer is slow; or this producer has received a lot of requests but cannot fulfill them, so what's wrong with the source of events? By doing this, you can really tune your systems correctly.
The tricky thing is: how do I get those Multis? I don't know if you have heard recently, but there is something very, very common, or very popular, these days: it's Kafka.
We hear a lot about Kafka. Many, many people are now using Kafka as a cornerstone of their systems, but a lot of people say: oh, because I use Kafka, I'm reactive now. That's not necessarily true; but used correctly, yes, Kafka will give you those reactive superpowers, so let's see how it works. Actually, the thing is, it's not about Kafka; it's about any message middleware, like AMQP, RabbitMQ, MQTT, or even JMS.
As I said, when we mention JMS to enterprise developers, they say: nah, we don't really want to use this again. And yeah, if you compare its API with more recent ones, it's not really appealing; it's from a different age, a different period of time.
So how can we do that? We need a development model which is much closer to CDI, or maybe even a CDI extension, and this is exactly what we are doing with MicroProfile Reactive Messaging. So you can imagine receiving an HTTP request that goes through Quarkus to some application code, and this code says: hey, I want to send a message now.
To do that, what we do is inject an emitter, and this emitter sends the message, which goes through Quarkus to MicroProfile Reactive Messaging, which connects to the messaging middleware and sends the message; and then, on the other side, I receive the message and it goes back up. But we really want something natural for enterprise developers, from the CDI developer's point of view.
To demonstrate this, we are going to use the coffee shop example. I don't know where you are living, but at least in France we are not officially in lockdown anymore, yet bars, pubs and coffee shops are still closed, which is a nightmare, but anyway. So here we are going to see two generations of coffee shops. The first coffee shop is going to be an old-school coffee shop.
The tricky thing is that during that time several customers have queued, and he was unable to serve them because he was serving me: a really blocking approach. There is another type of coffee shop, generally American chains, where you go to the bartender, you order, they ask for your name, they misspell it, they send the order to a barista, and then they say: go over there, and we will call you when your beverage is ready.
So, for the demo I'm going to show you, you can get the code at the URL over there; it's the code that I'm going to write in a few minutes. I'm using Kafka because I want it to be cool, and it's really used these days, but you can use this with any type of messaging middleware, and actually, you will see, there is nothing Kafka-specific in the code.
If we look at this code, it's done like this: this is my coffee shop resource for my Quarkus application. Oh, by the way, I mentioned Quarkus a lot, but here I'm using Spring, right? Quarkus comes with a Spring API compatibility layer, so you can actually mix both: use the Spring APIs, use the Jakarta EE or MicroProfile APIs, or mix them, and we are going to do that in a few minutes. So here I have my barista service, which is a MicroProfile REST client.
So let's switch back to the code and see how we can use Kafka for that. When I switch the type, I will call this route here, and this route is going to send not a single event but two events; we will see why. To send events I need an emitter, so I'm going to inject an emitter that will send an event, or message, on a channel. MicroProfile Reactive Messaging uses the concept of channels, and channels can be mapped to Kafka, or whatever, or even kept in memory.
So here I want to send a message on the orders channel. So it's an Emitter, an Emitter of Order, and I will call this one orders. Okay, then I will create another one on the queue channel, which will get a beverage status, and I will call that one queue. And here, in this method, I will first send a message to the queue channel, and that is going to be a "beverage queued" status.
That's just saying that we have received your order and we will call you when it's ready; and then we will send the order to the barista, saying: well, you need to start preparing this order. That's it. So, if I go back here, I will refresh to have a clean state.
Yes, it's clean. I switch to messaging, I will give my name, which they will misspell, let's say like this, and I'm going to start ordering, quite a lot, and let's see. So the Kafka barista is starting to receive the orders and prepare them. And then you say: hey, what about the ordering? Well, this is Kafka-specific, and you will need to read a lot more about Kafka to really get the ordering right.
It's about partitions and keys and things like that, and here I've simplified it. So how does that code work? We have seen one side, the coffee shop, but what about the barista side? This is going to be my barista side here. It's another service, running in another process, and it shows a different set of annotations, but it's very simple.
So here I'm using a synchronous method, and this synchronous method is actually going to call sleep to simulate that it takes time to prepare a good coffee. Because this will be blocking, and blocking is not allowed on the I/O thread, you need to say: oh yeah, I know it's a blocking call, so call me on a worker thread; and everything here can be configured to limit and control the concurrency and things like that.
Of course, this is one type of signature we support, but MicroProfile Reactive Messaging supports more than 30 different types of signatures: asynchronous processing, non-blocking processing, stream processing; you can actually use the Reactive Streams objects like Publisher, Subscriber and Processor directly. So we provide you with a lot of things. This is actually one of the simplest we have, and that one is the first step to reactive; then people start to remove the blocking parts by doing this in an asynchronous manner.
Okay, so what does that bring us? Because right now we don't really see the benefits. So let's imagine that my barista is taking a break: I'm going to stop it, and I'm going to start ordering hot chocolates.
While with HTTP it was failing immediately, here, well, nothing is lost, which means that as soon as I start another Kafka barista, it's going to start processing these pending orders. And look at how fast it was to start a Kafka barista: this is the magic behind Quarkus compiled into a native executable, which makes it very, very fast to start. And now we have George, who is preparing my hot chocolates.
But what happens if I really, really want a lot of espressos, really a lot? So George is ready, he can start doing that, but we may help him by starting a second barista here, in JVM mode, so you can see the difference in startup time. We should get multiple baristas. Let's order two hundred, a lot. We need more; let's keep ordering.
Still only George... oh, Oliver is back. It sometimes happens that my two baristas pick the same name, because the names are picked randomly, so it's hard to follow; but here we now have George and Oliver sharing the load, and they can get through my tons of espressos much, much faster.
So the question here is: did we implement a reactive system? And if we did, it should not be so hard, right? What did we use? We used an emitter that is injected with @Inject, so you should be familiar with this, and on the other side, two annotations: @Incoming and @Outgoing. Oh, and something I forgot to show you, sorry about that:
that is how it's configured, because basically the channels can be mapped to Kafka, AMQP or whatever, and this is how it's done, using MicroProfile Config. You say that the orders channel is handled by this connector, so here the SmallRye Kafka connector, and this will be my serializer, and so on and so on. It comes with sensible defaults, but everything on the Kafka configuration side can be customized.
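For illustration, the mapping he describes might look like this in an application.properties file; the channel names are the ones from this demo, while the topic names and String serializers are assumptions, and `smallrye-kafka` is the connector identifier used by the SmallRye implementation that Quarkus ships:

```properties
# Outgoing: the "orders" channel is handled by the Kafka connector
mp.messaging.outgoing.orders.connector=smallrye-kafka
mp.messaging.outgoing.orders.topic=orders
mp.messaging.outgoing.orders.value.serializer=org.apache.kafka.common.serialization.StringSerializer

# Outgoing: the "queue" channel carries the beverage status updates
mp.messaging.outgoing.queue.connector=smallrye-kafka
mp.messaging.outgoing.queue.topic=queue
mp.messaging.outgoing.queue.value.serializer=org.apache.kafka.common.serialization.StringSerializer
```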
Yeah, yeah, so let... I closed the wrong window, and that was the window that we should not close in this kind of virtual event.
The question now is: did we implement a reactive system? Well, yes, because what we have seen is that if we don't have any baristas, or if one fails, it's not a problem: our orders are not lost, and I will still get my coffee. That's actually key here, because even if we have no baristas, we keep the orders, and as soon as a barista comes up, it will start processing my coffees, my orders.
The second point is: what about elasticity? Well, we have seen that I can start a second barista, and the load gets shared between George and Oliver, if I'm not mistaken. So it's quite nice. Of course, here I've explicitly started the second barista, for demonstration purposes, but on the cloud you would monitor the number of pending orders and, according to this, scale up and down.
If you use things like KEDA, Kubernetes event-driven autoscaling, then this is all done for you, and you don't have to do anything except say: okay, when the queue depth grows, start a second consumer, something like this. So we did implement a reactive system, and relatively easily. So how does that work? Well, we have seen a few things in this demo. First, we have seen a little bit of the developer joy: I was able to write some code, refresh, and it was there.
We have seen the supersonic, subatomic Java from Quarkus, which can start very quickly with low memory; that makes it cloud native. But to be really cloud native you also need to use resources efficiently, and this is what reactive is all about. Today we focused on the unification of imperative and reactive, but there is a strategy behind this. How is that done?
If you look at the Quarkus code, or look under the hood, you will see that Quarkus actually has a reactive core, and everything you do goes through this reactive core. Whether you write a JAX-RS endpoint, a traditional endpoint, or even a servlet, it goes through this reactive engine and gets dispatched to your servlet or JAX-RS resource.
If it's an imperative thing, it will be called on a worker thread; if it's reactive, it can stay on the I/O thread. So it's really interesting. But why is it so interesting to have both? Because, let's imagine you have a service and you realize that you could increase the concurrency by moving this one endpoint, which is called a lot, to reactive.
You don't need to say: okay, let's go fully reactive and implement everything like this; that doesn't really work. Here you can actually have both, and step by step you can move everything to reactive, or even keep both; there is no problem in keeping both. And on the other side, you have all the messaging things that we have seen today.
We have reactive routes, which are kind of like JAX-RS but dispatched on the I/O thread, and we have a lot of things around reactive data access, being truly non-blocking, to connect to PostgreSQL, MySQL, MariaDB and so on. And I would say truly non-blocking means that there is no pool behind it; it's non-blocking directly at the protocol level: there is no thread pool.
It has been a challenge in Quarkus to achieve this, but we are really, really happy, and we have plenty of things to deliver in the next few weeks and months. We are adopting MicroProfile specifications and some Jakarta EE specifications, and we incubate some experimental features: the @Blocking annotation is actually something that is incubated in SmallRye, and once it works well and we have a really better understanding of how it's used, we are going to contribute it to the spec.
That's all I have for you today. If you're interested in Quarkus, or in anything I've said, you can go to the quarkus.io website, and you will find plenty of documentation. You can even go to code.quarkus.io and start your project right away.
Quarkus is not only innovating on the technology; it's also innovating on how you interact with the team. If you have never used Zulip, it's pretty good, but the first week is a lifetime experience. Yeah, try it, you will see: it's like Slack, but with ten thousand more notifications per second. You will love it.
You can follow us on Twitter, and also, if you like Quarkus, if you say, hey, that's pretty nice, go to the GitHub page and give us a star. It's totally useless, but it makes us, the Quarkus team, quite happy. Thank you, and if there are any questions, I have a few minutes to answer them. Yes?
That's a great question: how do I test it? So, actually, I said that blocking is not allowed inside your application, but in tests it can be allowed, and when you retrieve an asynchronous Uni, for example, you can block to get the result. But the blocking is not hidden behind a join method or a get method; it's clearly saying: okay, I got my Uni, and I will call await().indefinitely().
So, actually, can I go back here? I'm going here... yes. So, something I could do here, and it's not going to compile, but let's make it a Uni here: if I want to have a blocking result, I just do await().indefinitely(), and here, of course, you can have a timeout and things like that. So that's how we test it. We also provide a few test utilities, like a test subscriber, something like that, to verify all the events that have been received, and replay them.
So we provide a lot of things around this. For reactive messaging, how do I test Kafka and things like that? We provide an in-memory connector, which actually lets you inject messages and check what is received: inject messages into a channel and check what has been sent to another channel. And so you can write your tests like that, without having to start Kafka or AMQP or anything else.
It's what we call an in-memory connector: your application is developed against Kafka, and you just use the in-memory connector to switch those channels, and you're done. All this is documented, so don't worry: you go to the quarkus.io website, you look for reactive messaging or Mutiny, and you will find everything I just said about testing. There are more things to come, and if you have more ideas, feel free to ping me; I will be very happy to discuss them with you and maybe implement them.
Okay, so let me answer this one. Yes, we have planned gRPC support, and I've spent the last two weeks implementing it, so believe me, it's hot and fresh, and it's going to be really, really cool. It's going to be released with Quarkus 1.5, which will be released at the beginning of June; the exact date should be June 2nd, but you know how it works with releases, estimates and developers: it can be a little bit after that, but yeah, first week of June.
I have trained a lot of developers in reactive over the last five years, and we came to realize that it's hard. For us, as we are living in it, it's easy; but what we try to do with Mutiny, which is a new reactive programming library, not as well known as Reactor or RxJava, is to actually make it much simpler for regular, traditional developers to use, and it actually works: we have had a couple of developers who started with it whom we don't know, whom we didn't train.
We are coming with a lot more resources to learn this kind of thing. We already have plenty of them on the quarkus.io website. I have a few presentations that are going to be recorded later this week, not for JakartaOne, unfortunately, but for other virtual events, so stay tuned, and we are going to give you everything you need to start learning reactive. But yeah, don't start with highly complicated reactive programming libraries; you want to be able to write the code, look at it, and understand it.
A
And
yeah,
we
have
one
more
question:
do
you
know
if
mutiny
is
being
considered
to
be
part
of
the
reactive
specification
in
microprofile
or
jakarta,
ee.
So, initially, in MicroProfile, we were working on another specification named MicroProfile Reactive Streams Operators, which was using CompletionStage and Reactive Streams directly. It was a great integration layer, but the usability was not good. Before becoming a spec, we need to have feedback, and Mutiny was released in December; we are starting to get feedback, and really good feedback, I'm very happy about that. There is still some work to do, it's not complete, and we want more feedback.
And yes, maybe at some point, let's say in six months to one year, we will start a discussion and say: hey, is MicroProfile interested in writing a specification on top of that? We actually started discussing this after the first few lines of Mutiny were written, saying: while we are working on it, we are deliberately taking another direction, we want to try something, and maybe it will become a spec, maybe it will replace the current one.
So yeah, eventually, but I cannot tell you when, and I cannot tell you under which specification. I hope so.
Yeah, so thank you very much, and sorry to have closed the wrong window; I hope I was able to recover nicely, so don't worry.