Description
Roberto Cortez - Principal Engineer at Red Hat
---
JakartaOne Livestream Cloud Native for Java (CN4J) is a one-day virtual conference for developers, engineers, and technical business leaders, focused on building enterprise Java on Kubernetes.
This virtual event is a mix of expert talks, demos, and thought-provoking sessions focused on enterprise applications implemented using the open source, vendor-neutral Jakarta EE and Eclipse MicroProfile specifications on Kubernetes.
A: Hi everyone, welcome to the JakartaOne Livestream CN4J. I'm your host, Kristy, and I'm here to introduce our guest speaker today, Roberto Cortez, who will be presenting about Kubernetes-native Java with MicroProfile and Quarkus. If you have any questions, please feel free to leave them in the Ask a Question tab below the screen. Over to you, Roberto.
B: Thank you for the introduction. Hi everyone, it's a real pleasure to be here with you. First of all, in these crazy times we're living in, this is the only opportunity we have right now to be in touch with each other. I hope everything is OK and that everyone is staying at home, safe with their families. So, moving forward with our session: I'm going to have a couple of slides here.
B: I hope they're not going to take too much time, because I want to do some heavy demos of what I want to show as well. As for the agenda, the session is all about this: we're going to talk a little bit about MicroProfile and Quarkus, and then about running both Quarkus and MicroProfile applications in Kubernetes. So bear with me — and there is an Ask a Question tab over there that I hope to get to at the end.
B: Just let me make sure everything is set up... OK, looks perfect, wonderful. It's quite hard to actually do this without proper feedback, right? Those are some of the challenges of doing virtual conferences. OK, so my name is Roberto Cortez. I work for Red Hat — I started there a couple of months ago — and I mostly work on MicroProfile and the SmallRye projects, which implement some of the MicroProfile APIs that go into Quarkus and also WildFly and Thorntail. I used to work on MicroProfile before joining Red Hat.
B: So it's nothing new to me, and I've been trying to stay heavily involved with it — with the specs and some of the APIs — to help move the MicroProfile work forward. So what do we have with MicroProfile and Quarkus? Both Quarkus and MicroProfile are open source projects, and the idea is that they offer you a stack to write Java applications.
B: MicroProfile was started by a group of vendors — TomEE, WebSphere Liberty, and others — and came up with a couple of specs. First it started with CDI, JAX-RS, and JSON-B, which already existed in the Java EE / Jakarta world, and then it started to expand on that to support things like configuration, health checks, metrics, OpenTracing, OpenAPI, JWT authentication and propagation, and reactive messaging. So it added a lot in a very short amount of time.
B: If you look at MicroProfile, it's probably around four years old right now, and if you see how much has been accomplished, it's really crazy to see what's out there. Quarkus itself is also an open source project; it uses MicroProfile plus some other APIs to help you implement microservices in a cloud-native way. So — why Quarkus?
B: One of the things when you start thinking about cloud-native applications — and you don't have to think about this only in terms of the cloud itself — is that there is a continuing trend of deploying your applications in containers, like Docker containers. One of the main principles is that instead of distributing just your application, you distribute the entire environment it runs in. So it's going to be easier to deploy, easier to reproduce all the builds, easier to reproduce everything.
B: So this is the trend, and it's especially interesting for cloud deployments, because when you work like this you're able to scale much more easily and replicate your environment, and things are clearer that way. But there is a hidden truth about Java in containers.
B: Java started, as far as I remember, about 25 years ago — it's going to be 25 this year, I believe — at a time when there was no cloud as we know it today; things were a little bit different. So how do you push a language like that into a very modern approach like the cloud? If you look at Java, there are a lot of things in play: the bytecode, the number of classes, and the JIT compiler running in your memory.
B: This is not very interesting for cloud containers, because you want a very low memory footprint. Think about it: remember when you used to have those big physical infrastructures? You would usually end up buying a couple of boxes or machines, and you had to figure out how much memory you required — and usually you ended up buying more memory than you needed, because you wanted the possibility to scale.
B: But if you were not using the entire memory, it wasn't really an issue, because you had already paid for it; it was an investment you made upfront. With cloud deployments, though, you usually pay for what you use. So if I'm using 10 megs, I pay for 10 megs; if I'm using 100 megs, I have to pay for those 100 megs. So if there is a way to reduce that amount, it means I'm going to pay less all the time.
B: So this is somewhat of a limitation, in the sense that if you need to scale your applications, you can fit more nodes using other languages or application frameworks than you can with Java — and we don't want that. We want people to still be able to use Java and to scale just as well as with any other language. So we need to fix this problem, and this is where we believe Quarkus native deployments really shine.
B: Don't mind the exact numbers on the slide — it doesn't really mean that against traditional Java stacks you scale exactly one to twelve; it's just for comparison. But the idea is that with a small memory footprint you are able to fit way more Quarkus application nodes than you can with traditional Java stacks, and I'll show you some memory comparisons when we get to the demos. So, some of the Quarkus benefits that we usually highlight for developers:
B: It's really easy to develop something, see the result you're after, and keep on coding — I'll show a couple of samples of that as well. And finally, because we want to be reactive too, we have ways to unify the imperative and reactive coding models, so you're not bound to the traditional approach when you're developing. You can go reactive, you can do imperative, and you can mix both.
B: I think it's a very good compromise, especially because when you want to go reactive, it's sometimes hard for people to make that mind shift in how they have to write code. So here you can combine both: do some things imperative, do some things reactive, and mix them together until you actually feel ready to go fully reactive all the time.
B: Now, going a little bit into the details. One of the things we have with Quarkus is zero config: there is a live reload feature. So after 25 years of Java, we're finally able to hot reload code without any external tooling or libraries that you have to add to your projects. Quarkus supports hot reloading out of the box, which is really awesome.
B: For development, a lot is based on standards, but of course we also extend those standards to fill in some of the gaps, and you'll definitely see that with MicroProfile in some of the samples I'm going to show you. Then there's unified configuration: you have a single configuration file called application.properties, and all of the configuration goes in there. That's usually better than applications or frameworks where you have multiple configuration files to figure out.
B: Let's say you have logging properties, OpenAPI configuration properties, and MicroProfile Config properties: all of them go into a single file, so it's easier to manage them in one place, and of course that works with all of the APIs that we support. Now, if we do a quick comparison — why do we say Quarkus is supersonic and subatomic?
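As a rough sketch of what that single-file configuration might look like — the property names below are real Quarkus configuration keys, but the application-level `number.prefix` entry is a hypothetical example:

```properties
# Logging configuration
quarkus.log.level=INFO

# OpenAPI document path (SmallRye OpenAPI extension)
quarkus.smallrye-openapi.path=/openapi

# Application-specific MicroProfile Config property (hypothetical)
number.prefix=BK-
```

Everything — framework settings and your own properties — lives in the one `application.properties` file.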
B: A plain Quarkus REST application on the JVM usually sits around 70 megs, and if you build it as a native image via GraalVM, the memory footprint goes down to about 12 megs, which is awesome — and now you probably start to see why you're able to fit more nodes into your cloud stacks using Quarkus than with anything else. If you compare, even for a simple Quarkus REST application on the JVM, you're able to fit four times more nodes using Quarkus native than with the traditional Quarkus-on-JVM approach.
B: Of course, not all applications are just REST. Sometimes you have REST plus CRUD, which might involve a database and Hibernate, and here the memory footprint goes a little higher — but the native approach is still only about 28 megs, with which you can fit more than six nodes compared with traditional Quarkus plus JVM. Against a traditional cloud-native Java stack, you can probably fit even ten more Quarkus native GraalVM nodes.
B: And it's not only the memory footprint that gets lower — the startup time drops as well. Quarkus native is really, really fast, because with the native stack what you have is a binary file that you execute, with no JDK inside; it's just binary execution of code.
B: So it's really fast, but even on the JVM the startup time is still considerably low — we're talking about two to three seconds, or even one second, which is great if you compare it with other technologies that usually take four, five, six, or ten seconds. That may not seem like much, but as you know, we developers like to be on the quicker side of things.
B: So if you can shave some seconds off my startup time, that's cool. But also think about it in the cloud-native way: remember, you have to pay for the amount of CPU as well. If you have to wait ten seconds for the application to start up, those are ten seconds of usage you're paying for where you're not actually serving requests.
B: Now, if you have to scale to a hundred nodes, it becomes even more relevant, because if you spend those seconds every time you start, or redeploy, or push more nodes, then the cost might matter to you. That's something to bear in mind. Next, unifying imperative and reactive: here we're using things like a REST API — in this case JAX-RS — together with MicroProfile Reactive Messaging.
B: The idea is that you can even inject publishers from your reactive messaging system — it can be Kafka, it can be ActiveMQ, it can be something else that we support — and publish those directly over a JAX-RS endpoint, which is quite nice. So you can combine both in the same application and use whatever technology fits your use case.
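A minimal sketch of that combination — a messaging channel exposed as server-sent events from a JAX-RS resource. The class and channel names are hypothetical, and the exact package of `@Channel` varies across MicroProfile Reactive Messaging versions:

```java
package demo;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.eclipse.microprofile.reactive.messaging.Channel;
import org.reactivestreams.Publisher;

@Path("/stream")
public class StreamResource {

    // The "books" channel is fed by Kafka, ActiveMQ, etc.,
    // depending only on configuration, not on code.
    @Inject
    @Channel("books")
    Publisher<String> books;

    // Imperative JAX-RS endpoint serving a reactive stream as SSE
    @GET
    @Produces(MediaType.SERVER_SENT_EVENTS)
    public Publisher<String> stream() {
        return books;
    }
}
```

The resource itself stays plain JAX-RS; only the injected channel is reactive.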
B: Finally — while this presentation is mostly focused on MicroProfile and Kubernetes, Quarkus integrates with a lot of other frameworks out there. A couple of other things I'm going to use, even if I'm not going to show them directly: for fault tolerance and reactive messaging I'm going to use Kafka, and for open tracing and metrics I'm going to use Jaeger and Prometheus. All of those are supported, and of course I'm going to deploy all of this into a Kubernetes cluster running on my box.
B: We have support for all of that, plus there are other things like Camel. There's also some Spring support, so even if you're used to Spring annotations, you can still use them in a Quarkus application, which is great. And there's Hibernate and Flyway support, and Vert.x, and Infinispan — a lot of other libraries are supported out of the box in Quarkus that you can use straight away. So, how does Quarkus work?
B: The idea is that we wanted to move a lot of what happens at runtime to compile time. That's how Quarkus starts up faster, and that's how we're able to support hot reloading. Think about how a traditional application works: the server starts up, a lot of classpath scanning happens to figure out how to start your application, and then the application starts — and that's usually what consumes your memory and your application startup time.
B: If you move that to the compile or build phase, you're able to figure out a lot of it beforehand and prepare your application to start in an optimized way. That reduces your memory footprint and, of course, your startup time. That's a simplistic way to look at how Quarkus works — there is a lot happening under the covers, but I don't want to go into too much detail here.
B: So let me skip all of that. The last bit — and I'm going to show it in the demos — is the native image support. All of the magic that Quarkus does also benefits native image support. You've probably heard about GraalVM: GraalVM is a polyglot VM, also open source, and the idea is that it's able to generate a native executable out of your application. This native executable isn't Java anymore, in the sense that it doesn't run on the JDK.
B: It doesn't have all the power that the JVM does, but it's very close in the sense that you get much of the same performance. If you look at some other presentations I've done, what usually happens is that the throughput of a JDK application can be better than a native executable's — but the native executable occupies less memory and uses less CPU.
B: So you're able to fit more nodes of that executable, and that fills in the gap of the better throughput performance of a regular JVM. In that sense there is no right or wrong here — of course you have to measure and compare — but this is a very interesting feature that GraalVM brought us, and Quarkus leverages it so you can create a native executable of your application and run it completely natively, with a very small memory footprint.
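For reference, a Quarkus project generated from the standard archetypes includes a `native` Maven profile, so the native build is (sketching from the standard Quarkus workflow, which assumes GraalVM's `native-image` tool is installed):

```shell
# JVM build (runnable jar)
./mvnw package

# Native executable via GraalVM
./mvnw package -Pnative
```

The resulting binary in `target/` starts without a JVM at all.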
B: Most of the work is done by us: all the libraries that we support, we made sure build natively. If there is something that doesn't work or doesn't link properly, you can have a look and try to figure it out yourself, but usually, if you bring it to the Quarkus team or open an issue on GitHub, someone will be able to help you out.
B: OK, so I'm going to go into some demos right now. Hopefully this introductory part was not too boring — I probably took a little longer than I wanted to, but now let me get into the code, which is the most interesting part. Let me just have a look at whether there are any comments over here... I think there is a question, so I can probably take that one right now, very quickly.
B: One of the questions we have is about the performance of REST services or database queries: along with the lower memory consumption, is it also faster in requests per second, or is it just memory consumption and start time? I think I already answered that: you have to measure. Usually the throughput of a JVM application is better than the native image's, but I always recommend that you test to make sure you get the performance you're looking for. OK, so now I'm going to show you the demo application.
B: Let me go quickly to my sample. This application lives on GitHub, so after the presentation you can go over there, download it, and play with it — and if you find any issues, please let me know. I've been working on this for the last couple of days, so maybe something's not quite there yet, but hopefully everything is OK. What we have here is a bookstore: a REST API to manage a bookstore, where you can create books and update books.
B: You can read books too, of course — the usual things. We have two microservices here: one is the book API, and the other is called the number API. The number API is just a simple microservice that generates a number, and in this case we're going to use it to generate the ISBN number of the book. So when you create a book, the book API will call the number API to get that number and create the book.
B: Each microservice here uses different aspects of MicroProfile. For instance, the number API — and I'm going to show you the code itself now — is not really different from a traditional application. Most likely you've seen this kind of code around; it's just a plain REST application using MicroProfile. As you can see, you don't even have any Quarkus imports here; what you have is just a CDI bean that injects a prefix.
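A minimal sketch of what such a resource might look like — plain JAX-RS plus MicroProfile Config, with no Quarkus imports. The class, path, and property names here are hypothetical stand-ins for the demo's actual code:

```java
package demo;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@Path("/numbers")
public class NumberResource {

    // Injected from application.properties via MicroProfile Config
    @Inject
    @ConfigProperty(name = "number.prefix", defaultValue = "BK-")
    String prefix;

    @GET
    @Path("/generate")
    @Produces(MediaType.TEXT_PLAIN)
    public String generate() {
        // A pseudo-random suffix stands in for real ISBN generation
        return prefix + (int) (Math.random() * 1_000_000);
    }
}
```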
B: It simulates some kind of delay in the traffic. And what we have here is the OpenAPI interface, where I basically just document this REST endpoint that generates the number. Then I use a couple of other classes here that most likely you won't need — they are just to showcase some different things.
B: MicroProfile, in this particular case, also gives you a config source where you can set up the configuration property that's used as the generation prefix of the book numbers being generated.
And if I now show you the pom file, you'll notice a couple of different things. There is a plugin here — the quarkus-maven-plugin — and you're always going to need it in every Quarkus application.
B: Remember when I mentioned that Quarkus does a lot of things at build time to save on runtime memory consumption and startup time? Well, you need the plugin to do that work for you beforehand, so you're going to have to set it up here. And you'll notice — probably one of the big differences from a traditional application — that for dependencies you have to use the Quarkus dependencies instead of the traditional JAX-RS dependency or OpenAPI dependency.
B: The idea is that the Quarkus dependencies deal with everything that has to be dealt with regarding native image generation — figuring out the class files, registering things for reflection, and so on — and that's why we have to implement those extensions on top. Other than that, it's not a big deal. We also provide a BOM on purpose, so you can just import everything from the BOM, and you'll see there are a couple of dependencies for Kubernetes here as well. Let me show you that too.
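A condensed sketch of the pom pieces just described — the BOM import, Quarkus extension dependencies, and the quarkus-maven-plugin. The artifact IDs are real Quarkus extensions, but treat the exact selection and versions as illustrative:

```xml
<dependencyManagement>
  <dependencies>
    <!-- Quarkus BOM: aligns all extension versions -->
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-bom</artifactId>
      <version>${quarkus.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <!-- Quarkus extensions instead of raw spec dependencies -->
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy-jsonb</artifactId>
  </dependency>
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-openapi</artifactId>
  </dependency>
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-kubernetes</artifactId>
  </dependency>
</dependencies>

<build>
  <plugins>
    <!-- Required in every Quarkus application: does the build-time work -->
    <plugin>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-maven-plugin</artifactId>
      <version>${quarkus.version}</version>
      <executions>
        <execution>
          <goals>
            <goal>build</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```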
B: Now, here you have the application.properties file where all the application configuration lives, and as you can see, I have some Kubernetes configuration: I set where my registry is, so when I build my Docker images they get pushed over there, and what the tag is.
B: And the service type — a load balancer — for Kubernetes. The idea here is that, as you can see in my pom file, I don't have any Kubernetes-specific YAML; I only added the Kubernetes dependencies on purpose, and these actually generate the Kubernetes descriptors that I use to deploy everything. You can see here the generated YAML file for Kubernetes.
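A sketch of what that Kubernetes section of application.properties might contain. The property prefixes (`quarkus.container-image.*`, `quarkus.kubernetes.*`) come from the Quarkus container-image and Kubernetes extensions; the registry, group, and tag values are made up for illustration:

```properties
# Where built container images are pushed (hypothetical registry)
quarkus.container-image.registry=localhost:5000
quarkus.container-image.group=bookstore
quarkus.container-image.tag=1.0

# Service type in the generated Kubernetes descriptors
quarkus.kubernetes.service-type=load-balancer
```

From these properties alone, the build generates the `kubernetes.yml` descriptors under `target/kubernetes/` — no hand-written YAML.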
B: If I show you the book API, it's a little more complex — more extensive — because there are more endpoints for us to see. And again, if I show you the imports, everything is MicroProfile: you don't see any Quarkus imports here, just org.eclipse.microprofile and javax.ws.rs. Oh, I haven't upgraded these to Jakarta yet, but you get the idea.
B: Again, if I show you the pom for this one, it has a couple more dependencies, and that's OK. For the book API, if I show the application.properties now, you'll see a couple more configurations: here I'm also passing in the environment variables that I want for my application. In the case of the book API, I have a database where the books are stored, so that means I have to set up a variable.
B: I want to set my JDBC connection to my database, I want to set up my number API REST address so the book API can reach it, and I want to set up my Kafka broker. So I can set all the environment variables, the container settings, and all the Kubernetes configuration right in my application.properties, meaning I don't have to go through really complicated steps or mess with the Kubernetes YAML file, which is sometimes a little harder to check.
B: If I go here to my target folder — Quarkus generates this kubernetes folder over here, and here is the YAML file for that one. A couple of extra things about the book API before actually showing you what the application does: we have our book API exposing a typical CRUD endpoint — create, read, update, and delete. This is just an interface I usually use; you don't have to do it that way.
B: It's really standard stuff: just calling finds and persists and updates and removes. You've probably noticed this ISBN generator here — this is the part that creates a bean that injects a REST client calling the number API to grab the number. Here we're using the MicroProfile Rest Client: this API client is just an interface that you use to model whatever REST calls you want to perform, and on it you use this @RegisterRestClient...
B: ...annotation. What it does is this: when you put it on an interface, it actually generates an implementation that calls the REST endpoint directly, without you having to provide the implementation. The idea is that you're able to call /numbers/generate and retrieve a number from it, or call the health endpoint of the numbers API directly from books.
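A sketch of such a Rest Client interface — MicroProfile generates the implementation; you only declare the shape of the remote calls. The interface name and config key are hypothetical:

```java
package demo;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

// The base URL is supplied in configuration, e.g.
// number-api/mp-rest/url=http://number-api:8080
@Path("/numbers")
@RegisterRestClient(configKey = "number-api")
public interface NumberApiClient {

    // Calls GET {base-url}/numbers/generate on the remote service
    @GET
    @Path("/generate")
    @Produces(MediaType.TEXT_PLAIN)
    String generate();
}
```

Injecting it with `@Inject @RestClient NumberApiClient client;` gives you a ready-made typed client.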
B: So even if my number API is down, and I'm not able to generate ISBN book numbers, that doesn't mean I'm not able to create the book. What happens is: I'm still able to create the book — it just gets a fallback number for now — and then, once my number API is back up, or my Kafka stream detects it's up, you can grab all the books that don't have an ISBN number yet and fill them in later.
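The "fallback number" behavior just described maps onto MicroProfile Fault Tolerance's `@Fallback` annotation. A sketch, assuming the hypothetical `NumberApiClient` Rest Client interface from the demo:

```java
package demo;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.rest.client.inject.RestClient;

@ApplicationScoped
public class IsbnGenerator {

    @Inject
    @RestClient
    NumberApiClient numberApi; // generated MicroProfile Rest Client

    // If the remote call fails (number API down), the fallback
    // method with the same signature is invoked instead.
    @Fallback(fallbackMethod = "fallbackIsbn")
    public String generateIsbn() {
        return numberApi.generate();
    }

    // Placeholder ISBN; the real one is filled in later,
    // once the number API is reachable again.
    public String fallbackIsbn() {
        return "TBD";
    }
}
```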
B: That's this part over here: you can use this @Incoming annotation that reads from the Kafka stream named books, receives the book from that stream, and then schedules an event that sets the ISBN and calls update on the book — and that's it. So, as you can see, I'm using Kafka as well, but with just a couple of annotations; you don't even see any Kafka imports here — everything is done through Reactive Messaging from MicroProfile as well.
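A sketch of that consumer side — the `books` channel is mapped to a Kafka topic purely in application.properties, so no Kafka API appears in the code. The class and payload type are illustrative:

```java
package demo;

import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.reactive.messaging.Incoming;

@ApplicationScoped
public class IsbnUpdater {

    // Invoked for each message arriving on the "books" channel;
    // the channel-to-Kafka-topic mapping lives in configuration.
    @Incoming("books")
    public void onBook(String bookJson) {
        // Set the missing ISBN and update the stored book (details elided)
        System.out.println("Filling in ISBN for " + bookJson);
    }
}
```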
B: OK, so there are probably plenty of other aspects of the code I wanted to show you, but I also want to actually run the application. Let me open a new tab here, and if I go to localhost:8080, we have a Swagger UI running — there is no UI for this application; everything is done through Swagger. So if I run this... that's not working.
B: That's not good — but I know why it's not working. I have my Kubernetes cluster running here, and if I look at all my deployments, I'll see that I have services for books and numbers, for the database, for Kafka, for ZooKeeper — but I don't see any pod running for book. I actually scaled that to 0 a while ago.
B: So let me see — OK, here you go. I'm going to start a replica for book, scaling it to 1, and hopefully, if I check again over here, I should have a pod for book starting up... and it's already running. So if I go back over here, I'll see that my book API is right there. And one of the things I want to show you...
B: ...is that if you go to Docker and run docker ps, you'll see that my book API started up — here is my book container running on Kubernetes — and I can run docker stats, passing in that container. I already said this beforehand, but now you can see it: this container is only using about 20 megs, because the book API is running as a native image on Kubernetes.
B: Yeah, I'm going to do this in about two minutes. OK, so now you see the container is over there, and if I show you the docker logs — or actually the kubectl logs — for my book API, you'll see that this one started in just about a hundred milliseconds. So it's running native. And I should also have — actually, let me do something else really quick: oh, I don't have the number API running over there, but I'm going to deploy it like this.
B: So basically what I'm doing here is just building my application — the number API — and you'll see that I'm saying: hey Quarkus, please build the image and please do the Kubernetes deploy as well. With this single command, I'm able to build my application, build the Docker image, and deploy it to Kubernetes — and that's it. So now, if I do my logs... sorry, wrong one.
B: So if I now run the logs on the number API, you'll see this one started in 1.6 seconds. It's a little bit slower because that one is in JVM mode, but it's still quite quick. So now you can see the difference between native and JVM mode — and with that, I think I'm done.