From YouTube: Best practices for modern Enterprise Java projects | Jakarta Tech Talk - December 2019
Description
Enterprise Java has come a long way. Let's not focus on the old, darker days, but on modern approaches to building cloud-native enterprise applications with Jakarta EE and MicroProfile.
In this session, we'll have a look at best practices for today's projects: how to use the best of Jakarta's and MicroProfile's APIs, how to build effective development workflows, and how to test efficiently and with joy. We'll have a look at effective tool usage for Maven, Docker, Kubernetes, and more cloud technology. This session is aimed both at developers who are just starting out with Enterprise Java and at engineers who want a refresher on how to do things in 2019.
Hi everyone, sorry about the delay. We had some technical difficulties, but that's what happens when you go live. Here we are today again with Sebastian Daschner, and he will be talking about, if you can read what's on the screen, best practices for modern Enterprise Java projects. So without further ado, let's just turn it over to Sebastian. Welcome, Sebastian!

Thank you.
Alright, so my name is Sebastian, born and raised in Munich, Germany, and I work for this company called IBM, where I do a lot of things on enterprise Java. This is the background that I've been in for many, many years: I do Java EE, Jakarta EE, MicroProfile, enterprise Java in general, and then a lot of the things that enterprise projects just need. I'm not a big fan of slides; I think code is more interesting, so I want to show a lot of code and do live coding here for this hour.
What I will show you now, right from the beginning, is what some best practices are for how to build projects. Another fun fact about me: I like coffee, so I will show a coffee shop example application that we're going to build in a modern way, with some best practices in mind, and I will show, well, a lot of tips and tricks on how to basically be a little bit more productive and how we can solve all these issues in a modern world. So, first thing I want to show you.
This is the Maven pom.xml of a project. You see the project is called coffee-shop, and one of the few things that I really, really care about when we build a project is how we start building our Maven, or Java, artifacts when we build our project. So this one is built by Maven; you can do very similar things with Gradle if you want, but a few things are important in Maven.
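As a rough sketch of what is being shown on screen (the exact versions are assumptions on my part, not read from the talk), the dependency section of such a pom.xml could look like this:

```xml
<dependencies>
    <!-- Both APIs are provided by the runtime and stay out of the WAR -->
    <dependency>
        <groupId>jakarta.platform</groupId>
        <artifactId>jakarta.jakartaee-api</artifactId>
        <version>8.0.0</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.eclipse.microprofile</groupId>
        <artifactId>microprofile</artifactId>
        <version>3.2</version>
        <type>pom</type>
        <scope>provided</scope>
    </dependency>
</dependencies>
```

The `provided` scope is what keeps the deployment artifact thin, as discussed next.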
So, first of all, if you look at the list of dependencies, you only see two dependencies: the Jakarta EE API and the MicroProfile API. How these work together, well, I will show in a second, but the most important thing for now is that both APIs, so both dependencies, are provided. That means they will not end up in our deployment artifact. And then, what else you can see here is a lot, a lot, a lot of test dependencies that are all included with the test scope. So that is important.
I will talk about testing a little bit later, and then, pretty much, that's it. What else we can see is some Maven plugins, and the only reason why I define these is basically because, well, I'm brave enough to use a recent Java version, actually not the latest version, not 13, but Java 12, and Maven still has to update its plugins. So if you run the test plugins and the war plugin, they will sometimes fail or issue some warning, so you manually have to update the plugin versions here.
Unfortunately, that is not covered yet by the Maven super POM, so you have to redefine just these versions. Other than that, that's pretty much all I want to have in my projects, not more. You know, complicated XML: life is too short to do XML, right? So that is pretty much what we don't want to include here. And then, how do I build this project?
Well, first of all, you know, I use Maven on the command line, so I can use things like, you know, mvn clean install, for example, or even simpler, mvn clean package, or only mvn package, without the clean phase, if you want. I typically issue this build, and what it will do is just build our project. Probably because of this streaming stuff going on, this is a little bit slow; let's do this again. Four seconds.
It should be a little bit less, maybe two seconds, depending on, well, it doesn't get faster than how fast your laptop is, but that's pretty much it. One of the few things that we want to care about is that our project builds fast, and this is thanks to the fact that we don't include a lot of plugins or lifecycle things in Maven.
If I build the project, it should build quickly. When it comes to testing, we're going to come back to that in a second. Now the question is: if I build this here, then I have a thin WAR file, a thin deployment artifact, and now I have to deploy this somehow, right? There are many, many ways to do that; we have to get some runtime that supports Jakarta EE and MicroProfile. What I will include here is Open Liberty to run this project.
You could use any server of your choice, and I run this locally and also in production using Docker, using Docker containers. So here is my Dockerfile, and that is already an interesting thing for a best practice: I want the Dockerfile also to be simple and easy. Now, you might ask yourself: okay, there are some other technologies out there where I'm not required to write a Dockerfile in order to build a Docker image.
B
There's
some
other
maven
plugins
out
there,
for
example,
but
the
reason
why
it
typically
just
write
a
simple
docker
file
is
well
if
you're
in
plain
either's,
as
you
know,
really
easy
three
lines
of
code
not
more,
and
it
won't
get
much
more
complex
than
that,
because
I
include
a
base
image
that
already
includes
my
runtime
and
JDK
or
JRE.
In
my
case,
Java
12
here
with
open
Liberty,
then
I
might
add
some
options
over
configuration
and,
of
course,
my
thin
deployment
artifact.
That's
it!
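A three-line Dockerfile in this style could look like the following (the image tag and file names are my assumptions; the `/config/` and `/config/dropins/` paths are the Open Liberty image conventions):

```dockerfile
# base image already ships Open Liberty plus a JDK
FROM open-liberty:full
# optional server configuration
COPY server.xml /config/
# the thin deployment artifact
COPY target/coffee-shop.war /config/dropins/
```

Everything heavyweight lives in the base image; the application layer stays tiny, which also makes rebuilds and pushes fast.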
That's another thing that I wanted to show you from scratch, so I don't want to include too much out of the box; I can just code this live. What I need in order to run this with Open Liberty is the server.xml configuration, and that is basically just telling the runtime which features to include. Now, that is a little bit of an optimization thing, if you want, because we could just go and say our runtime supports the Jakarta EE API and MicroProfile, and I could just, you know, include these two.
So, for example, I can say in the server, in what is called the featureManager: a feature, for example, this is still called javaee here, Java EE, and then microProfile, let's say. And this is now a little bit of an optimization: if I know that my project for now only includes, for example, JAX-RS, and maybe, you know, JSON-B, and maybe CDI, then I can just write these individually, just specifying the features that I want. It's a little bit of a question whether you care about it.
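A minimal server.xml in the spirit of what is being typed could look like this (the feature names are real Open Liberty feature identifiers; the exact versions shown are my assumptions):

```xml
<server>
    <featureManager>
        <!-- either the umbrella features ...
        <feature>javaee-8.0</feature>
        <feature>microProfile-3.2</feature>
        ... or only what the application actually uses: -->
        <feature>jaxrs-2.1</feature>
        <feature>jsonb-1.0</feature>
        <feature>cdi-2.0</feature>
    </featureManager>
    <httpEndpoint id="defaultHttpEndpoint" host="*" httpPort="9080"/>
</server>
```

Listing only the needed features is the startup-time optimization discussed next.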
The benefit you get is that the server then starts up a little bit faster, because the OSGi modules don't have to load everything that's included in Jakarta EE and MicroProfile, but only what you need. But that's a little bit a question of taste. Then you can build this, and now, again, the reason why I build this on the command line, just by issuing docker build and nothing else, is, you know, it builds very quickly and it's very easy. I just need a Dockerfile and then these Docker commands.
So that is one takeaway: I don't want to overdo the tool usage here, mostly regarding Maven. This is what I see a lot in projects: you have a pom.xml with a few thousand lines of code and then some plugins that build your Docker images and things like that, and ultimately, from my experience, you spend more time configuring these things in the pom.xml to get just the right image that you want, when you could just, you know, write three lines of code here and then fire up docker build and push the image.
If you want to build it locally, that's it. So for me, that's actually the easiest way, and also the fastest way, to build a Docker image, right? I mean, if I again call mvn package, if I want to rebuild that WAR file, and then I just issue the docker build again, then I'm done, right? You don't get much, much faster than this, so I think this makes sense. Now, let's finally order some coffee; that's, after all, what this example here is about.
So what I have is my example application, which I want to show you via HTTP and REST. If you're familiar with the EE APIs, this is JAX-RS; if you are familiar with the Spring world, it's very much like Spring REST controllers. So we have something like an HTTP resource that will be available under the URL /orders and then represented in a JSON format. We can have some, you know, coffee orders; we can get some orders.
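As a sketch of what such a JAX-RS resource typically looks like (the class names `OrdersResource`, `CoffeeShop`, and `CoffeeOrder` are my assumptions; the talk only shows the resource on screen), using the `javax.*` namespace of Java EE 8 / Jakarta EE 8:

```java
import java.util.List;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.validation.Valid;
import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;

@Path("orders")
@ApplicationScoped
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class OrdersResource {

    @Inject
    CoffeeShop coffeeShop; // hypothetical boundary, injected via CDI

    @GET
    public List<CoffeeOrder> getOrders() {
        return coffeeShop.getOrders();
    }

    @POST
    public void orderCoffee(@Valid CoffeeOrder order) { // Bean Validation on the payload
        coffeeShop.createOrder(order);
    }
}
```

JSON-B maps `CoffeeOrder` to and from JSON automatically; no manual serialization code is needed.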
So, for example, if I run this via docker run and say, I think I called it coffee-shop-temp, then I can just fire this up, and it starts my runtime. Super easy. But what I also want to show you quickly is that we have a few more optimized approaches to do this, and as another best practice, what I want to show you is basically how to be a little bit more efficient with your runtime. Curl localhost.
If, for example, I now go to my coffee-shop project, I can check out the orders, right? So once that is deployed, I see: okay, what are my coffee orders in the system? This is an empty JSON array because, well, there is no order yet. So we can create some order by posting a JSON here with, well, what do we need? Already, if you're familiar with the API, you see: okay, this includes Bean Validation in JAX-RS.
You know, this is just a nice story of Java EE, or Jakarta EE, that the specs are interoperable, so you can just mix and match these specifications. And now, for our example, we need to include a coffee type, you know, like a drink type. For example, we want to post some espresso here. So then we create this espresso order, and then, hopefully, yes, the order is in the system. This looks good, okay. Anyway, now we created some order here.
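The HTTP interaction being demonstrated could be reproduced with curl roughly like this (the port, context root, and payload shape are my assumptions based on the talk; adjust them to your deployment):

```shell
# list the orders; an empty system returns an empty JSON array
curl http://localhost:9080/coffee-shop/resources/orders
# → []

# order an espresso
curl -X POST -H "Content-Type: application/json" \
     -d '{"type":"ESPRESSO"}' \
     http://localhost:9080/coffee-shop/resources/orders
```

After the POST, the first request would show the newly created order in the array.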
That is a very basic hello-world example using JAX-RS, CDI and JSON-B. But, well, now a little bit more about the development workflow here, right? Because what happens if we would like to change something here? Then, well, we would need to rebuild, and especially we would need to restart the runtime. Even if we have a runtime that starts up very, very quickly. And that is actually another piece of good news: despite the, well, common knowledge out there, modern Jakarta EE runtimes are very, very fast.
Thanks to techniques such as OSGi and modern approaches of building on a modular runtime, you have servers that start up, you know, in seconds; you don't have to wait half an hour anymore for a heavyweight runtime. So this is very good news, but still, it's a little bit too slow if, every time we just change some code that I would like to test out, we would need to rebuild our project and then just restart the container, or however I run it, right? So, another approach.
What I want to show you, and this is just the example for Open Liberty, is that you might want to build an approach where you can minimize that turnaround time, and what I want to show you is a Maven plugin that basically adds support for a development mode for Liberty. Now, what I said before is: don't overuse Maven. So what is this plugin doing here, right? But in this case I say: okay, just for the development workflow.
This plugin has a benefit, and luckily it doesn't slow down my build, because in the normal mvn package phase it is not executed; it doesn't do anything, which is good. So in my case, what can I do? Let's stop the Docker container again. If I now run mvn liberty:dev, which is a development mode for Open Liberty, then this will start up Open Liberty in a development mode where it will just watch for my file changes and then do a very quick hot reload.
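A minimal plugin declaration for this could look like the following (the version is my assumption; dev mode arrived with the 3.x line of the plugin around late 2019):

```xml
<build>
    <plugins>
        <plugin>
            <groupId>io.openliberty.tools</groupId>
            <artifactId>liberty-maven-plugin</artifactId>
            <version>3.1</version>
        </plugin>
    </plugins>
</build>
```

With this in place, `mvn liberty:dev` starts the server in dev mode, while a plain `mvn package` ignores the plugin entirely, so the regular build stays fast.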
So, there are a few technologies out there that support this now, and whatever technology or runtime you choose, I think it's just important to have such a fast feedback loop: basically including new stuff, changing code, changing some configuration, and then just having these changes be reflected automatically, on the fly. So what I want to show you now is: I want to code a little bit more, and especially I want to show you the combination of Jakarta EE and MicroProfile, because, I think, in modern EE.
That is another best practice: to just mix and match from these standards and then include things like, you know, MicroProfile Health or MicroProfile Config, because the background of these technologies is very similar. So both are based on Enterprise Java, and the programming model is very nice and very familiar, right? We also have a declarative approach of doing things. And what will I show you now? So, you see, this still works; let's order some coffee again, and the order is in the system.
What I want to show you now is basically enhancing my project with some, some more stuff. So, for example, I have my MicroProfile API here, so I can use, you know, the API in my code. Besides that, what I can also do is add features on the fly. So I can, for example, say: please add microProfile-health-2.0, and it will hot reload.
Just like this example, and then, you know, it installs that feature on my runtime while I'm coding, while this is running. So you saw there's something happening here, and then, just very quickly, the application is restarted, and now, apparently, I can also access this resource that you might know if you have checked out MicroProfile: /health, which is a default resource that just says, okay, it's, you know, UP. That is the default response, with HTTP 200 OK. Well, that's not quite enough.
I actually want to see whether my application is up and running. So, for example, I call this a health check, and then what I do is implement a HealthCheck from MicroProfile; that's the way to write this. Then I implement just this call() method, which could check for, you know, whatever I would like to have here. Named: I call this "coffee-shop-up", and build this, so it will include a default check for this.
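The health check being written could be sketched as follows (the class name is my assumption; the annotations and builder are the MicroProfile Health 2.0 API):

```java
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Readiness;

@Readiness
@ApplicationScoped
public class CoffeeShopHealthCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // in a real application this would probe databases, queues, etc.
        return HealthCheckResponse.named("coffee-shop-up")
                .up()
                .build();
    }
}
```

The `@Readiness` qualifier marks it as the readiness probe mentioned next; the result shows up under /health as a named check.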
For this example, I defined this as a readiness probe. And now the interesting news is that this will be updated on the fly, very, very quickly, and I now see this check being up there, whatever technology you use, right? So this is Liberty with a MicroProfile health check, but whatever; the point is, as a developer, you want to be sure that your turnaround cycle is very, very short, that you immediately see some update, right? So, I can code some, some more stuff here.
I say: okay, for example, include some data, say, you know, "Hello World", whatever you want to do, and then it just very quickly updates that, and then you have it right here. So for me, when I'm programming, I have a rule for how fast my technology, or whatever I'm doing, should react, and I call this the coffee rule. For example, if I change something here, you know, from "Hello World" to "Goodbye World", and then I do some action, or wait for something, I want to take...
...you know, my cup of coffee; I take a sip; I place it back down; and then it needs to be finished, right? If it takes longer than that, it's too slow. So if, you know, you build your project, if you fire up your test suite, if you do whatever, if you wait for the reload, it needs to be faster than that, ideally immediate, right? So I think that is crucial in order to, you know, just stay productive, because what happens? We are humans, and we just get easily distracted, right?
So, even if we have to wait just more than five seconds, we get distracted, right? So I have to wait, I have to wait, and then what happens? I check Slack, I check social media, I take my smartphone, right? And then you're out of the flow experience. So you just want to stay in the flow and make sure that everything is updated quickly, right, or, whatever you do, that you get fast feedback. Now, let me show another thing: MicroProfile Config. Let's do this, 1.3. So MicroProfile Config is just another...
...MicroProfile technology, where I can have some injectable configuration. So if you're into EE and CDI, you might say: well, you can do the same with CDI producers. Yeah, that's totally true, but you can also just use this technology and then save this class and a few lines of code by just using this qualifier, @ConfigProperty, where I can just inject, for example, the version of your application, and then say: instead of this, I just want to emit the version here, and then my application version will be...
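The declarative injection being shown could be sketched like this (the property name `application.version` and the class it lives in are my assumptions; `@ConfigProperty` is the real MicroProfile Config qualifier):

```java
import javax.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

public class VersionedHealthCheck {

    @Inject
    @ConfigProperty(name = "application.version")
    private String applicationVersion; // resolved from the config sources, e.g. "1.2.3"
}
```

Backed, for instance, by a `META-INF/microprofile-config.properties` entry such as `application.version=1.2.3`, which is exactly the default config-source location discussed next.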
Okay, once the status is updated... hello? Yes, we had to wait for the auto-save from the IDE, and then you see it now includes the application version, which is 1.2.3. Okay. Now, you might ask yourself: where does this version come from? Well, there are a few default config sources in MicroProfile Config. For example, property files is one: there is a default convention location, META-INF/microprofile-config.properties, where I can just include that, and then, you know, that is configured. Another one is environment variables, where you could actually override this.
So this is very helpful for containerized environments, right? And another piece of good news about this dev-mode plugin approach is: if you update the properties file, which I just did, from 1.2.3 to 1.2.4, then this change will also be immediately reflected, because, again, this is a convention path, so the plugin knows about this path, and, you know, this is just updated. So whatever you update here in your code, whether it's Java code, whether it's configuration, whether it's a pom.xml dependency or a runtime feature, all of that I want to see reflected immediately, right?
So, let's do a little bit more live code with this config, because this is just a static configuration where I say: okay, this string here is always the same, with a declarative lookup. But I can also do a programmatic lookup by injecting this config, and in order to do that, let me implement another feature.
Let's say the coffee orders here, as you can see, also have a price attached to them. By the way, in the real world, never do money calculations with floating-point numbers, right? This is only an example; don't do this at home. But I want to add some pricing information here as well, and, of course, I don't want the user to set the price; this should be configured, well, depending on your drink type, right? Like, you know, an espresso is a bit cheaper than a latte, and things like that.
Let's write this, and then, in my coffee shop, I, well, of course, want to inject the price calculator here, right? And then, well, I want to calculate some price. So, for example, in order to set the price, I want to calculate the price here: for example, calculatePrice, which calculates the price for a coffee order, and this returns not void but double, for a given coffee order, for example. And then say: okay, now, for this order, it depends on the price how expensive it is, right? So, order...
Sorry, it depends on the type how expensive it is. Then say, do something like getConfiguredPrice, right? So, look up the configured price for this coffee order. And now, how do we look that up? Basically, we can go to the ConfigProvider and manually look up this config, my MicroProfile config here, and then say: okay, now, programmatically, please get this value of, well, let's have a look here, of the coffee prices. So coffee-prices-dot-something. Let's do this: "coffee-prices." and then, well...
...whatever is sent here, so the type name, and also toLowerCase. Well, it's not really readable; let's outsource this into a variable. And now we want to have this as a double type, right, so that should be injected, or looked up, as a Double, and then this hopefully will be configured here, right? So, by the way, I can just keep, keep coding here on what I just did, and then, you know, it will recompile and recompile, and maybe fail because of compilation errors and things like that, but it doesn't matter.
I can just continue doing this. Once the compilation is successful again, I say: okay, now this should work, and then I can calculate the price by using this injected price calculator. And then, hopefully, we can, well, try this out again. So, let's try this out: I will post another coffee, I will order another espresso, and now I want to see that, actually, my coffee here, let's use this URL, that my coffee now has a price attached to it. Yes, this looks good.
So
what
do
you
think
that's,
cheap
or
not
depends
I
guess
where
you
live,
it's
assumed
that
as
in
euros
and
now
for
the
espresso,
it
was
so
in
so
much
so
and
so
expensive
right.
Let's
try
this
out
again
with
another
drink.
Let's
say
this
is
a
latte.
So
now,
in
this
case,
I
want
to
check
whether
this
price
is
being
reflected
correctly
as
well
and
yes,
three
euros
here.
So
this
works
so
depending
on
the
type
we
can
have
a
programmatic
look
up
here
for
micro
profile
as
well.
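Putting the programmatic lookup together, the price calculator being live-coded could look roughly like this (the `CoffeeOrder`/`getType()` names are my assumptions; `ConfigProvider.getConfig()` and `Config.getValue(...)` are the real MicroProfile Config API):

```java
import java.util.Locale;
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.config.Config;
import org.eclipse.microprofile.config.ConfigProvider;

@ApplicationScoped
public class PriceCalculator {

    public double calculatePrice(CoffeeOrder order) {
        return getConfiguredPrice(order);
    }

    private double getConfiguredPrice(CoffeeOrder order) {
        Config config = ConfigProvider.getConfig();
        // e.g. "coffee-prices.espresso" or "coffee-prices.latte"
        String key = "coffee-prices." + order.getType().name().toLowerCase(Locale.ROOT);
        return config.getValue(key, Double.class);
    }
}
```

Backed by properties such as `coffee-prices.latte=3.0` (the three-euro latte from the demo; the espresso value would be configured similarly).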
So, as you just saw, this is a very nice development workflow, just keeping you, you know, in this flow mode: just changing something, seeing the changes being reflected, and so on and so forth. So for me, that is another best practice: to just keep these turnaround cycles short. And, well, it's not just about testing something manually, as I just did, you know; I ordered some new coffee and I checked it. No, the same is actually true for the proper tests that you run. So, let's talk a little bit about testing here.
I have some tests included here, and this is just very, very basic. For example, I include some basic unit tests that use JUnit 5 parameterized tests, as you saw in my project. This is just, well, I would say, the test technology that's used in most of the projects that I see: JUnit, nowadays JUnit 5, with Mockito, and ideally AssertJ. I like the assertions...
...the readable APIs of AssertJ here, and we can just, you know, write very basic tests. So, the topic of testing is a bigger one, and a complex one, in general. What I really, really care about is that the whole test suite runs just very quickly, and I mean instantly. So if you have a more complex project, then, you know, you might have hundreds of test classes, and they need to run in less than a second.
If you have plain JUnit with the JUnit runner, you can literally run hundreds of tests in a few milliseconds; JUnit is very performant and very fast if you do this at a code level. And, in general, I just want my test suite to run quickly. Now, for code-level tests you can achieve that; the most important thing is that you avoid, as much as possible, test runners that, you know, try to start up the whole world.
So, for example, if you start embedded containers, things like, if you're in the Spring world, Spring context tests, things like Arquillian, things like CDI-Unit. There's a reason why I'm not a big fan of these technologies; the TL;DR for that is, well, they typically add a lot to your test runtime, so the tests typically run slowly. You don't see this impact immediately; you see it only in more complex projects, once you have many, many of these tests, and this is how you end up with long build times.
Then it's actually more and more important that you test the end-to-end integration, for example how your services interact with each other, whether this communication works as expected, right? And for that, it's not enough to just have simulated tests that fire up some simulated environment; you have to run the tests against the actual running environment, or the same environment that would later on run in production. So, for example, if you run your application in Docker containers, it's crucial that, you know, you test the same.
The Docker containers from the same Docker images that you test, for example, locally, or that you test in your CI/CD pipeline, and so on and so forth, in order to reflect, you know, the correct setup here. So, in order to do that, let me show another test that I have. I call this IT, for integration test, so that is CoffeeShopIT; that's another Maven best practice.
If we call our tests something like *Test, T-E-S-T, they will be executed by Maven Surefire out of the box, and if we call them *IT, for integration test, then they won't be executed by Maven Surefire; they will not be included in mvn package or one of these phases, but only if you explicitly run the ITs. So if you configure something in your pom.xml where you manually exclude some patterns, that's actually not necessary; you can just use the convention, and that will work as expected as well.
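The convention is wired to the two Maven test plugins: Surefire picks up `*Test` classes by default, while Failsafe picks up `*IT` classes. A minimal sketch of declaring Failsafe (the version is my assumption):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.22.2</version>
</plugin>
```

With that, `mvn package` runs only the fast unit tests, and the integration tests run only when invoked explicitly, as shown later with `mvn failsafe:integration-test`.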
So, what I have here with this IT, basically, I want to test, you know, a very basic smoke test of whether my application is up and running. So that should be a test with a more end-to-end scope, where I connect against my actual running application and just, you know, test this out: I could connect to it, and then, you know, create some coffee orders, connect again, and verify whether the orders are in the system correctly, and all these things.
So that the proper production code that, for example, emits your HTTP resources and your JSON works as expected. In other words, you quickly want to fire up these end-to-end system tests, integration tests, against your locally running environment to just quickly verify that, and with the approach that I showed you, this is actually possible. So, for example, I have this application, and another nice feature of this plugin here is: if you hit Enter, it will just run your tests, and, first of all, it will run the Surefire unit tests.
So it runs this test, which runs very quickly, and then it will also fire up the ITs, the integration tests. My typical Maven build doesn't do this, but now I see: okay, an assertion fails. Why? Because, well, I reconfigured this before. So I say: check that the system is up, which means, well, check the health check resource of MicroProfile Health and see whether that's up and running, and then also get the application version that I included and check whether that's equal to 1.2.3.
Well, you noticed we didn't do this here; we changed it to 1.2.4. So now, let me actually change it back and run the tests again, and now everything is green. So what this test does is basically very simple: it's a client that connects to my locally running application and says: localhost:9080, please connect, in this case, to the health check, for "is the system up", whether the system is actually up and running, or, here, get the application version that is included in my health check response.
Just for me, right, I could extend this system driver here to say: please create a coffee order, and then it will post, you know, this and that to this URL, then get the order for the following ID, and so on and so forth. So again, with this approach, I can just run this test very, very quickly, because it's plain JUnit; I still don't fire up anything that runs here, but I can just connect to something that is already up and running.
So,
for
example,
if
I
would
like
to
run
this
on
the
command
line,
I
say
may
even
failsafe
integration
test.
So
this
is
the
way
how
to
manually
and
explicitly
run
di
TS
and
not
the
show
fire
tests.
Actually,
so
you
will
see
hey.
This
runs
the
integration
test,
and
now
this
runs
the
same
thing
connects
to
localhost
and
checks,
my
application
that
is
running
locally
so
again,
I
want
this
quick
verification.
So
let's
try
this
again
if
I
say,
for
example,
I
have
my
property
file
I
want
to
change
or
change
this
back
here.
I want to change this in this regard, and then what I do is run these tests again, and then this needs to fail, right? And then I can again check either my test, or change my test, or change the application properties, whatever you want to have, and then just quickly have this verification. So again: instant verification. That is important in order to stay in the development flow. For the tests, what else is important to achieve that? Typically, I care a lot about this:
that you separate the test lifecycle from the test environment lifecycle. Why? I think this is the easiest way to manage. So, I know there are a lot of fans out there of complex test frameworks where you can fire up a lot of stuff using Java APIs, and while this might be handy, in order to create, for example, Docker containers and fire them up in my test, the issue I always have is: if you annotate, you know, a JUnit 5 extension, or some test runner for JUnit 4...
...then you always make the lifecycle more complicated, and you bundle these lifecycles together. So one test class then typically, you know, wants to fire up a test environment, and does so, and thus it runs, you know, much slower, because then, you know, once you start the environment, it just begins to start something up. Or sometimes, but unfortunately not always...
...it checks whether something is already running and keeps it running, so then it will be faster. But then again, for me, it's actually harder, or more cumbersome, to configure and manage all these things here in this test class. Rather than that, I would like to separate the lifecycles, and for me it's still easier to run, you know, my environment either, you know, using Maven or using some shell scripts.
You know, a shell script with docker run three times, or a docker-compose if you want, or even a Kubernetes cluster locally, if you want, right? But start it up first, and separate it from the lifecycle of the tests, because once I am in the development flow, I just want to fire up my tests, and I want instant verification of whether it works, and not wait for even five seconds until that environment is up and running.
So for me, that's the important thing here. Now, we have some tests already, so another, you know, best practice: have proper test coverage with code-level tests, such as my unit tests, and have proper coverage with end-to-end-level tests. These might be included in the project, or, once it becomes more complex, you might have a dedicated system test project, maybe under the same repository, where you literally connect against, you know, a running application and fire up some proper use-case tests and proper acceptance tests...
...of whether your application does the expected things. And this test suite might already fire up a few more containers, a few more applications, depending on what you do in this regard. All right, I can show you a little bit more of MicroProfile technology, if you like. So, for example, what we have, let's go back to this again: all of that is included in these APIs that I have. MicroProfile, that's the umbrella one, so it includes, you know, Health, Config, and a few other things.
Let's say I want MicroProfile Metrics; that's another cool one as well. So this, you know, from my experience, really adds value on top of Jakarta EE, in that, okay, you don't have to implement, for example, the Prometheus monitoring format yourself. You know, you can just do this, not automatically, but using these APIs, and you don't have to implement it yourself. So let's try this: when I go to localhost, /metrics, that's the default.
Because typically you would see some username and password prompt, and so on and so forth. Now I say: I don't care in this test example, just set the authentication to false, and again I can reconfigure stuff here on the fly. Now what you see is the cryptic response in the Prometheus format. This is a plain-text, line-based format carrying some default metrics, such as JVM information: memory, CPU, threading, garbage collection, and all kinds of stuff.
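On Open Liberty, for example, disabling authentication for the /metrics endpoint is a one-line server.xml change; a sketch, assuming the mpMetrics feature is enabled (the feature version shown is illustrative):

```xml
<server>
    <featureManager>
        <feature>mpMetrics-2.2</feature>
    </featureManager>
    <!-- demo/test setting only: exposes /metrics without credentials -->
    <mpMetrics authentication="false"/>
</server>
```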
B
That might already be interesting in technical terms, if you want to monitor your application, but it gets even more interesting if you include some business metrics: questions that the business cares about, such as how many coffees have been ordered, or how much revenue we make by selling so many espressos and cappuccinos and whatnot. So it is a little more interesting to include some metrics on a programmatic API level, saying, for example, we could inject some metrics here and then manually increase them.
B
You know, some counters; we also have gauges and histograms, depending on what you want to do. Or there's another, simple way, a declarative approach again, of just counting method invocations. That's probably the easiest way, saying: okay, this should be coffees_total, the total number of coffees, for example. This is just a very basic counter that works on this method. Then, well, let me restart this again, and if I post a new order, let's do a latte, why not, to the system...
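A minimal sketch of both styles with MicroProfile Metrics; it needs a MicroProfile runtime to execute, and the class name, metric names, and amounts are illustrative, not taken from the talk's repository:

```java
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.eclipse.microprofile.metrics.Counter;
import org.eclipse.microprofile.metrics.annotation.Counted;
import org.eclipse.microprofile.metrics.annotation.Metric;

@ApplicationScoped
public class CoffeeOrders {

    // programmatic style: inject a counter and increase it manually
    @Inject
    @Metric(name = "coffee_revenue_cents", absolute = true)
    Counter revenue;

    // declarative style: count every invocation of this method
    @Counted(name = "coffees_total", absolute = true)
    public void orderCoffee() {
        revenue.inc(350); // illustrative amount per coffee, in cents
    }
}
```

Both metrics then show up automatically on the /metrics endpoint in the Prometheus format.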
B
...it just counts that method invocation, and then a Prometheus instance would scrape this metric from this very page, store it in its time-series database, and then you can have some fancy graphs on dashboards and things like that to display it. Again, the nice story is: if you use this combination of Jakarta and MicroProfile Metrics, it's very easy to implement. You don't have to write all of these formats yourself. You can just configure the runtime to include it and have a little bit of code here for...
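On the Prometheus side, scraping that endpoint is a small entry in prometheus.yml (job name, target address, and interval here are assumptions):

```yaml
scrape_configs:
  - job_name: coffee-shop
    metrics_path: /metrics        # MicroProfile Metrics default endpoint
    scrape_interval: 15s
    static_configs:
      - targets: ["coffee-shop:9080"]
```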
B
...what you want to do programmatically, and you're done. That's basically it. Now let's continue a little bit more on how to run our project in a more production-like setting. What I want to show you next is the deployment side. We already covered Docker, so we have a Docker image of our application here. What else you probably have in your project is some more sophisticated way to run containers, so typically we have some kind of container orchestration.
B
Nowadays you use Kubernetes, for example; there are things like Docker Compose, but what I want to show you is a Kubernetes example. So I have kubectl, and I have a Kubernetes cluster that actually runs on the cloud. Where do you get Kubernetes clusters from? Well, there are many, many ways. For example, you can install one locally; you can have Minikube or Minishift if you want; or you can have a managed Kubernetes service. That is typically what I use, so I use the IBM Cloud for managed Kubernetes.
B
You can also have a managed OpenShift cluster if you want. This one actually runs not on my laptop but on some cloud, but it doesn't matter. Actually, let me do this again: let me delete the resources, just to show you the full example, because what I deploy is just this way of deploying a service and a so-called deployment, which basically says: take this Docker image, deploy it, and create what is called a pod with one replica.
B
So one pod, one running instance; deploy it and make it accessible through a load balancer, a so-called service, for this coffee shop application, and just deploy that. The nice story about this technology is that you get what is called infrastructure as code. The same is true for a Dockerfile; that's also infrastructure as code: you define as code what the runtime is supposed to look like. The Dockerfile does that for an individual running container, and here we define what the whole environment looks like: your test environment, your production environment.
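The two resources described here, a Deployment with one replica and a load-balancer Service, might look roughly like this (image name, ports, and labels are illustrative, not the talk's exact files):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee-shop
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coffee-shop
  template:
    metadata:
      labels:
        app: coffee-shop
    spec:
      containers:
        - name: coffee-shop
          image: sebastian/coffee-shop:1
          ports:
            - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-shop
spec:
  type: LoadBalancer
  selector:
    app: coffee-shop
  ports:
    - port: 80
      targetPort: 9080
```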
B
You just take the declarative approach of specifying it here, and then the technology will make sure that this is the case. Of course, this is so much more productive than calling support and ordering another server that needs to be installed physically, then installing Java, the runtime, and so on and so forth, plus some configuration. In this world, we just run our containers and it works. So now this cluster is empty: kubectl get pods shows no running applications. Now let's change that.
B
We kubectl apply everything that's in this folder, so we apply our nice YAML files to basically create all of this. Then it will just start up the application, this coffee shop; we will have a service available, what you just saw before; it starts up the application, and we can access it. This looks good. Let's try this out. Once my application is up and running and ready to do some meaningful work (again, we use the MicroProfile health check to verify this), I can actually access it. So let's do this.
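The flow shown here boils down to a few commands (assuming kubectl is already pointed at the cluster and the YAML files sit in the current directory):

```shell
kubectl get pods          # empty cluster: nothing running yet
kubectl apply -f .        # create the Deployment and Service from the YAML files
kubectl get pods --watch  # wait until the coffee-shop pod is Running and Ready
```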
B
Let's order some coffee on the cloud. I have a magic script that gets me the IP address of my public cluster, and then I say, for example, coffee shop; or let's check the health first, if you want. Okay, that's good: it's up and running, and I see origin one, two, three, and so on and so forth. I can even order some coffee: it's the same URL again, now with orders, and I POST...
B
...let's do some latte again, here, to this URL, and then: nice, 201 Created. This works, and now I can go back and check my coffee orders, and there it is. So that's a nice story: I can order some coffee here. Well, another thing that I want to show you, for the testing approach I talked about before: I showed you this test with...
B
...the cluster IP, and then I say: well, I also have coffee-shop.test.port; I set this to 80, that's my gateway here, and then I can just run this against my Kubernetes cluster, against the application that runs there. And that works. Okay, actually: never trust a test that has only ever been green. Maybe that still runs against the localhost version, who knows?
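One way to make the same system tests target either localhost or the cluster is to resolve host and port from system properties, as the speaker does with coffee-shop.test.port. A self-contained sketch (property names and defaults are illustrative, not from the talk's repository):

```java
public class TestTarget {

    // Resolve the base URI of the system under test from system properties,
    // falling back to a local development default.
    public static String baseUri() {
        String host = System.getProperty("coffee-shop.test.host", "localhost");
        String port = System.getProperty("coffee-shop.test.port", "9080");
        return "http://" + host + ":" + port;
    }

    public static void main(String[] args) {
        System.out.println(baseUri()); // in a fresh JVM: http://localhost:9080

        // e.g. run with -Dcoffee-shop.test.port=80 to target the cluster gateway
        System.setProperty("coffee-shop.test.port", "80");
        System.out.println(baseUri()); // http://localhost:80
    }
}
```

The test classes then never hard-code the environment; CI can pass the cluster address on the command line.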
B
Let's misconfigure the port here, and then, after a few seconds, we hopefully run into a timeout, because that is the wrong connection, and it says: okay, now this doesn't work. So it actually does test the application that runs in my cluster, and that's why I can reuse these tests. That is another nice way to make your tests a little bit more effective: I can reuse the effort that I've put into these tests and recycle them.
B
Another important thing I always want to mention: if you look at the material that I have on testing, you will see this as well, but I really care that for the tests you write, you implement at least some proper code quality, even if it's just tests and just the test scope, because it will make your tests much, much more maintainable. For example, in this test class I only say: assert that the coffee shop system is up, or that the application version equals something.
B
How you get the application version is implemented by a delegate, by this test driver. It's even more obvious if you ask: okay, how do I create a coffee order, and how do I verify that the order is in the system? You don't want to leak lower-level HTTP or JSON details into this test class, because otherwise, what happens once you have lots of test classes and test methods? Well, if something changes, you're screwed: you have to throw away or maintain a lot of test classes.
B
What you can do here instead is change only the lower abstraction, how it is implemented, for example how to get the application version; you just change the code there, and you don't have to modify all of your test classes. This is very, very important, in general, to keep your tests maintainable. The same is true for test data: just care about these abstraction layers, be aware of which abstraction layer you're currently in, and where it makes sense to implement something.
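The delegate idea can be sketched in plain Java. Everything below is illustrative (the names and the stubbed HTTP calls are not from the talk's repository), but it shows how the test class stays on the domain level while one class owns the wire details:

```java
// Domain-level test driver interface: tests speak only this language.
interface CoffeeShopSystem {
    boolean isSystemUp();
    String getApplicationVersion();
}

// A real implementation would issue HTTP requests and parse JSON; this stub
// keeps the sketch self-contained and marks where that code would live.
class HttpCoffeeShopSystem implements CoffeeShopSystem {
    private final String baseUri;

    HttpCoffeeShopSystem(String baseUri) {
        this.baseUri = baseUri;
    }

    @Override
    public boolean isSystemUp() {
        // real code: GET baseUri + "/health" and inspect the response status
        return true;
    }

    @Override
    public String getApplicationVersion() {
        // real code: GET baseUri + "/" and read the version from the payload
        return "1.0";
    }
}

class CoffeeShopSystemTest {
    public static void main(String[] args) {
        CoffeeShopSystem system = new HttpCoffeeShopSystem("http://localhost:9080");

        // the test states only domain-level expectations; if the HTTP or JSON
        // details change, only HttpCoffeeShopSystem needs to be touched
        if (!system.isSystemUp()) throw new AssertionError("system not up");
        if (!"1.0".equals(system.getApplicationVersion())) throw new AssertionError("wrong version");
        System.out.println("ok");
    }
}
```

If the endpoint or payload format changes later, every test class keeps compiling; only the driver is updated.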
B
So that is basically how we can run our best-practice project in a Kubernetes example. For Kubernetes, or containerized applications in general, maybe you've heard of the twelve-factor principles; what we did basically applies them, and it is sufficient if you run workloads in this containerized way: you can reconfigure things here, and you can rebuild your application very, very quickly.
B
So, for example, assume this approach is not enough for the dependencies that you have. Let me stop my running workloads here. Again, say you want to include, or rather need to include, another dependency. I always highlight this because, in my opinion, the provided APIs are sufficient for most projects, but depending on what you do, sometimes you do have some other dependencies, either in your runtime or in your code, where you just need some more stuff.
B
So, for example, if your business use case is to do some image processing, then of course it makes sense to include the dependency for this image processing. That is a real use case requirement behind some dependencies that you add. But still, what you should do in order to keep your build productive: you should add them as a provided dependency, so that you don't clutter up your deployment artifact, and then add them in your runtime. That, for example, is configured here in the Dockerfile.
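In the pom.xml, such a dependency gets provided scope, so it is compiled against but kept out of the WAR (the coordinates below are hypothetical placeholders):

```xml
<!-- compile against the library, but don't package it into the WAR;
     the runtime image provides the JAR instead -->
<dependency>
    <groupId>com.example</groupId>
    <artifactId>image-processing</artifactId>
    <version>1.0</version>
    <scope>provided</scope>
</dependency>
```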
B
What you do is add this dependency, this JAR, in a lower Docker image layer, because all of these lines, all of these commands, result in individual Docker image layers. This is how Docker images work, with the so-called copy-on-write file system. Then what you do, just as an example, is add a Postgres driver for a database. This would be a dependency that you don't even use in the pom.xml, but only in the runtime.
B
Then I don't want to include this in my WAR file, because that just adds a lot of weight, a lot of ballast. Instead I say: please add this at a lower level, in a lower image layer, and just have it included with the libraries. The same is true for other JARs if you want, and you still maintain these layers.
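A Dockerfile along these lines might look as follows; the base image and paths assume Open Liberty conventions and are illustrative, so adjust them for your runtime:

```dockerfile
FROM open-liberty:kernel-java8-openj9

# rarely-changing layers first: server config and runtime-only libraries,
# e.g. a JDBC driver that is not part of the application artifact
COPY server.xml /config/
COPY postgresql-42.2.9.jar /opt/ol/wlp/usr/shared/resources/

# the thin application WAR changes on every build, so it comes last;
# rebuilds and pushes then only touch this final layer
COPY target/coffee-shop.war /config/dropins/
```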
B
So, for example, you noticed this when I ran my Docker build before: I had these layers. Now that is being updated as well, but still, if I change my application, for example if I rebuild it using Maven package, and then rebuild the Docker image here, you notice that it will not change the other layers; those are only cached, and it will only actually execute the last step. It gets even more interesting.
B
This is just a build-time saving of maybe a few seconds; it has much more of an impact if we actually push something over the wire. So, for example, let's do a docker push. I built this under another name, something with sebastian, so I can actually push it to a public Docker repository, and now I say: okay, I want to docker push this.
B
What it does: it analyzes the image and sees that actually only the last layer has been changed, so it literally just pushes whatever the last layer contains, which in my case is a tiny WAR file of a few kilobytes. Then it only pushes those few kilobytes, and regardless of how big the actual image is, it doesn't push all of it.
B
You know, HTTP handling, how to do database transactions. It also fits very well with the abstractions that we now have in a containerized world of cloud-native technology. For example, if you run it in Kubernetes, then the platform implements how to do the service lookup, the load balancing across servers, and how to do the networking connection; if you have service meshes, with Istio and other things, then those implement how to do mutual TLS, for example, or how to do the metrics scraping at a more manageable scale.
B
So you don't clutter up your code with all of these details, because your artifact does not need to include that; it's not a concern of the Java code you write. Keep the Java code clean for the business logic. I think this makes your project more maintainable and, especially, the builds more efficient. So, some key takeaways.
B
What I care about is that you use known technologies and APIs. I think this is one of the biggest advantages that you have with Jakarta, and that you also have with MicroProfile: you have APIs that are known to a lot of developers, such as CDI and JAX-RS, and that is typically nothing new. And focus on what solves a business problem for you. So in the projects that I write...
B
Well, I want to focus on what I actually do, creating coffee in this example, not on implementing some other lower-level technical details. Then I think the combination of Jakarta and MicroProfile is awesome and a very productive one for creating modern applications, especially if you use the MicroProfile projects to basically fill the gaps that we still have in Jakarta, coming from Java EE.
B
So, for example, injectable configuration, health checks, metrics, fault tolerance, and a few other things; I think that's just very productive. This technology is also very interesting if you have projects with existing investment in EE, especially the knowledge investment. If the developers know these APIs, CDI, JAX-RS, it just makes sense to use a modern approach with modern runtimes, for example Open Liberty, or Quarkus, or Payara: all of these modern runtimes that are really efficient and fast. Then it's just a very efficient approach.
B
Keep the turnaround time short, and I mean really short, as in less than a second ideally, or at most less than five seconds; otherwise you will get distracted and it's just way too annoying. And there are a lot of tools out there, for all kinds of technologies actually, to achieve that: you might use this Open Liberty development mode, Quarkus has a similar mode, and Spring has its dev tools.
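With the Liberty Maven plugin, for instance, that development mode is a single goal (assuming the plugin is configured in the project):

```shell
mvn liberty:dev   # starts the server, hot-reloads code and config changes
```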
B
I think if you use these approaches, you will just be very efficient with modern Enterprise Java, and from my experience it's also a lot of fun to use these technologies. So I hope this was helpful. Thank you very much for watching and for listening, and if you're interested in a few of the materials that I pointed you to, you can have a look at this GitHub example and some other materials there, including some thoughts on testing, in order to be a little bit more productive as an enterprise Java developer. Thank you.
B
On this question of whether it's a single task, or how exactly you implement it: I would say it depends; I've seen multiple approaches there, mostly depending on the actual environment that you have, so on where Jenkins runs. For example, if Jenkins itself runs in a Docker container, then you can actually build Docker containers inside of containers, but then typically you use another approach to build these images.
B
So, for example, something like Skaffold, or some other tooling, or OpenShift builds, to build the images; that is more of a detail, I would say, and whatever makes sense for your environment. But in general, just have a proper, sophisticated CI/CD pipeline that builds this, ideally using the same approach that you could run locally, but that's not a requirement. I hope this answers the question.