Description
Sebastian Daschner, Lead Java Developer Advocate at IBM
---
JakartaOne Livestream Cloud Native for Java (CN4J) is a one-day virtual conference for developers, engineers, and technical business leaders, focused on building enterprise Java applications on Kubernetes.
This virtual event is a mix of expert talks, demos, and thought-provoking sessions focused on enterprise applications implemented using open source vendor-neutral Jakarta EE and Eclipse MicroProfile specifications on Kubernetes.
Yeah, thanks a lot, and hi and welcome from my side as well to this virtual JakartaOne talk on, yeah, Kubernetes and Istio for Jakarta EE developers. Those who know me already know that I don't do many slides, so actually this is the only slide I have for you, and I'm going to show some live coding and a lot of live demos on Kubernetes, Istio and, of course, on Jakarta EE code. So, what do I want to show you?
What I actually have for you is a project with some coffee: a coffee-shop project that has a coffee shop and a barista, some microservices in the coffee domain, and I will deploy this microservice example to the cloud using Kubernetes and Istio and a few other things, just because I'm a big coffee fan, and that's why I chose this example.
So, when I have unlimited access to my coffee machine and all that equipment, I actually have to restrict myself not to drink like ten coffees a day. I don't know if you can relate to that, but anyway, let's do some virtual coffee. I have two projects here that I want to show you: one is called coffee-shop and the other one is called barista. These are two microservices. Why? Well, because one alone is boring, right?
We need at least something to show some interaction, some communication going on in what we're going to deploy; otherwise there is no big point. I will show you a little bit of EE code, but the main focus of the session should be the infrastructure technology. And by the way, if you have any questions, please feel free to ask them at any time. For this format it's better if we answer them at the end, but we will go through them; just type them into the chat and we'll pick them up later.
So, what I have here is this coffee-shop, and the barista looks similar, which is a Quarkus project. I use the Quarkus technology, and I use Java 14, so I hope you also use that latest version already, and Quarkus 1.4.2. As you probably know, if you know the Quarkus technology (I'll talk a little bit about that as well), it's based on the EE APIs.
So if you know things like JAX-RS and CDI and all this good stuff, then, well, congratulations: you can already work with a Quarkus project. For example, I have this coffee shop, which is a CDI bean, and let me quickly walk you through what the code is doing here before I deploy it. What I will show you is a very basic example where we can order some coffee at the coffee-shop application.
We have some JSON input and output, where we can call some methods that will then be invoked by our runtime, for example to get the orders here, or to POST some new orders, to order some coffee. I just want to quickly walk you through the code so that you know what's going on, but again, this session is not that much about the code. So, what you see if you're familiar with these annotations and these APIs: we have a Java type, a coffee order.
This is just a POJO with a bunch of annotations that is automatically mapped from the HTTP request into a Java type. That is done by JSON-B, which is also a Java EE, or now Jakarta EE, standard, implemented by Quarkus here, and we have Bean Validation as well, so we can actually validate this and it will be handled automatically. So that's the good news: it works with all of the known EE APIs here, of course. And then we can invoke our business method.
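A minimal sketch of what such a coffee-order POJO can look like, using the javax namespace of that Jakarta EE 8 era; the field names and validation rules here are assumptions for illustration, not the demo's actual code:

```java
import javax.json.bind.annotation.JsonbProperty;
import javax.validation.constraints.NotNull;

// Hypothetical coffee-order type: JSON-B maps the HTTP request body onto
// this POJO, and Bean Validation rejects invalid orders automatically.
public class CoffeeOrder {

    @NotNull
    @JsonbProperty("type")
    private CoffeeType type;

    public CoffeeType getType() {
        return type;
    }

    public void setType(CoffeeType type) {
        this.type = type;
    }

    public enum CoffeeType {
        ESPRESSO, LATTE, POUR_OVER
    }
}
```

Because both JSON-B and Bean Validation are standards, the same type works unchanged on Quarkus, Open Liberty, or any other compliant runtime.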
"Please order some coffee." And then what happens: we actually go and call the barista system, so there will be some communication going on via a REST client here, and then, well, we're going to brew some coffee by basically sending something to the other service. I'll cover a little bit later how that works, but basically it's going to be a synchronous request to the other system.
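Outside of Quarkus, the shape of that synchronous call can be sketched with the plain JDK HTTP client. The `/processes` path and the JSON payloads are assumptions for illustration, and a tiny in-process server stands in for the barista:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Plain-JDK sketch of the coffee-shop -> barista synchronous call.
// The real demo uses the MicroProfile REST Client inside Quarkus.
public class BrewDemo {

    // Stand-in barista: accepts an order and confirms the brew.
    static HttpServer startBarista(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/processes", exchange -> {
            byte[] body = "{\"status\":\"BREWING\"}".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        return server;
    }

    // Coffee-shop side: synchronous PUT to the barista, returns its answer.
    static String orderCoffee(String baristaUrl, String type) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest
                .newBuilder(URI.create(baristaUrl + "/processes"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"type\":\"" + type + "\"}"))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```

In the real demo this call goes through a declared REST client interface instead, so the coffee shop only knows the interface and the barista's base URL is pure configuration.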
So the coffee shop asks the barista whether it's possible to brew this coffee, and then we return and tell the client: okay, it will be possible, come back later and you will get your coffee; the order has been created under this URL. And that's pretty much it for the code example, just so that you roughly know what's going on. Now for Kubernetes, and later Istio, and how to run this. Well, basically the question is: what is it about these technologies, and why should we care?
If you go to the website, kubernetes.io, it says "production-grade container orchestration". So, containers: we probably know about Docker and all these things, and Kubernetes is a way to run our containers in production, because that is actually not trivial. We need to take care of how to orchestrate them, how to set up the networking, and all these non-trivial things.
It's different whether you just run `docker run` on your laptop or whether you want to run something at production scale; in that regard Kubernetes will take care of all of the scheduling when we have multiple nodes, and all of this. So I'm pretty sure you're at least a little bit familiar with Kubernetes. Now, what does it mean for us enterprise developers? How much do we have to know about this technology?
Well, I'll come back to this a little bit later, but at least it makes sense to know what we are running in production, because ultimately it does matter which containers we are running and how that works. In order to run some containers, we have to know, well, basically what our application contains. So we write a Dockerfile, infrastructure as code, where we specify what our container should look like, and we do this twice: once for the coffee-shop project and once for the barista project.
So this is a very simple Dockerfile where we basically say: we need Java in order to run this Quarkus application, and I start FROM an AdoptOpenJDK image here. Then we add something, and I'll come back to that a little bit later, and then we can run our application. Alright, anyway, let's build this with `mvn clean package`, or just `mvn package`; for any Maven application that's the same, and for a Quarkus application this will already build the runnable application.
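A sketch of what such a Dockerfile can look like; the image name, tag, and paths are assumptions based on the description, not the demo's exact file:

```dockerfile
# Run the Quarkus application in JVM mode on an AdoptOpenJDK base image.
FROM adoptopenjdk/openjdk14

# Dependencies first: this layer rarely changes and stays cached.
COPY target/lib/ /opt/coffee-shop/lib/

# The thin application artifact itself: only kilobytes change per build.
COPY target/coffee-shop-runner.jar /opt/coffee-shop/

CMD ["java", "-jar", "/opt/coffee-shop/coffee-shop-runner.jar"]
```

The layer ordering is the point discussed next: copying the rarely-changing libraries before the application JAR is what keeps rebuilds and pushes small.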
So that's already interesting; it's quite fast, and it will build, well, two artifacts here: a runner JAR, which is basically a thin deployment artifact, and a directory where all of your Quarkus dependencies and potential third-party dependencies are contained, this /lib folder. So we already have some separation of our application classes, the compiled classes that are contained in our project, from the implementation and the third-party libs that we use.
This actually makes a lot of sense for a cloud-native world where we use Docker and these Docker layers, the image layers. So in this way I have already built the application. I could run it here, but it needs the barista as well, and that's not actually what I want to do. Instead, I want to build a Docker image from this coffee-shop, and then the same from the barista.
So, in order to do this, I just run `docker build` here, and I will call it, if I can type, coffee-shop, with just a version tag. And what we see here: I built my Docker image and it's already done. As you can actually see, I built some pre-existing things here before, so it only performed the steps that are necessary: in my case it only added my actual application, not the libraries, because the libraries haven't changed; I didn't change them.
If we now `docker push` the coffee-shop image to my public Docker Hub, then I actually only have to push a few kilobytes, even though the total image will be huge, or quite big, a few megabytes at least. Well, it's very fast to push, because only what has been changed needs to be pushed. And this was actually always true for EE workloads, for Java and Jakarta EE: it just makes sense to keep the separation of the implementation and your app.
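The commands behind that, roughly; the image names and tags here are placeholders, not the exact ones used in the demo:

```shell
# Build the two images; unchanged layers (the lib/ directory) stay cached.
docker build -t coffee-shop:1 coffee-shop/
docker build -t barista:1 barista/

# Push to a registry; only the changed layers, a few kilobytes, are uploaded.
docker push coffee-shop:1
docker push barista:1
```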
So that is the reason why I use Quarkus almost always in the JVM mode, where I have this separation of the implementation and of my application. Now, let's very quickly do the same, just to show you that it works similarly, for the barista service. Those two services are totally independent, a shared-nothing microservices architecture if you want, and I build them into two separate images; this one is called barista, and then I build and push them in the same way.
So, as you see, if you use these thin deployment artifacts, it's just very fast to rebuild and re-push your project here. So that's already done, and that's pretty much it. Now, just to change gears a little bit: I have two images now, two Docker images, one coffee-shop and one barista, and to finally show you how that works, I want to deploy them to Kubernetes. So I have a Kubernetes cluster.
How do you get one? Well, there are many, many ways available. You can run one locally, but when you run Istio as well later on, you might run into some resource problems. So this is why I use a cloud deployment: I use the IBM Cloud for managed Kubernetes, which supports up until the latest version; it also supports managed Istio, and it also supports OpenShift, so I can have a managed OpenShift cluster in the IBM Cloud.
If you want to create that, it is available, and at the end I'll show some resources on how you can create a free cluster to play around with yourself. For now, let's say: okay, I have a Kubernetes cluster already in the cloud that I can look into; I created it just before. I have a cluster of three nodes, and the cluster should be empty, so there's nothing running right now. And what I do now: I just, well, deploy something by applying my resources, so I have these infrastructure-as-code files.
Let me just quickly do this so I can show you how that works. I basically have these YAML files that you have probably seen, and this is how I deploy my coffee shop and my barista. This will create a deployment from a specific image; it basically says: take this Docker image and run it in my cluster, in so-and-so many replicas.
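Such a deployment can be sketched like this; the image coordinates and labels are assumptions following the description:

```yaml
# Hypothetical coffee-shop deployment: run one replica of the pushed image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee-shop
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coffee-shop
  template:
    metadata:
      labels:
        app: coffee-shop
        version: v1
    spec:
      containers:
        - name: coffee-shop
          image: docker.io/example/coffee-shop:1
          ports:
            - containerPort: 8080
```

The `version: v1` label is what the Istio subsets shown later select on.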
Here, one replica, and the same for the barista: I also have a barista deployment that just deploys my app here. Now, once that is up and running, and it will start up quite quickly because, as you know, this Quarkus application is quite fast, I have some barista and some coffee-shop instances available, and then, well, finally, let's access it.
Then we can talk a little bit more. So, what I have: I have this /orders resource, if you remember, so I could go and use curl to access it. Now I actually need to get an IP address for my cluster, so I am actually accessing my real Kubernetes cluster, the one I created a few hours ago in the cloud, where we just deployed my Jakarta apps. And first of all, I can go for a health check and see whether this works.
So that's good news: it says UP, coffee-shop. Later today you will see some more sessions on, well, how to do this with MicroProfile and health checking and all these things. So that is basically a health check resource: if you go to the default resource, /health, you will get a default health check here for the coffee shop. And now, let's have a look.
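That check is just an HTTP call; with the cluster address filled in, it looks roughly like this (the placeholder address and the response shape are illustrative):

```shell
# Default MicroProfile Health endpoint exposed by the Quarkus app.
curl http://<cluster-ip>:<node-port>/health
# The response is a JSON document with an overall "status": "UP"
# and one entry per registered health check.
```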
It only shows the coffee shop, because that's the user-facing application, and it also exposes /orders: with a GET request we get the current orders. Well, there are no orders in the system, so let's finally order some coffee. Let's have a look at the orders resource to create some coffee here, and then we can go a little bit further. So let's say, well, we're posting something, we're posting some application/json, and we will have, well:
The coffee order has a type, so I create a type espresso, and then, good news, 201 Created: it succeeded in creating this coffee, and it says, well, great, the coffee here has been created with this information. Alright, that's good news, and if we wait a little bit, we'll see that the status changes here, because there's some communication going on between the coffee shop and the barista. And this is now pretty much the point we want to look into.
We want to see a little bit of what's going on, because I just showed you a little bit of code where apparently the coffee shop is talking to the barista application. Well, right now we don't even see the barista application, and we want to see whether all of that works correctly. So now you see the status has changed, so there is some asynchronous status updating going on there as well, and we want to have a little bit of a look into that.
So what we have here now is two applications, a barista and a coffee-shop application, that are running (you can ignore the third one for now), and we have this running in a Kubernetes cluster. Alright, so how does the communication work? As you can see, we have these services as well: there is a Kubernetes coffee-shop service and a barista service. And the service, that's an interesting story, does a cluster-internal DNS resolution.
So if we have a look at the REST client configuration, at what the URL looks like, it's just `http://barista:8080`, and that is already one of the interesting things about the container orchestration that we're using, and about this abstraction. In general, about any technology, you should always ask yourself the question, besides the whole hype and everything, and somebody in a video explaining that you must use it: where is the benefit?
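With the MicroProfile REST Client, that configuration is a single property; the interface name here is a hypothetical one, not necessarily the demo's class:

```properties
# application.properties: point the injected REST client interface
# at the Kubernetes service name instead of a concrete IP address.
com.example.coffeeshop.Barista/mp-rest/url=http://barista:8080
```

Because the URL is configuration rather than code, the same application runs locally against `http://localhost:8081` and in the cluster against the service name, without a rebuild.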
Why should I care? And this is one of the points why we should care about this cloud-native technology, as I call it: it ideally gives you a very nice abstraction. This is why I claim that Docker was very successful and Kubernetes was very successful: both operate on a very nice abstraction layer. Containers, for example: if you build an application and you want to run it somewhere, you don't really have to know what's inside; it's abstracted away, on a layer on which you can run anything.
Anything that is a Linux process, that is, because these Docker containers are basically running Linux processes, and as long as it runs on Linux, well, you can run that container. It doesn't matter whether it's Java here, or a Node.js app, or Go, or something else. You just specify in the container what your app needs in order to live, in order to exist, and then you start it up with `docker run`, for example. So that abstraction is a very nice, well, abstraction layer to communicate over.
Another thing is infrastructure as code, in terms of automation: you specify here, as code, what your application looks like. You don't need a sysadmin to log on to some machine and, you know, install Java, install this and that, install this runtime or some other runtime container. You basically ship everything in your image, and this is a very nice way, because then you basically automate what your application looks like; and you saw what I did with `docker build`, it runs very quickly.
But actually we have to support the whole way, the whole pipeline, up until production, and see, okay, what's going on here, and we have to care, somewhat, about what is actually going on. So at least you should have a look into how this Docker concept works, how we write our Dockerfiles, and what's running inside there. For Kubernetes it's very similar: the idea, the abstraction layer there, is mostly about pods and instances and especially services. That is the biggest advantage here, and the secret sauce.
The secret sauce of Kubernetes, if you want, is that you have this service abstraction. You run something as a service, like the coffee-shop for this one here, and then you can abstract it by this name, coffee-shop, or barista. So I call something barista, and then, if I'm in the coffee shop right now, I can say: well, REST client, please just connect to the barista, host name barista, port 8080. I could even switch it to port 80.
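The Kubernetes service behind that name can be sketched as follows; ports and labels are assumed from the description:

```yaml
# Hypothetical barista service: gives the pods a stable, DNS-resolvable name.
apiVersion: v1
kind: Service
metadata:
  name: barista
spec:
  selector:
    app: barista         # routes to whichever pods carry this label
  ports:
    - port: 8080         # the name "barista" resolves to this port...
      targetPort: 8080   # ...which forwards to the container's port
```

Changing `port` to 80 here is exactly the switch mentioned above: the client config would then just say `http://barista` and no application code changes.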
If I want to; it doesn't really matter. And then the coffee shop doesn't even have to know where the barista is running. All of that networking is abstracted away behind this cluster-internal DNS resolution for the service named barista, and I just connect to, you know, whatever is running behind barista there. And that's a good abstraction, because from my business perspective I don't really have to care.
I only go to my business component, to my coffee shop and to my barista class, and say: well, you know, REST client, just connect to the service right here. I want to care about the business logic, about what I have to do; how the implementation or the networking takes place is a different abstraction. So this really makes sense.
I think that's the point of this talk, this abstraction layer. Same with the deployments: I say, well, run one instance of whatever is in this image, or run two, or three, and so on and so forth. And the same is true for the infrastructure as code, in that I write everything here in these YAML files. Whether or not you like the format, that's not really the point: you can just apply what I just did and then run something here.
So we see that this works here, and now let me show a little bit more of this, well, Istio. So far we have only talked about Kubernetes, so what does Istio do? It's a so-called service mesh technology, but I don't like all of these, you know, buzzwords and complicated stuff. It's basically a way to provide extra aspects, like security, like traffic management, or observability, to your applications, to your microservices, without, well, implementing them over and over again, and without implementing them at the wrong abstraction layer.
If you want, for example, you could put a lot of stuff into your application, some metrics, some tracing, and all these things, but ultimately you can solve it at the infrastructure layer, if you want, provided here by the containers. So what you can do, and this is, oh yeah, I have to restart my port-forwards, one second, I connect to my cluster.
What you can do, and this is what I'm showing you, is actually already installed in this Istio cluster, not only a Kubernetes cluster, that I deploy to. So this is Grafana, a monitoring technology, open source, and out of the box it will already include, well, a dashboard here for the coffee shop. So I did not create this; let's create some more coffee to show you. This actually comes out of the box.
I just write some loop here, that's always helpful, right: you say okay, new coffee, new coffee, new coffee, so you all can wake up and get some more coffee, and so that we see a little bit more traffic. And now we see, oh, actually there's something going on here: we get traffic that arrives at our cluster. And also, if I switch to the barista service here, these are just some default metrics, then I see, okay, there's actually something going on in my cluster, and I even see, hey, wait.
Wait a second: I have incoming requests from the coffee shop, so it actually works, the coffee shop is talking to my barista. So this communication that I showed you, we not only see it in the code, we actually have some observability here. The same is true for tracing: this cute little guy is a tracing technology, Jaeger, with which we can trace individual requests, which is even more, well, interesting for debugging purposes.
Well, now we only have two, but if we had a little bit more you might see a little bit more insight. So for that particular request we see that we have the synchronous request from the coffee shop to the barista and back: this particular request originally came into the ingress, the cluster ingress, then ended up at the coffee-shop application, and this one then forwarded, or actually made, another synchronous request within that overall span, an HTTP PUT, to, yeah:
This URL looks familiar: to the barista service. So this was the "brew" call in our code. By doing this we see that we actually have two synchronous communications going on, going over here and then back to the original request. Now you might think: okay, how does that work? How does Istio know that this request is actually correlated to the other one? Well, by including correlation IDs, and we would originally have to do this ourselves. If you look into the Istio docs, it tells you:
If you want to actually have all of this information that I just showed you, you need to propagate the trace context and things like that, which is quite cumbersome: you take all of these HTTP headers and then propagate them to the next outgoing HTTP request within your synchronous thread execution, so to everything that happens within the same execution thread. And the docs even show some code: well, if you have some JAX-RS, then you need to include it as follows. But, if you remember, I did not include it as such.
Well, that's good, because that would be quite annoying. Since I'm using Jakarta EE and MicroProfile technology, Quarkus actually takes care of that, if I configure it appropriately. So if I have a JAX-RS resource and then a JAX-RS client, or a REST client, within the synchronous thread execution, it will automatically propagate all of these headers to the next outgoing HTTP request. So the next request will already include all of these headers, and then the Istio proxies can pick them up.
If you're interested in a little bit more of that, you can have a look into the documentation on how Istio works. Basically, it deploys a second, sidecar container alongside your main application container that just intercepts all of your requests and gets all of the information out of them, so it can log the information and then forward it to Jaeger and to these dashboards, or to this one, which is pretty cool, Kiali. And it gets even cooler.
Now let's do a few more examples, just for the fun of it, with Istio, because what I did up to now was just the default implementation, the default routing if you want, where we already saw a little bit of the observability point. What I also want to show you is some traffic management, because this can be quite helpful in how you want to deploy your application, especially if you have some ongoing deployment, when you want to do some A/B testing, to see, hey:
Actually, we have a new version; let our users just try it out first before we roll it out to all of them, or just have a separate subgroup where we see which version they like better, and whatever. And this is actually pretty straightforward with Istio. So, what I want to show you, and again, it's not really that much about the syntax and the YAMLs here; if you're interested in that, I will point some resources to you.
It's much more about the concepts and the understanding, and I think this is always true with the technology we're looking at: you have to look into the concepts and get a base understanding of what is happening here first. So if you understand what a container is, what a Kubernetes service is, what a Kubernetes pod is, what an Istio gateway or a virtual service is, then you will know: okay, these are the building blocks that I have, and I will use them.
For these reasons, what we have here is basically a default routing, where you say: virtual service coffee-shop, everything that goes into the cluster will end up at the coffee shop, because it routes to this Kubernetes service, coffee-shop. Well, okay, that's not really interesting; basically it's just a default routing. Oh, and by the way: subset, route everything to version v1. With subsets we can further split up one particular service: we keep the one single service, like coffee-shop, and split up the traffic, for example.
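That default routing can be sketched as an Istio VirtualService like this; the host wildcard and the gateway name are assumptions following the description:

```yaml
# Hypothetical default route: all traffic entering the mesh for this host
# goes to the coffee-shop service, subset v1.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: coffee-shop
spec:
  hosts:
    - "*"
  gateways:
    - coffee-gateway      # placeholder name for the demo's ingress gateway
  http:
    - route:
        - destination:
            host: coffee-shop   # the Kubernetes service name
            subset: v1
            port:
              number: 8080
```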
So what we have here: everything always goes to version v1. You have probably already seen that we also have a version v2 of the coffee shop, which behaves slightly differently, that we can also route stuff to. This has been deployed by a separate Kubernetes deployment and a separate image here, and, well, then we could route something differently. For example, if I say, well, actually I don't want to have only version one but also version v2, then I can change this. There are also so-called destination rules.
The destination rule defines the subsets that I have available here, and the subsets, as usual, operate on Kubernetes labels, so these are just these tags, this metadata, that I can add. So this one has, you know, coffee-shop and version v2; the other one has coffee-shop and version v1; and then I can basically select them here and say: well, actually, I could switch from version one to version two, and make an immediate switch here.
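The subsets live in a DestinationRule; a sketch following the described labels:

```yaml
# Hypothetical destination rule: subsets select pods by their version label.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: coffee-shop
spec:
  host: coffee-shop
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
```

Switching the VirtualService's `subset: v1` to `subset: v2` and re-applying it is the immediate, zero-downtime switch demonstrated next.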
If I go to this one and, okay, just apply my deployment, and let's create one more order, then have a look at the orders that I have in this system here. So now I see: well, actually, after all this time there are only two orders in the system, because, well, I just switched the version, and this is actually now version two here that I see. Okay, so I just switched from version one to two without any downtime.
I can also switch this back, and now I can do some slightly more interesting things. Now it's getting dangerous, because I'm doing not only live coding but YAML live coding, which definitely must go wrong, but let's see. What I can do: I can say, well, please match some requests; don't just take all of the requests, but one particular kind of request, for example match on some HTTP headers. You can match on anything; this is HTTP-level aware, so it can match, for example:
A header that matches a certain regex, like a user agent containing Firefox, right: matching on the browser, or matching on whatever you want to have, and then take this route and route it to version v2 instead. So only for Firefox you route something to version two. Let's see whether I did everything correctly; let's try to deploy this. Okay, it looks right. Now, if I say, well, curl:
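The match rule inside the VirtualService's `http` section can be sketched like this; the regex and structure are assumed from the description:

```yaml
# Hypothetical header-based route: Firefox browsers go to v2,
# everything else keeps going to v1.
http:
  - match:
      - headers:
          user-agent:
            regex: ".*Firefox.*"
    route:
      - destination:
          host: coffee-shop
          subset: v2
  - route:
      - destination:
          host: coffee-shop
          subset: v1
```

The rules are evaluated in order, so the unconditional v1 route at the end acts as the fallback.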
So this is how you can do some A/B testing, with routing specific to some criteria. We can also have some split routing, to say: okay, please route not only to this one subset but to another destination as well, so route to both v1 and v2, with a specified weight, where you say, well, only 30% go here, or, at the beginning, let's say 70%. So you can implement some canary releases here in this way, and then, applying that, yeah, this looks good.
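A weighted split along those lines; the percentages are just the ones mentioned as examples:

```yaml
# Hypothetical canary split: most traffic stays on v1, some goes to v2.
http:
  - route:
      - destination:
          host: coffee-shop
          subset: v1
        weight: 70
      - destination:
          host: coffee-shop
          subset: v2
        weight: 30
```

Shifting the weights step by step until v2 carries 100% is the usual canary-release progression.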
If we apply it in this way, then we get some orders, and now, depending on chance, we either get, well, if I say "chance" during live coding it never works, the big or the small response here. Or, if I now delete my header again and go to the health check, I see either this version 1 or, wait for it, version 2. So that also works, and I can have some more advanced traffic management here. I think that's quite interesting, and actually it's also very helpful.
So
it
has
been
proven
itself
quite
helpful
in
projects
where
we
can
make
this
routing,
not
only
at
the
end
of
M,
at
the
edge
of
the
cluster
but
also
inside,
and
if
we
have
some
internal
micro
services
that
are
not
reachable
directly
from
the
outside,
for
example,
for
our
barista
I
could
include
something
similar
for
our
barista
routing
to
say:
ok,
actually,
please
route
something
here
accordingly
and
then,
if
the
barista
is
team,
then
has
a
new
version.
While
we
can
also
deploy
it
in
this
way.
B
B
So let's remove this again, back to the original version, and we include some timeout here. That is available too, like everything you can imagine that works on a connection level, basically. So let's apply this here and do our POST here. Well, now it seems to work, but actually we don't know, because apparently the response is quicker than one second.
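A route timeout along the lines described; the one-second value comes from the talk, the rest is assumed:

```yaml
# Hypothetical one-second timeout on the route: if the upstream takes
# longer, the Istio proxy answers with a gateway timeout instead.
http:
  - route:
      - destination:
          host: coffee-shop
          subset: v1
    timeout: 1s
```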
What we can do, and that's also quite helpful about Istio in general, because we have full control over these resources, since every request goes through this Istio proxy, is, for testing purposes, just insert some errors; and yes, this can be helpful. For example, if I ask: how does my application, or my microservice, react if something goes wrong, right?
If we get some timeout here, or if we have some HTTP errors and things like that. So what I can insert, and please only do this for testing purposes: we can inject some faults, for example some delay, to make things slower, and say: well, always include a fixed delay of three seconds, which is just a really long time within our cluster.
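Fault injection of that kind can be sketched like this; whether the demo applied it to the barista or to the coffee-shop route is an assumption, and the 50% figure is the one mentioned shortly after:

```yaml
# Hypothetical fault injection: delay half of the requests by three seconds.
http:
  - fault:
      delay:
        fixedDelay: 3s
        percentage:
          value: 50
    route:
      - destination:
          host: barista
          subset: v1
```

Removing the `percentage` block makes the delay unconditional, which matches the "always include a fixed delay" variant described first.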
So this is how we can simulate some slow responses, which can be really helpful to test things, also on the front end, like how we can make something behave; or, in this case, not always, but only in 50% of the cases. Alright, again, please don't do this in production. This is the resource that we can apply, and by doing so we can then test our behavior. Now you see, well, we get some gateway timeouts: so either we get an immediate, successful response, or, well:
We
fall
into
this
trap.
We
then
get
well
not
a
slow
response,
not
three
seconds,
but
one
second
timeout,
because
now
actually
the
timeout
of
the
first
route
will
will
work
here.
We
can
actually
look
into
our
observability,
for
example,
in
the
tracing
to
see
okay.
Now
we
have
something
here
that
just
failed
where
we
see
okay
actually
after
one.
Second,
there
is
an
arrow
here
and
the
barista
only
responded
after
a
long
time.
So
then
we
also
see
what's
going
on
in
the
same
way
in
our
monitoring
dashboard.
B
We
see
that
our
success
right
now
will
go
down,
because
not
all
of
the
requests
will
be
successful.
So
this
is
also
how
the
observability
helps
us
here,
a
little
bit
alright.
So
these
were
a
few
things
where
a
steel
can
be
helpful,
but
again
just
to
go
back
a
little
bit
in
general
I
think
it's
just
important
that
we
care
about
well,
why
we
should
use
a
specific
technology,
so
not
just
why
it's
hype
and
cool,
but
actually,
where
is
the
benefit
for
us,
especially
in
these
cloud
native
technologies
and
also
for
developers?
B
The
question
is:
how
much
do
we
need
to
care
for
for
our
projects
and,
ultimately,
it's
again
about
this
question
of
ownership
and
about
that
we
actually
care
about
our
users
and
care
that
stuff
works
later
on
in
production,
and
for
that
matter
it
does
matter
how
we
run
our
application
so
which
exact
runtime,
whether
we
run
a
specific
JVM,
because,
ultimately,
if
things
go
wrong,
we
do
need
to
know
how
to
take
a
threat
or
heap
analysis
and
all
these
things
and
to
see
what
is
going
on
and
I
think
this
movement
of
the
technology
is
a
is
a
very
positive
one
that
it
supports.
B
Ultimately,
somebody
who
wants
that
the
application
works
in
production
and
this
technology
supports
us
a
lot
by
automation
and
by
having
stuff
as
code
right
like
infrastructure
as
code,
because,
if
you
think
of
it
a
docker
file
or
these
kubernetes
or
ISTE,
oh
yeah,
moles
is
just
well
it's
code,
whether
or
not
you
like
the
syntax,
but
you
can
put
it
into
version
control.
You
change
something
here
and
you
just
apply
it
right.
You
don't
have
to
do
manual.
Steps
I
can
throw
everything
away.
B
You saw I just created this cluster before and applied the things here, and then I can recreate my environment from scratch very quickly. And that is ultimately the benefit: moving fast here and having things in a reliable and predictable way, by using automation and by having things as code.
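As a minimal sketch of what "infrastructure as code" looks like in practice (the names and image below are made up for illustration, not the exact manifests from the demo), a Kubernetes Deployment is just a file that lives in version control:

```yaml
# coffee-shop-deployment.yaml -- hypothetical example manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee-shop
spec:
  replicas: 2
  selector:
    matchLabels:
      app: coffee-shop
  template:
    metadata:
      labels:
        app: coffee-shop
    spec:
      containers:
      - name: coffee-shop
        image: example.com/coffee-shop:1.0  # assumed image name
        ports:
        - containerPort: 8080
```

Applying it is a single, repeatable step (`kubectl apply -f coffee-shop-deployment.yaml`), which is why the whole environment can be thrown away and recreated from scratch.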
So that's pretty much why we should care here. Going back a little bit to the runtime, if you want a few more words about that: I was showing some Quarkus here, but I'm
B
also a huge fan of some other modern EE runtimes, especially Open Liberty, and there are other very interesting runtimes here as well. In general, I just care a lot that the technology supports known APIs, because I don't want to relearn the whole world each and every time something new comes up. So I use the known APIs, like JAX-RS, like CDI, like JPA, and all of that is supported, and that is the case for Quarkus.
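To illustrate the point about known APIs (a minimal sketch, not taken from the demo project; the resource path and the returned content are made up, and it uses the `javax` namespace of the era), the same standard JAX-RS resource works unchanged on Quarkus, Open Liberty, or any other compliant runtime:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical resource: plain JAX-RS annotations, no runtime-specific APIs.
// JAX-RS resources are POJOs, so the class is also trivially unit-testable.
@Path("/coffees")
public class CoffeeResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String coffees() {
        return "espresso, cappuccino, flat white";
    }
}
```

Because nothing here is vendor-specific, swapping the runtime underneath does not require touching this code.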
B
That is the case for others like Open Liberty, too. And I also care a lot about the developer experience. So with Quarkus, you probably know about the development mode (I have the website open somewhere); the same works in Open Liberty and a few other runtimes, where I can quickly see my changes. And I also care about things like
B
how big the diff is, because you might be on a bad connection, or you just need a lot of time to re-upload all of these megabytes over and over, where 99% of the application actually doesn't change. So this really doesn't make sense. Especially if you come from an EE background, it was always the case that this separation of the application and the runtime was supported, if you want, and this really makes sense when you run in JVM mode. If you do the Quarkus native build, then this doesn't work anymore.
B
So you have one native image, like one native blob, that is in total smaller than the full JVM mode, but the diff is much bigger and you always have to ship, well, basically everything. And the second thing, also from experience: it does matter what you run in production, especially for those colleagues who are familiar with how to run a JVM.
B
For example, if we talk about production workloads, it does matter what you ultimately are running there, so that you know how to take this thread dump or do all these more detailed analyses of what's going on. And then you do have to, well, again care about all of this: how do you get the information out of it? What I'm specifically using is OpenJ9 here, which is an open source Java
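As a small illustration of taking a thread dump programmatically (standard JDK APIs, not something shown in the demo), the `java.lang.management` API can capture the state of all live threads; it is the same information that external tools like `jstack` or `jcmd <pid> Thread.print` report:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Minimal sketch: dump all live threads of the current JVM.
public class ThreadDumpSketch {

    public static String dump() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        StringBuilder out = new StringBuilder();
        // lockedMonitors/lockedSynchronizers = false keeps the dump cheap
        for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
            out.append(info.toString());
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(dump());
    }
}
```

Knowing how to get this kind of diagnostic data out of whatever you run in production is exactly the point being made here.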
B
Well, an alternative JVM, if you want, that I get from AdoptOpenJDK. I have some more resources where I'm showing the actual difference in resource consumption, which I think is quite remarkable, just by swapping the JVM, and then this difference to the native execution is also smaller. And we're also talking about a bigger throughput. That's another thing that you should care about, because a running JVM, thanks to the JIT, is actually very, very performant.
B
When we talk about throughput, it is in some cases more performant than native code, and that's why I'm using it, so you can have a look into that. I'm sharing all of these links, by the way. I also explained how the Istio networking API works in a more conceptual way; again, this is what I think is important, just to get how the concepts work, and the same goes for Kubernetes. So, by the way, if you can spend some time now, you are in a good situation to read through the Kubernetes documentation or the Istio documentation.
B
This is another session that I wanted to share, where I talk a little bit more about Quarkus from a code perspective, like how to use all of these specific APIs, and of course there are the resources here of the code that you can get. I also shared an Istio workshop where I guide you through creating basically this Istio cluster and then all of these resources, which also uses, of course, Jakarta and MicroProfile, and it shows you how to create such an Istio cluster if you don't already have one.
B
So it's really step by step, to get to similar examples of how Kubernetes and Istio work and all of this. And now I actually do want to take some time to answer some questions, if we have some. So now to my last slide. Second to last slide, actually: I have two slides, one thank-you slide with the link here where you can get all of the resources that I was just showing you, including the code and the further resources. And now, let's move on to some questions.
A
B
That's totally fine. Yes, I can, and actually I did this myself a few times, moving workloads from one cloud to the other, especially if you mostly lock your application only against standards. So it's the same as in the EE world with Jakarta and MicroProfile, where I used just plain Docker and Kubernetes resources and, in many cases, also OpenShift resources, because many of my clients just use OpenShift as well, and it's somewhat of a de facto standard,
B
if you want, or supported in many cloud environments. And then it really works to basically redeploy your workloads. I mean, I can say this because I've gone through that a few times. Sometimes the lock-ins that you do have are more about the routing, and how to manage the certificates and DNS and all of these extra things. But I would compare it very much to Jakarta and MicroProfile: in that world,
B
if you have built as much as possible just using standards, especially in your applications, then you're pretty much on the safe side. And then, if you do have some breaking changes, they are very, very limited, to just a small set, or, you know, to a very small part of that picture. I think that's the best bet, especially regarding all of these technologies, because Kubernetes, for example, is now supported by all major clouds. And the less custom, vendor-specific stuff you do, the better in general.
A
All right, so I don't think anyone else has any questions. So thank you, everyone. We will be back at 11 a.m. with Roberto Cortez talking about Kubernetes-native Java with MicroProfile and Quarkus. So thank you, Sebastian, that was an excellent talk, and we'll see everyone at 11:00. Thank you. Thank you.