From YouTube: Jakarta Tech Talk - Serverless Java Apps in the Cloud: MicroProfile, Quarkus, and Cloud Run
A
Hello, everyone, and welcome to another Jakarta Tech Talk. My name is Serena, and joining us today are Mads and Rustam, who will be presenting on the topic of serverless Java apps in the cloud: MicroProfile, Quarkus, and Cloud Run. If you have any questions for Mads and Rustam as we move through today's presentation, feel free to ask them in the chat or use the Ask a Question tab. Without any further ado: Mads, Rustam, over to you.
B
Thank you very much. Thank you for having us; it's great to be back and great to be here today with you all. Good morning, good evening, good afternoon, depending on where you are around the globe. I'm happy to be here.
C
No worries. So for the next hour, we'll be talking about modern Java development in the cloud, and the first of the two of us who are going to talk about that is me. My name is Mads; I work at a Norwegian consultancy company called Computas, where I do mainly Kotlin nowadays, but I have been doing Java for many years.
B
I work and talk mostly about cloud and Java, in one way or another: sometimes both, sometimes one of them, sometimes it's a mix, and sometimes it's not. So that is a short introduction about us. Now let's talk about what we are actually going to be talking about, because, I mean, if I ask how many of you write code, it will probably be the majority of you. But the question that is usually kind of fun is:
B
Why do we actually write code? Well, obviously, it's fun and we enjoy doing it, at least most of us do, I assume. Now I'm assuming things, so sorry if I'm wrong, but at least it is like that for me. The idea, though, is that we write code to solve, typically, a problem, right?
B
We do it because we need to solve something, and usually that something needs to be available to other people. So today we're going to be talking about how you take that piece of code you have written and make it available to as many people as possible, in a manner that is more cloud native, a modern way of doing it, and hopefully also not a too expensive way of doing it.
B
Anything else we want to say here, or should we show our application? A super advanced, very, very difficult, hard-to-grasp application.
B
You know all these modern ways of working: sometimes it's been called the DevOps way of thinking. The idea is that you actually have to do both development and ops of your code, but you want to automate things as much as possible, so you don't really have to think about all those steps. You just write code, push it, and it's available. And our application is, quite on purpose, very simple. Well, actually, one more thing before we do that.
B
One more thing: this talk will be opinionated, because obviously we had to choose some products to make that code available. A lot of the things we'll be talking about can also be applied to other products, frameworks, and so on, but it will be slightly opinionated; we had to pick some things to actually build the application. So, the application: it is a very simple application.
B
It just gives you back two random words from two different dictionaries, typically an adjective and a noun. Every time you call the API, it will give you a list of two words: one will be a noun and the other will be an adjective, something like this. Well, that's what you get, right? Sometimes you can use it; I mean, this was a little bit inspired by the two-factor authentication systems that we have here in Norway.
B
Sometimes you try to log into something like your bank, and it will show you two words like this, and then you get the same words on your mobile phone. Then you know that it was actually the application on the screen that initiated the prompt on your phone, and all this kind of stuff, to make it more secure.
B
So you've probably seen something similar to this. And if you've been creating a lot of containers, you know that some container tools also give names to containers
B
if you don't specify one, in the same way: just a noun and an adjective. I've suggested a couple of times now that you could probably use that for naming your pets, but somehow it didn't get much traction yet. Maybe somebody will start using it for that, I don't know, but let's stick to naming containers for now. If you refresh, it will give you another set of words.
B
So this is a typical method that you create, right? But then we all learn: okay, now we need to make that thing work. The method is there, but you can't really run it
B
this way, right? In Java, you will need to at least add some kind of main method or something like that. If you run it with public static void main, it will run on your machine. But remember, I told you that the whole idea was to make it available to people around the world, or at least to more than just yourself.
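As a rough sketch of the kind of method being described (the class name, method name, and word lists are our own, not the actual demo code), here it is in plain Java, runnable locally with exactly the public static void main approach the speakers mention:

```java
import java.util.List;
import java.util.Random;

// Hypothetical sketch of the "two random words" method from the demo app.
// The dictionaries and names here are illustrative, not from the talk.
public class RandomWords {
    static final List<String> ADJECTIVES = List.of("brave", "quiet", "shiny", "rusty");
    static final List<String> NOUNS = List.of("otter", "lantern", "harbor", "teapot");
    static final Random RANDOM = new Random();

    // Returns a list of two words: [adjective, noun].
    public static List<String> pick() {
        return List.of(
                ADJECTIVES.get(RANDOM.nextInt(ADJECTIVES.size())),
                NOUNS.get(RANDOM.nextInt(NOUNS.size())));
    }

    // The "public static void main" the speakers mention: fine on your own
    // machine, but not a way to make the method available to anyone else.
    public static void main(String[] args) {
        System.out.println(pick());
    }
}
```

With a MicroProfile runtime, the same method would instead sit in a resource class annotated for REST (for example with `@Path` and `@GET`), which is where the rest of the talk goes.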
B
So this public static void main thing doesn't really fly very well with that kind of requirement. We need to run it somehow, and if you look back at the URL, and at the screen that you've seen a little bit earlier, you see there is a bit of a giveaway for how we want to do that.
B
And one thing that we kind of see here, at least one we're going to refer to later: for now it's localhost, but we'll do something about that later as well, so a little spoiler alert for now. The point is, it will be a URL, and that's how you want to make it available. To do that, we needed to choose something.
B
So we decided to go with what Jakarta EE has to offer, plus whatever can make it simpler for us to build REST interfaces. That is where another project from the Eclipse Foundation comes into the picture: MicroProfile. So it's a mix of those two.
B
That's what we decided to use to build the application. Before we go any further, maybe we should mention really quickly: feel free to use the chat to ask questions, comment, and share whatever you would like with everyone else. And please also share if you have used MicroProfile before; that would be really nice.
C
Well, it's Jakarta EE for microservices. You can say that while Jakarta EE has a lot of different features, as we'll see in a moment, with MicroProfile you get a set of specifications that are suitable for microservices, really opinionated ones. We'll see that in a minute. But first: I assume that lots of you know what Jakarta EE is since, after all, this is Jakarta Tech Talks, so this might be a place where people know it. Still, there are a lot of features and aspects of Jakarta EE, so I think we should spend a couple of minutes going through some of it anyway,
C
so we are on the same page and the same level when talking about Jakarta EE. As we can see on the next slide, version 10 is the newest one. You can see it's a platform, and in gray behind it you can see the Jakarta EE Web Profile. You see lots of boxes, but to the right you can see the Jakarta EE 10 Core Profile, and each of these boxes represents a specification. So, to try to put it simply:
C
It's
like
something
you
typically
need
in
many
business
applications
is
one
of
these
boxes,
so
things
like,
while
you
need
to
send,
send
mail,
for
instance,
that
c
a
mail
or
need
to
do
some
authorization,
or
even
some
precisions
like
these
are
typical
things
you
need
in
big
applications
and
there
there
are
different
profiles.
So,
in
this
talk,
we're
going
to
focus
on
Jak
E10
core
profile,
which
is
new
in
10
and
that's
like
that's
depends
injection
it's
Jason
and
rest.
C
Actually, as we see on this slide, at the left is the MicroProfile umbrella, so to say, and at the bottom is the Jakarta EE 10 Core Profile. So you can visualize it in the way that MicroProfile builds upon the Core Profile: everything is CDI-based and REST-based and so on, and you can see that there are many specifications here. 6.0 is the newest MicroProfile release.
C
That's why it says 2022 at the top, so we're looking forward to the next one. But I think what we really should say here is that in Jakarta EE you can do things with both Jakarta Web Services and REST services, whereas MicroProfile just says REST. So it's even more opinionated in that
B
sense. And before, all those boxes that you've seen as part of the Core Profile used to be separate, but now we've had this change where we have a separation of Jakarta profiles, and we have the Core Profile as a kind of foundation. On top of that, MicroProfile adds more things that are typically needed for microservices, or, generally, for your cloud native applications.
B
There is also a term called app modernization, and those kinds of apps will typically need the boxes that you see on top of the Jakarta Core Profile at the bottom there. You may need to expose metrics from your microservice, or you need some way of authenticating your users or your calls, or you need to expose some health information, or
B
maybe you need to create a REST client to talk to other RESTful services, or handle configuration, and all those kinds of things. Then there are also things on the side that we're not going to talk too much about, but which are also part of this umbrella, just not of the main MicroProfile part: things like reactive messaging and GraphQL.
B
But let's talk a little bit more about who's behind this thing. There's a working group that actually decides how things should look in MicroProfile, what should be there and what shouldn't, and here you can see it's a mix of big names, big companies, and also smaller ones.
B
But then you also see the top line, which is JUGs, Java User Groups, that are also part of the working group, where they support it and put in some work from the community side as well. So: Atlanta JUG at the top; iJUG, which is a kind of umbrella Java user group for Germany; and also Garden State Java User Group. Garden State, that's New Jersey, so US-based as well, right?
B
Let's also mention companies that, or, well, yeah: implementations of MicroProfile, right?
C
Yeah, sure. I think we should start this slide by repeating something I said without explaining it earlier: MicroProfile is a set of specifications. That means MicroProfile specifies how to do things, and then someone needs to actually take that and implement it in code. Typically, app servers or application runtimes implement the entire MicroProfile, or at least most of it.
C
So these are examples of application servers that implement MicroProfile, which is really nice. Here you can see ones from big companies, like Open Liberty from IBM, WildFly and Quarkus from Red Hat, and Helidon from Oracle, but also KumuluzEE from a smaller Slovenian company, and TomEE from Apache. They're different app servers, and what is really cool here is that if you write your code using MicroProfile and Jakarta EE, and you start out with one of these and realize that, well, you should really switch to another app server
C
because of this or that, then you can do that without changing the code at all, which is really cool. We might say more about that later on. All the app servers have their own strengths and weaknesses, of course.
B
Yeah, and we'll probably mention a little bit of that. Not exactly going through all of them, but we'll talk a bit more about it during this presentation as well. But, I mean, okay: we talked about all this, and we're still in the situation where we have a little method that we don't know where to put. We added a public static void main to it;
B
it runs on our machine, everything is fine, but we want to share it with the world. So where do we start? How do we actually start this Jakarta MicroProfile thing? That's probably also a question if you haven't created these kinds of applications before yourself, so you probably would like to know that as well.
B
So you can basically either pick the starter you want to go with and just generate a project there, or you can generate one and then modify the pom files, or Gradle files, or whatever you're using to build your application, to contain all that information. But it's probably a good idea to pick one of those starters. In our examples, we'll actually be using quite a bit of Open Liberty and Quarkus.
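As a hedged illustration of what such a starter puts into a Maven build, a dependency on the MicroProfile umbrella API might look like this (the version shown is an example; your chosen starter may generate a different one, or individual specifications instead):

```xml
<!-- Illustrative only: the MicroProfile umbrella API as a starter might
     declare it; version and scope are typical examples, not from the talk. -->
<dependency>
    <groupId>org.eclipse.microprofile</groupId>
    <artifactId>microprofile</artifactId>
    <version>6.0</version>
    <type>pom</type>
    <scope>provided</scope>
</dependency>
```

The `provided` scope reflects that the runtime (Open Liberty, Quarkus, and so on) supplies the implementation, which is what makes switching runtimes without code changes possible.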
C
Yeah, sorry, I just think we need to add that, well, if you download one of these, you typically get examples. So: this is how you create a REST endpoint with this runtime, and this is how you do JSON, and it's easy for you to take the code that we showed you, put it into the starter, and tweak it to your needs. It's pretty cool. Of course, you can also download the starter and then copy the files into an existing project you have, if you want.
B
You just download the artifact, the whole starter thing, and move the files that you want. We're going to be using Maven in this project, which means we can work with pom.xml files. It also means that if you want to switch from Quarkus to Open Liberty, or from Open Liberty to Quarkus, or to Helidon or whatever, we can just download those artifacts, copy the pom files and replace them, or maybe just adjust them with different names for the different application servers, and it will just work. Okay.
B
So now we've made sure that it works on our machine, and not just with public static void main: it is also running on localhost as an application. You can go in, download the artifact from whatever starter you've chosen, put that pick-two-random-words method into the example application you get, and typically it will tell you how to run that application server.
B
Some of them have a dev mode that you can start, which will refresh all changes; with some of them it is as simple as running a Maven build. You basically build the whole thing, and then you just run it as a fat jar, for example, and it still works. Typically,
B
it will be a jar file that can be run with java -jar and then the jar name. But we're still not there, we're still not into the whole cloud native thing, right? So the next step would be to put it into something that is portable. Do you want to talk a little bit more about
C
that? Yeah, sure. This is probably not a surprise for many: containers are kind of the default way we do things nowadays. What's really nice about containers, the entire idea behind them, is this: we used to build applications and say, well, it works on my machine. Then the idea is, okay, take your machine, put it together, and ship it. Standardize it, in a sense.
B
And it's probably worth mentioning here, sorry, that we're mentioning Docker and Podman, but you can choose whatever you want; the container format is pretty much standardized. There are different ways of running containers, there's Rancher, there are a lot of ways of doing it, so it doesn't really matter what you want to run them on; you should be able to do that nowadays.
B
At least you have quite a few options; it's not only Docker or only Podman, these are just a few examples. One more thing I want to mention is that containers typically mean two things: reproducible builds, but also reproducible deployments. Usually we talk about just one of those. Typically it would be, oh, but it's easy to deploy, so reproducible deployments are cool. And yes, they are, but it's also reproducible builds.
B
You know that if it builds on one machine, it will most likely build in the same manner on another machine with the same kind of architecture. And you don't really have to rebuild it: you just create a container image and push it somewhere into the cloud, and then you can pull that image and deploy it in different environments, different servers, different machines. All right, back to you.
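The reproducible-build idea can be sketched as a multi-stage Dockerfile for a Maven project; everything here (base images, paths, jar name) is an assumption for illustration, not the talk's actual file:

```dockerfile
# Build stage: compile and package with Maven. Copying pom.xml first means
# the dependency download becomes a cached layer that survives source edits.
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn -q dependency:go-offline
COPY src ./src
RUN mvn -q package -DskipTests

# Runtime stage: a JRE-only base image keeps the final container small,
# which matters when images are pulled across the network.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```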
C
Sorry, yeah, no, but that's important input as well. And yeah, we could be talking about things like layered Docker files, layered container files, layered images, and so on, but someone else could do an entire talk on building the best containers, so we won't talk much about that. But:
B
There are a lot of optimizations you can do with containers, and it's important to think about that. It's not just shoving things into a container and it's going to work; you will need to do some optimizations. You want the build to take less time, and you don't want to spend too much time rebuilding things that have already been built, so use layering, like Mads mentioned. It's also a good idea to keep your container size as small as possible, for several reasons.
B
One is that you don't want applications that are not important for your code to be running there, for security reasons and also space reasons. You also want your container images to be as small as possible because they will be pulled back and forth across the network at some point, and you want that to go as fast as possible, from a container registry somewhere to a deployment machine somewhere else over the network.
B
That's actually a very good point, yes, great to mention as well. So now we've containerized it. Like we said, it's not a very big surprise nowadays; everybody knows that we should put it in a container. Another partially obvious, but maybe not so obvious, thing is that you may not want to be responsible for the whole ops part of it. You don't really want to have your own servers, especially if you have a small, tiny little application.
B
You don't want a full server room, full of racks and everything, just for that tiny little application. So one way of doing it is to push that thing into the cloud. It can be different kinds of cloud: public, private, hybrid. But we're going to be talking about public offerings that are available to pretty much everyone on the internet.
B
We have a container file, we have our code, we have it available through a REST API and a URL, and we've put it into a container; now we want to put it into the cloud, and kind of beyond. We'll see what we mean by beyond in a little bit, but now we basically have a containerized application that is running there.
B
What do we need to do to make our life much easier? Because we want this to be: just write code, commit it, and magically it's available, let's say in some kind of public cloud offering. Well, the first step would be to automate your builds. You definitely need that, to be able to trigger builds from commits, but you also need to automate everything else as well.
B
By everything else, I mean things like setting up environments that may not exist to begin with. You wouldn't put that into the same build procedure or pipeline, but you should probably have some kind of infrastructure-as-code setup for your environments. You should have automation for running tests on the new version of an application.
B
You should have automation for checking the health of your application: how it's doing, how it's performing, whether it needs to be restarted. All those kinds of things will make your life much easier. If you deploy to some kind of cloud, they will typically offer you a way of checking the health of your applications, killing the containers that are failing, respawning new containers that work, and so on.
B
Also, they will offer a way of pulling the logs out of those containers and putting them into a central place. You might want to automate some alarms or alerts on specific things for your application as well. But let's go back to build automation: there are different ways of doing builds. Some of them are built in and tied to the specific cloud provider that you want to use, and some of them are a bit more generic. So, over to you again.
C
We started out by saying that this talk is a bit opinionated, that we have chosen some tools, and when we were about to put the code and the builds and everything into a cloud, we obviously needed to choose one of them. For this talk we have chosen Google Cloud Build, and Google Cloud in general, for running and building the code. It could be Azure, it could be whatever, but we went with Google. So that's the disclaimer.
C
What we see here is the user interface of Cloud Build, which is Google's build service, as the name fairly clearly indicates. You can compare it to Jenkins, or GitHub Actions, or Azure DevOps; it follows some of the same principles. What you can see here is the event on the screen, which says:
C
file type, where you can choose cloudbuild or Dockerfile, which are the options you have. So you choose one of them. We already talked about the Dockerfile and containerizing, so you can use the container configuration you already created and stick to that, or you can use the more specific configuration in a cloudbuild file, which is similar to GitHub Actions files or a Jenkinsfile.
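A hedged sketch of such a cloudbuild.yaml, reusing the Dockerfile route described above (the image name is a placeholder, not from the talk):

```yaml
# Illustrative cloudbuild.yaml: build the project's Dockerfile and push the
# resulting image. The image name "random-words" is a placeholder.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/random-words', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/random-words']
images:
  - 'gcr.io/$PROJECT_ID/random-words'
```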
B
Yeah, because any time you want to do something very specific, and by specific I mean specific for that particular cloud,
B
you would put it into the cloudbuild.yaml or similar files, because a Dockerfile contains information about how to build that particular container, but it doesn't say things like: I want this many cores to build it, or I want to build it in this way, or I want to integrate with some specific offerings or products on that specific cloud provider.
B
All those kinds of things go into the specific build file that you put there. And again, if you choose Google Cloud, that's fine; if you choose anything else, you will probably look for similar build servers and build services there as well.
B
So now we have written the code, we have put it into a container, and we have deployed the image of that container into the cloud. What do we do now? How do we run it? Because that's another thing: when you go to any cloud offering, not only public ones, you will find several ways of running containers.
B
They will be anything from managed Kubernetes offerings to serverless offerings. Or, worst case, you can just go old school, really old school: create a VM, install an operating system on it, put some kind of runtime on it, and then deploy your jar files onto that thing. That's probably the least cloudy version of doing things.
B
You can do that if you have to, but you really don't want to, and you probably don't need a full Kubernetes cluster either. And here comes the "and beyond" part that we saw, "to the cloud and beyond", because you want to have as little management as possible, and that's where the serverless part comes in. Serverless is not really without servers; it's more like with less management, a "manage-less" platform or offering, as I usually call it.
B
In our case, we're going to be using something called Cloud Run, which started as a managed version of Knative some time ago and has been developing
B
in slightly different ways since, but it's very similar to what you know as Knative, the open-source project. Traditionally, at the bottom there is actually Kubernetes; you're not really running away from Kubernetes, but it's managed by somebody else. On top of that, there is some kind of service mesh.
B
Traditionally that was Istio, but now you can also use other ones, and on top of that there is Knative running, which actually lets you do all this. So if you're going to go serverless on-prem, on your own infrastructure, you might want to just install Knative and make it all work. If you want to let somebody else manage that for you, then you can look for offerings like this.
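Deploying such an image to Cloud Run can be sketched with a single gcloud command; the service name, image path, and region below are placeholders, not from the talk:

```shell
# Illustrative: deploy a container image to Cloud Run. Service name, image
# path, and region are placeholders.
gcloud run deploy random-words \
  --image gcr.io/PROJECT_ID/random-words \
  --region europe-north1 \
  --allow-unauthenticated
```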
B
In this case, we're using Cloud Run to do that. And if you want to go even further, you can also go with hybrid cloud offerings, where you span on-prem and public cloud, or multi-cloud, where you go between offerings from different public cloud providers. There are products for that too, for example Anthos or OpenShift, or things like that.
B
They will let you do these kinds of things across different environments as well, but it will still be serverless. And serverless is kind of good, because you're getting closer to this: I want to write my code, push the commit button, and magically it's deployed somewhere and runs, because you don't really have to do the whole deployment thing.
B
You don't really have to think about the health of your environments, more or less; it just appears there, and in our case it looks like this. The cool thing is that a lot of serverless offerings also let you do some cool stuff. By cool stuff I mean things like traffic splitting: for example, you're deploying a new version, and you say, hey, you know what, I'm not really sure about my new version.
B
There might be bugs, there might be dragons. I want 5% of my traffic to go to the new version and 95% to the older version. So you can do traffic splitting: test, gradually increase the traffic, see that everything is good, study your logs and your metrics, and make sure everything works before you switch over entirely. It also lets you manage services like this.
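Traffic splitting like this can be sketched with gcloud as well; the service and revision names here are made up for illustration:

```shell
# Illustrative: route 5% of traffic to a new revision and 95% to the old
# one. Service and revision names are placeholders.
gcloud run services update-traffic random-words \
  --to-revisions=random-words-00002-new=5,random-words-00001-old=95
```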
B
It lets you add authentication on top, quite similarly, and a bunch of other things that will make your application more production- and enterprise-ready. There will be a lot of things that help you with this. Yeah, anything else we need to mention when it comes to the
C
environment? I don't think so. Or, well, I think we might say that when we've come this far, you can see that we have done the entire process, so to say. You can see at the top that we have a green deployment, and a lot of red ones. We've been thinking about whether we should replace this image with one from after we had been going for a while and had lots of green builds, but no: it's always like this when you set up new build things.
B
It's good to show that it's okay to fail, and we've been doing that. That's the point, right? Especially in the beginning, when you're setting things up, you will maybe be doing quite a bit of ClickOps: just deploying things with points and clicks through the user interface of the cloud provider. And it will look like this.
B
But at some point you should go from ClickOps to automated scripts, builds, and pipelines; environment setup and all that should be automated. Then you shouldn't see that many red entries, unless something breaks in your code or your tests. But this is typically the first stage. So now it runs, and you can also notice that it looks almost the same as what you have seen before.
B
But now you can see that the URL is different: it's actually a public URL, one of those generated ones, so there's no point in trying to type that long URL to see if it works; we'll show you the working one in a little bit. The whole point here is that now we have a public API. Now we're getting very close to making it available to everyone with the least possible effort.
B
We just wrote code and deployed it; we've set up an environment, but we don't have to do all the management of that environment, in a sense. But we should probably talk a bit about the downsides, because we talked about serverless and how amazing it is, but it's not only fun times and happy-day scenarios.
B
Sometimes you don't want to go serverless, or you might want to consider going a bit more the other way, and, like I said, there are different ways of doing that. First of all, when people talk about serverless, a lot of people think about Cloud Functions, or something similar like Lambda functions.
B
They're called different things by different providers, but basically the smallest unit, quote-unquote, of serverless is a serverless function, and that is not what we're talking about here; we're talking about the full application. You can do serverless functions as well, but those should be very small, tiny little things; there is no point in building huge applications out of them.
B
A lot of serverless functions talking to each other: after a while you will probably lose control of all of those, and it will be really hard to manage, so don't do that. But here we're talking about applications, and like I said, there are a lot of different ways. The most old-school version is installing a VM and putting everything on it. Maybe you want to do that, but there should be very specific reasons for it; maybe it's an old application.
B
Maybe it's not really running well, or it has some specific hardware or other requirements. But typically the next step, or at least a few steps over, would be to put it on a Kubernetes cluster. You can manage the Kubernetes cluster yourself, or you can use a managed service for that.
B
But in that case you will need to have an actual application with enough users, enough load, and enough containers for it to be worth running, because most applications don't really need huge multi-node Kubernetes clusters and stuff like that. There are also other ways of running applications that are more permanent, in the sense that they will not scale down to zero.
B
I know Cloud Run now offers the ability to scale down to one instead of zero, in case you don't want that. But still, if you go serverless, you have to be prepared that technically your application can be killed and recreated without any user noticing the change, and that brings a few requirements with it. So in most cases you should have a serverless application
B
that is stateless. That means you do not store any state in that container. The database, the storage, everything lives somewhere else, in a different container, a different environment, a different place that all the containers can access, so that any container can be killed and restarted without losing any data or having to redo things.
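As a tiny sketch of that idea in plain Java: a counter kept inside the container instance is lost on restart, while a counter kept in an external store survives. Here a `HashMap` stands in for a real external store such as a database, and all class and method names are illustrative, not from the talk:

```java
// Sketch: why serverless containers must be stateless.
// "Instance" classes stand in for one container; the shared map stands in
// for an external store (e.g. a database) that outlives any one container.
import java.util.HashMap;
import java.util.Map;

public class StatelessSketch {
    // State kept inside the container: lost whenever the container is killed.
    static class CounterInInstance {
        private int count = 0;
        int increment() { return ++count; }
    }

    // State kept outside the container: every instance reads and writes the
    // same external store, so restarts don't lose anything.
    static class CounterInStore {
        private final Map<String, Integer> store;
        CounterInStore(Map<String, Integer> store) { this.store = store; }
        int increment() {
            int next = store.getOrDefault("count", 0) + 1;
            store.put("count", next);
            return next;
        }
    }

    public static int inMemoryAfterRestart() {
        CounterInInstance first = new CounterInInstance();
        first.increment();
        first.increment();
        // The platform kills the container and starts a fresh one:
        CounterInInstance second = new CounterInInstance();
        return second.increment();   // back to 1; the state was lost
    }

    public static int externalAfterRestart() {
        Map<String, Integer> externalStore = new HashMap<>();
        CounterInStore first = new CounterInStore(externalStore);
        first.increment();
        first.increment();
        // Fresh container, same external store:
        CounterInStore second = new CounterInStore(externalStore);
        return second.increment();   // continues at 3
    }

    public static void main(String[] args) {
        System.out.println("in-memory after restart: " + inMemoryAfterRestart());
        System.out.println("external after restart:  " + externalAfterRestart());
    }
}
```

The same reasoning applies to uploaded files, sessions, and caches you can't afford to lose: anything that must survive a restart belongs outside the container.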
B
Loading data is fine; losing that data is bad. So yeah, there are different ways of running it, but if you go serverless, you want to make sure it's stateless. And also, if you have an application that is very heavy on startup time, on loading stuff from the database or wherever it loads data from, if that takes a long time...
B
So if it's a large container that takes a long time to pull over the network, it's probably not a good idea to go serverless. If you have a very tiny little container that pulls really quickly, but your application inside it takes some time to load, again, probably not a good idea to go serverless, because users will spend a lot of time waiting for that thing to load, and to them it will look like the application is slow, unresponsive, laggy, and all those kinds of things, at least in the beginning, at least on the cold starts, which is what you call the first start of the application.
B
The first call coming in to the application as it is just starting up. So, you know, serverless is not a solution for everything, but it's a solution for a lot of things; there are some cases where you don't want to do that. And speaking of...
D
B
Performance and all these kinds of things: even if you go serverless, even if your application is fast enough and all that, you still have to try to optimize the build of your containers. But that's not as important as actually loading the containers as fast as possible. So you need to optimize, optimize, optimize: make sure that the size of the container is small.
B
Make sure that the first startup of your application is fast, and then, if you need more data and stuff like that, maybe you can lazy-load it later. The point is: make sure that your users do not experience a lag when the first call comes in. That's all on the container level and the application level. But there is also another important part, and that is that the runtimes you're using to run your application inside that container can also differ.
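The lazy-loading idea can be sketched in plain Java with a memoizing supplier: the expensive work happens on the first request that needs it, not at startup, and only once. The "reference data" load below is a stand-in for a real database or remote call; all names are illustrative:

```java
// Sketch: lazy-load heavy data so the container's first startup stays fast.
import java.util.List;
import java.util.function.Supplier;

public class LazySketch {
    static int loads = 0;   // counts how often the expensive load really runs

    // Pretend this hits a database and takes a long time.
    static List<String> loadReferenceData() {
        loads++;
        return List.of("alpha", "beta", "gamma");
    }

    // Memoizing supplier: defers the delegate until first use, caches result.
    static <T> Supplier<T> lazy(Supplier<T> delegate) {
        return new Supplier<T>() {
            private T value;
            private boolean done = false;
            @Override public synchronized T get() {
                if (!done) { value = delegate.get(); done = true; }
                return value;
            }
        };
    }

    // Nothing is loaded when the class initializes, so startup is cheap.
    static final Supplier<List<String>> referenceData =
            lazy(LazySketch::loadReferenceData);

    public static void main(String[] args) {
        // First request touching the data triggers the load...
        System.out.println(referenceData.get());
        // ...later requests reuse the cached result.
        System.out.println(referenceData.get());
        System.out.println("total loads: " + loads);
    }
}
```

In a Quarkus application the same effect is usually achieved by simply not doing heavy work in startup observers and letting the first request pay for it, but the memoization pattern is the core of it.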
B
So they will have slightly different startup times, and they will be specially tuned for specific things or specific use cases and stuff like that. And like we said, we've mentioned it a few times already: both Jakarta EE and MicroProfile are specifications. It's up to the providers to actually implement those, and they will be implemented in slightly different ways, meaning that performance might be different.
B
So that's a good thing to keep in mind as well. The one we're actually going to be talking about here is Quarkus, as you can probably have seen already from the title. We mention Quarkus because it is really quick to start up, so it fits really well into the whole
B
serverless story that we have. And also because it actually goes even further: it has two different modes for doing things, and we'll talk a little bit about that as well. There is a JVM mode that lets you start your application as a general JVM application, but it also has a native mode where everything gets compiled down to a binary, and then you can just start that up as well. Before we do that...
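As a sketch, both modes are built from the same project with different flags; the exact commands and output paths vary by Quarkus version and project setup, so treat these as illustrative rather than exact:

```shell
# JVM mode: a regular jar, run on a JVM
./mvnw package
java -jar target/quarkus-app/quarkus-run.jar

# Native mode: ahead-of-time compilation with GraalVM into a single binary
# (a GraalVM/Mandrel toolchain, or container-based build, must be available)
./mvnw package -Dnative
./target/*-runner
```

The native binary trades a much longer build for a much faster start, which is exactly the trade-off discussed below.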
C
Yeah, what's really cool about this: you can see a list here, right? It's like check, check, check. You change pom files, properties files, and YAML files. What you don't see here is anything of your Java or Kotlin code, so you don't need to change the code at all to switch from another Jakarta EE or MicroProfile compliant runtime to Quarkus, or the other way around.
C
So you get really portable code in that sense. But I think we should also mention why Quarkus. It has a really fast startup time, that's one of its characteristics, and it can do that because Quarkus is made to be run inside a container. Most Java runtimes do quite a lot of stuff to support drop-in or config changes and so on
C
during runtime: as the application is running, files can drop in from outside, or new beans can be added, and so on. But Quarkus said that, well, nowadays we are running inside a container anyway, which means the environment is stable once it's set, so that won't actually happen. That means we don't need that support, and that again means you don't need to do those checks when the runtime is starting up, which means a fast startup time.
C
We could talk lots more about Quarkus; I guess K. in the chat can also say something as well, he's working at Red Hat. But here we are with Quarkus: the same application in Quarkus, and you see the response is pretty much the same. But what you don't see is that this answer came much faster than the previous one.
B
Yeah, and we'll see that. I think we'll have a little bit of time at the end, so we can actually show you a cold start for Quarkus and all these kinds of things. But we have one thing that I already mentioned a little bit, another thing that we want to cover: how do we make things...
B
Well, then. So, do you want to talk a little bit more about the native mode, Mads?
C
I can go on about this for a long time, but I'll try to do the short version. As said, you can do Quarkus in JVM mode or you can do it in native mode, and in native mode it uses GraalVM, which says that, well, we don't actually need the JVM, so let's run it as a binary, which is smaller and starts up crazy fast.
C
So for settings like this, where the scale-up time from zero to one is really important, GraalVM is the thing to go to. And it's really emphasized nowadays; it looks like GraalVM is kind of a primary focus, so it's getting...
B
Better. And the thing that's worth mentioning here is that the different modes have their trade-offs. I mean, if you're creating an application like ours, which is kind of simple, with not that many users expected, not super production-critical or business-critical, then that's cool, and you don't really have to think about those things. But if you have a very specific kind of environment and specific requirements for your application, then you need to think about those different environments.
B
Different modes will have slightly different optimizations for things, right? With JVM mode, you will see that the JVM is doing some smart things and actually optimizing calls. So in the beginning calls might be a little bit slower, and then the JVM will kind of learn and they become a little bit faster over time, because the JVM will do some optimizations, garbage collection, memory optimization, all these kinds of things, in the background. With the binary mode, you don't have that.
B
So then you will have a much more linear, but maybe a little bit slower, response time. But now we're talking about milliseconds there, so like I said, in your regular application you won't notice that. If you're doing a super crazy performance-optimized trading application where every millisecond or microsecond counts, then you need to think about issues like this, but in general you don't really need to think about that.
D
C
Yes, and I guess there are a few other things we should mention as well. The first one is a shout-out to Red Hat for the documentation on Quarkus native; it's getting better and better, and now it's really good. If you google Quarkus native and get into the Quarkus documentation, you can find most things you need there. The other thing is that building an application with Quarkus and GraalVM is heavy; it's a lot of heavy lifting to create the native file.
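That heavy lifting is usually pushed into a multi-stage container build, so the GraalVM toolchain does the compilation in the first stage and never ships in the final image. A rough sketch follows; the image tags and paths are assumptions for illustration, not the exact ones from the talk's repository:

```dockerfile
# Stage 1: heavy native compilation with a GraalVM/Mandrel builder image
FROM quay.io/quarkus/ubi-quarkus-mandrel-builder-image:jdk-21 AS build
COPY --chown=quarkus:quarkus . /code
WORKDIR /code
RUN ./mvnw package -Dnative

# Stage 2: tiny runtime image containing only the compiled binary
FROM quay.io/quarkus/quarkus-micro-image:2.0
COPY --from=build /code/target/*-runner /application
EXPOSE 8080
CMD ["/application"]
```

The build stage can take minutes, but the resulting image is small and starts in well under a second, which is what matters for serverless cold starts.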
B
And, I mean, you don't really care, right? If builds are happening in the background and you already have an application running, your users won't see anything different. So it's fine to sacrifice build time, because nobody except us developers is waiting for the build to finish, and nobody really sees that part. But they see the startup; they see the performance.
B
They see all of this once your application is running. And so now we have different kinds of URLs: we have a Quarkus one, and then we have a native one. Again, don't try to type those URLs; there is no point in trying to do that, because they are wrong on purpose, or older versions. But we'll show you the live one in a second. And now we are kind of getting to the end of the story.
B
We started our story by writing a little bit of logic. We added a little bit more to it, or actually quite a bit more, in the form of Jakarta EE and MicroProfile and all that, to make it available through a web interface, which is what more or less all modern applications are expected to have. And then we containerized it. We also found a way to deploy it with less toil: we chose to deploy it to the cloud.
B
It could have been your own cloud, but we ended up on a public cloud. And then we also talked about how to make it less expensive, because if you go serverless, it will cost you less: you only pay for the time your application is actually up and running, and not all the other time. So if your application has fewer users and less load, it's a good thing to have this scale-down-to-zero behavior.
B
But you know, it has some disadvantages as well. Don't chase this scale-down-to-zero thing if you don't have to, but you know, it can be a good idea as well. And that's pretty much it. So now we have a serverless application, and we also talked a little bit about when not to go serverless, and about what you can do to make your serverless applications even faster.
B
All of this, both the Quarkus multi-stage builds and all the application code for the MicroProfile random strings, this whole tiny little application with all the build files and everything, is available on GitHub. So feel free to take a screenshot of those two links here and follow them. And another thing you can take a screenshot of: this is us on social media.
B
So these are our links on Twitter, but you should be able to find us with the same name, or maybe a slightly different handle, at least for me, on other platforms as well: Mastodon, Bluesky, whatever is out there, you'll probably find us there as well. So that's another one.
B
Last thing: now I think we can do a really quick switch of the screens and show a little bit of a demo. Before we do that, any comments or questions? I've been looking at the chat; there is nothing coming in yet. But if you have any comments or questions, or anything else related to cloud-native applications or Java or anything like that, feel free to write them in the chat while we're doing the demo. So I will stop sharing. Thanks a lot, Carlos.
B
C
Take it away. And while I start sharing, I can also say: please ask questions in the chat if you have anything, and I will also post those slides on the Meetup page or somewhere; I guess Serena can help us with that afterwards. So, what you can see here now is Google Cloud Run. This is live, so this is the real stuff, and you can see we have the run-string service, the Quarkus one, and the Quarkus native one.
C
So when we say that things are fast, what does that really mean? Let's copy the URL. What we can see now is that the application is stopped: no instances running, and I haven't done anything in hours. So now I'm typing the URL, and this is the JVM-mode Quarkus. When I press enter now, you can see that the container is starting, Quarkus is starting, and the application is starting. Let's see how...
B
Tell us when you actually press enter, just so... yeah, because we don't see that.
D
C
Yeah, there you go. Okay, so that's with the JVM mode; it takes some seconds. So let's head over to the native one. I copy the... you can see also here that it's not...
B
Really. And you have to think, I mean, for some of you it probably sounds impressive already, but if you think about what is actually happening, how many things are actually happening in the background, it becomes even more impressive in a way, because here I didn't even manage to count to one, right? So it's less than a second.
A
B
I managed to count to like five, maybe something around five seconds, and five seconds is fast, but less than a second is even faster. I don't know if my calculations were correct, but we can see it very soon in the logs. Yeah, about a second, yeah, exactly. And the thing is, think about what actually happens in the background: something has to wake up and receive that HTTP request.
B
It has to try to figure out where the request is supposed to go, go to the container registry, pull that container image, find a place for it to run, start up the container, start up the OS (or whatever the base layer is in the container), then that thing needs to start up the application itself, and then it has to actually process that HTTP request and send the response back to the user.
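One way to see this for yourself is to time a request against a scaled-to-zero service and then repeat it while the instance is still warm. The URL below is a placeholder, not the real one from the demo:

```shell
URL="https://example-service-xyz.a.run.app/hello"

# First request: the platform may have to pull and start the container.
curl -s -o /dev/null -w 'cold: %{time_total}s\n' "$URL"

# Second request hits the already-running instance.
curl -s -o /dev/null -w 'warm: %{time_total}s\n' "$URL"
```

The gap between the two numbers is roughly the cold-start cost that the JVM versus native comparison is all about.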
B
So there are quite a lot of things happening in the background that we don't really think about. And for me as a user, those 5 seconds versus 1 second are a different kind of thing, because 5 seconds is not that bad, but I would probably realize that, oh, this thing is probably starting up now. Whereas with the other one...
B
I wouldn't really know if it's already running, if it's warm and pre-warmed up, or if it's just a cold start; I would probably not have a chance to actually know that.
B
So that's really, really nice. Any questions? Any comments? I still see nothing. And do you... yeah.
A
Well, thank you very much for that excellent presentation, team. It was great and very well received by the audience. We don't have any questions, but I do want to take a few moments to quickly wrap things up here. We are always looking for more Jakarta Tech Talks through 2023, so the fall season, and we're just about halfway through booking our last presentations.
A
So if you're interested, feel free to fill out the form that I linked in the chat a few moments ago. And of course, your feedback on the Tech Talk program is greatly appreciated. Once we close out this webinar session, there's going to be a little green button that will pop up; please let us know how you enjoyed the talk. So thank you again, Mads and Rustam, for an amazing presentation, and you all have a wonderful day.