From YouTube: Powering Jakarta EE and MicroProfile on Azure with Open Liberty and OpenShift | J1 Live 2021
Description
In this demo-heavy and mostly slide-free session we will show first-hand how to run Open Liberty on Azure managed OpenShift. We will demo in real time how to stand up a cluster quickly and deploy a realistic Java EE/Jakarta EE/MicroProfile application that integrates with some services on the cloud.
B: It... we're live now.
A: Okay, perfect. All right, thanks for joining, everyone. My name's Graham Charters. I'm with IBM, and I'm a technical product manager responsible for Liberty, that is, Open Liberty and WebSphere Liberty, which is an open-source Java EE, Jakarta EE, and MicroProfile runtime. My co-speaker today is Reza Rahman; he's a principal program manager at Microsoft, responsible for Java on the Azure cloud.
A: The session we've got is entitled "Powering Jakarta EE and MicroProfile on Azure with Open Liberty and OpenShift", which is a bit of a mouthful, but it's essentially all about two things. One is taking existing enterprise applications, maybe written in Java EE, and evolving those into the cloud, and as part of doing that, moving them into a Kubernetes environment. The other route is essentially creating new cloud-native applications using the Jakarta EE and MicroProfile technologies.
A: Again, maybe born on the cloud this time, deployed into Kubernetes. Although the talk is kind of more generic, the technologies we're going to demonstrate are essentially Open Liberty as the Jakarta EE and MicroProfile runtime, OpenShift as the Kubernetes environment (though it could be another Kubernetes environment), and the Azure cloud as the cloud of choice. It's not a random choice that we picked these: I work for IBM, Reza works for Microsoft, so they're kind of our preferred choices.
A: Okay. So when we look at cloud, it's probably quite clear that cloud is the biggest thing to happen to us in the industry since, I guess, the creation of the internet. And when we look at what's happening in terms of applications, we hear a lot about best practices for creating net-new cloud-native applications and so on. But the reality is that a lot of applications that are on the cloud actually originated in companies' data centers: they were on-premise applications originally, and companies are moving them to the cloud, either to increase their agility or to save costs and so on. The rate at which things are moving to the cloud is extremely fast. The last number here is essentially saying that by the end of this year (and we're pretty much at the end of this year), 59% of the people who responded to this IDG survey said most or all of their applications will be on public cloud.
A: So, in just a few weeks' time. When we look at public cloud and cloud native, there are two clear winners, I think, in terms of technologies, and those are containers and Kubernetes. You'd be hard pushed, I think, to find a cloud that doesn't support Kubernetes as a deployment and orchestration technology, and you'd be hard pushed to find a cloud that doesn't support containers. They don't necessarily come together, in the sense that some people are using containers for development, as a way of doing lightweight isolation, without necessarily using Kubernetes; we also see a lot of environments emerging in public cloud around serverless, which are based around containers but not necessarily Kubernetes. But from this particular Red Hat survey, probably the most interesting statistic is the fact that two-thirds of the people who responded said Kubernetes was very or extremely important when it comes to cloud native. So the clear thing here is that containers and Kubernetes are the way to go in public cloud, or for cloud native, in fact. So where do Java EE, Jakarta EE, and MicroProfile fit in?
A: They supported Java EE as the programming model, and enterprises have relied on them for many years, since the late 1990s. What we're seeing now is the evolution of Java EE applications towards Jakarta EE: Oracle donated the technology to Eclipse, and that's how Jakarta EE was created. So you can think of Jakarta EE as the evolution, or the way forward, for existing enterprise Java applications, but also for creating new cloud-native applications. And then there's MicroProfile.
A: MicroProfile, essentially, you can think of in two ways. You can think of it as a kind of extension to an existing Jakarta EE or Java EE application: if the runtime supports it, you can take an existing monolithic application, containerize it, and use the MicroProfile capabilities to make that container work very well in a Kubernetes environment. You can also start to instrument it for observability, and start exposing and consuming APIs, while it's still, potentially, a monolithic application. The other way to think about the technologies is that MicroProfile might actually be your starting point, and you're creating new cloud-native microservices using MicroProfile, which builds on a subset of Jakarta EE; you may pull in additional Jakarta EE capability if you need it, for example to access a database, and so on. The great thing about the technologies now being at Eclipse is that it gives us a much more open community, a much more open environment for collaboration, and you can see I've just picked out, off the Eclipse Jakarta EE page, the images for the compatible implementations.
A: We now have, I think it's somewhere in the region of 19 compatible implementations of Jakarta EE, and even for the most recent Jakarta EE 9.1 there are a good number of implementations available.
A: And similarly with MicroProfile. MicroProfile started outside of Eclipse but soon moved to Eclipse, and again, the open-community nature of it, and the fact that there's a reasonably low barrier to entry for implementations, means we already have 13, actually it's 14, compatible implementations of MicroProfile.
A: So, I mentioned I'm responsible for Open Liberty. That's my preferred implementation, of course. I took on the role of product manager for Open Liberty because I think it's great technology and I wanted to help people be successful with it. In order to do that, I tried to distill why I really like Open Liberty as a technology, and I came up with these six reasons, but you can think of them as three groupings.
A: The first two are all about helping developers, and I guess new DevOps teams, develop cloud-native applications. Liberty has a great developer experience that we integrate into the most popular IDEs. It has a rapid inner-loop developer experience called dev mode, where you can essentially fire up a Liberty runtime, including in a container, make code changes and configuration changes, save those, and they're automatically reflected in the running server. There's no need for a rebuild, no need for a redeploy, no need to restart the server.
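The dev-mode loop described above can be sketched as a couple of commands (a sketch, assuming a Maven project with the liberty-maven-plugin already configured; the commands are printed rather than executed here, since they need a real project):

```shell
# Start Liberty dev mode: the server starts, and saving a code or
# server.xml change is picked up live, with no rebuild/redeploy/restart.
echo "mvn liberty:dev"
# The container flavour of the same loop (runs the server in a container):
echo "mvn liberty:devc"
```
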
A: We also integrate very tightly with the Kubernetes environment, so through the MicroProfile APIs you can integrate with the Kubernetes health capabilities, for example. We provide container images that are production-ready and available in popular locations, so on Docker Hub, but also in the IBM Container Registry. And one of the things we do is self-tune the runtime.
A: Think about how to get the best performance out of your application when you're running in a containerized environment: when you're building your container, you don't necessarily know what that production environment is going to look like, so it's very difficult to tune to that environment. So, rather than burdening the developer with tuning the application, the runtime tunes itself. You can deploy it into one environment and get optimal performance, and if the environment changes over time, you continue to get the best performance.
A: The next two are all about how we deliver Liberty and help you stay current, stay secure, and avoid technical debt. With zero migration, what we say with Liberty is: if you've got an application that you deployed to Liberty three or four years ago, it will continue to work on Liberty today, unchanged. So if you keep your application the same and keep your server configuration the same, you can deploy that to the latest level of Liberty without any changes.
A: So you can pick up security fixes and performance enhancements with zero effort. And to help along with that, if you're moving to a more continuous integration and continuous delivery approach, we actually have a continuous delivery approach to the Liberty runtime itself: we have regular and frequent releases. We release Liberty every four weeks, and it's a complete release of Liberty, with zero migration.
A: That means you can pick up the latest release every four weeks, and you get any security fixes that have happened in the meantime. The last two are all about cloud-native applications: if you're deploying microservices and so on, you want a lightweight environment, and that's what Liberty gives you, essentially, when we compare it against other runtimes, for example Tomcat and Spring Boot.
A: So we've been collaborating, Reza and myself, for, is it just over a year now? I think it is. The scope of the collaboration is quite broad, but what we're going to talk about specifically in this session is Open Liberty deployed to Azure Red Hat OpenShift. We also have the options to deploy WebSphere Liberty and WebSphere traditional (so, traditional WebSphere) into Azure virtual machines, and WebSphere Liberty into either Azure Kubernetes Service or Azure Red Hat OpenShift.
A: Our goal is to make it as simple and easy as possible for you to get started deploying your Jakarta EE or MicroProfile applications into these Kubernetes cloud environments. Once you're in that cloud environment, you can integrate with services maybe still being provided in your own data center, so maybe reaching back into a Db2 instance on-premises, or you can start leveraging the cloud services: a PostgreSQL database, which I think Reza is going to demonstrate, or Azure Active Directory, and so on.
A: Okay, so, last slide: we've got some resources to help you if you want to read more about Liberty and its capabilities, and also more resources to find out about the capabilities in the Azure cloud (that's the middle set of links). And if you want to give things a try, there are links to guidance documentation, or to guides on Open Liberty if you just want to learn the Liberty technology in a bit more detail.
A: Okay, so I'm going to hand over to Reza. Reza is going to do the more interesting, exciting bit, which is, well, a demo. So, over to you.
B: As I get started with the demo here, there aren't going to be any more slides; it's pretty much going to be live coding the rest of the time. I'm basically going to do a show-and-tell of how to take a Jakarta EE and MicroProfile application and run it on Liberty, and then take that Liberty runtime and run it on the Azure cloud, on Azure Red Hat OpenShift in particular. We're just going to do all of that.
B: For the rest of the time, I'll sort of do a brief tour and give you hopefully interesting commentary along the way. Now, before I start doing that, one thing I want to remind you: we're not going to be doing Q&A at the very end of this session; we're going to be doing Q&A throughout.
B: So, as I'm doing this, start posting your questions, please, and Graham will keep an eye on that. Graham will ask me what the questions are, because I won't have time; I need to keep my head down running through the demo, but Graham is going to watch that corner. And we're going to try to get rid of these video panels and floating controls, because I don't really need all those; Graham is going to do all of that for us.

Alright, so here's the application. I already have it up and running; as you can see in my console, I started the application just a bit earlier. I think we all pretty much know what a Jakarta EE and MicroProfile application looks like, so I don't want to dwell on that too much; I'll just show you the pom.xml real quick.
B: Hopefully you can see the font okay. As you can see, the only dependency I have in this application is Jakarta EE 8, and that's about it. It's not a Jakarta EE 9 application quite yet, but soon enough I'll move this application over to Jakarta EE 9 also. This application, by the way, is available out there on GitHub; I'll show you where to get it, and you can definitely take a look at the source to your heart's content.
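That single dependency looks roughly like the following (written to a scratch file here; the coordinates are the standard Jakarta EE 8 umbrella API, which is an assumption rather than something read off Reza's screen):

```shell
# Sketch of the one pom.xml dependency the demo app needs
cat > jakarta-dependency.xml <<'EOF'
<dependency>
  <groupId>jakarta.platform</groupId>
  <artifactId>jakarta.jakartaee-api</artifactId>
  <version>8.0.0</version>
  <!-- provided: the Liberty runtime supplies the implementation -->
  <scope>provided</scope>
</dependency>
EOF
```
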
B: Basically, it's your regular Jakarta EE stack, as you can imagine: JSF for the front-end technology, and, interestingly, that JSF front end will actually be talking to a REST back end using JAX-RS, and then CDI, EJB, JPA, Bean Validation, and all the good stuff that folks here already know about. So I'll show you the application; here's the application up and running. Nothing too fancy, it's just a simple CRUD application. I'll do the CRUD part for you here, so I'll just do a delete, and, of course, it's gone.
B: And it's producing XML through JAX-RS; if I switch the header, it would be producing JSON using the JSON Binding API technology. So that's about it for the application itself. Like I said, the application isn't the main point here; the main point is just, you know, a Jakarta EE application running on Liberty, which, for all the reasons that Graham mentioned, is a very good runtime. So I actually don't need the IDE anymore.
B: I'm going to shut down my Liberty instance here, and I'm also going to shut down my IDE, because, again, I don't really need it right at this moment. All right, so now let's move into how we put this application onto the cloud, namely, in this case, using OpenShift.
B: So I'll tell you how to get to all the good stuff, to get to all of the stuff that Graham talked about: all you need to do is open up your favorite search engine and type in "websphere azure". That's about it!
B: The first result will take you to a page that basically overviews all of the stuff that we're working on together. There's the little diagram that you saw before, and there are many different pathways along which we want you to be able to take these Jakarta EE and MicroProfile applications, whether net-new or migrating, and these are all of the different solutions that we are offering for you.
B: So there's Open Liberty and WebSphere Liberty on ARO, or Azure Red Hat OpenShift (I'll talk more about Azure Red Hat OpenShift in a moment); or you could take your WebSphere traditional application server applications the way they are and run them on virtual machines on Azure, which is also perfectly fine; and then, of course, you can also run on vanilla Kubernetes, which is like taking Open Liberty or WebSphere Liberty and running it on vanilla Kubernetes.
B: Once you get it up and running, you don't have to worry about all of the nitty-gritty details of configuring and installing Kubernetes; it just works for you. If you want, it can do auto-updates and all that nice stuff, and we'll also see some cool monitoring and management capabilities built into Azure Red Hat OpenShift. So there are a couple of different ways you can proceed.
B: The first pathway you can consider is sort of the more traditional way of doing things, and probably what you would do if you were going to any cloud other than Azure: take a step-by-step approach of getting OpenShift up and running, then enabling the built-in container registry, then installing the operator, creating a namespace, assigning an administrator for that namespace, and then, finally, getting to deploying your application. You can do that; it's perfectly fine.
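The manual steps just listed can be sketched as a dry run (commands are printed rather than executed, and the project and user names are hypothetical stand-ins, not values from the talk):

```shell
# 1. Expose the built-in OpenShift image registry outside the cluster
echo "oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\":{\"defaultRoute\":true}}'"
# 2. Create the namespace (an OpenShift "project") for the application
echo "oc new-project open-liberty-demo"
# 3. Assign an administrator for that namespace
echo "oc adm policy add-role-to-user admin demo-admin -n open-liberty-demo"
# 4. The Open Liberty operator is then installed (e.g. via OperatorHub)
#    before finally deploying the application.
```
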
B: We have instructions so that you can do all of that stuff by yourself, down to the lowest possible level of detail, where you can do everything you want to do and customize it in any way you want. And by the way, the application that I'm going to show is linked here also. So, if you go up here, it says, towards the...
C: So it's got to be higher up, somewhere higher up, where we give you a link to the...
B: So that's one way of doing things. Actually, when Graham and I started on this, about a year and a half ago, that's where we started: we started by just saying, hey, we're going to give people basic instructions on how to get started with Kubernetes and/or ARO and Liberty, and these are all of the steps that you need to do in order to get from point A to point B.
B: But what we noticed in the process is that a lot of those steps, as I pointed out a moment ago, are pretty boilerplate. There's not that much you would probably want to change while doing them; they're just very manual, kind of laborious, and kind of error-prone. So what we did was apply this concept called Azure Marketplace solution templates, and basically what it does is take all of those boilerplate steps and automate them behind a UI
B: on the Azure portal. So here's a link to that; let me take you there next. Hopefully I'm logged in as me. Yes, I am. Right.
B: I'm going to show you what this Marketplace solution template looks like, so here it is; I'm in the Marketplace solution template. Now, let's say you're in the Azure portal and you forgot what that link was, and that's perfectly okay. So let's actually shut this window down and pretend I'm in the Azure portal home. What you can do is, at the very top bar here,
B: simply type in "liberty", and you'll see that there's a bunch of Marketplace entries that will show up. The first one is the same concept, a Marketplace solution template for running Liberty on Kubernetes, but the next one is the one we're going to use. And by the way, there's the WebSphere traditional one also; if you wanted to do that, there's a Marketplace solution for that too. But this is the one we'll look at right now.
B: So let's click on that, and here's how it looks. There's some verbiage on what this is about and what it's going to do for you; essentially, it'll automate a number of things.
B: If you read through this, it will tell you: it's going to create a virtual network for you, set up that virtual network, set up and install Azure Red Hat OpenShift for you, install the Liberty operator, create a namespace (or project, in OpenShift terms), and assign a simple administrator to that namespace so that you can begin deploying your application.
B: And here's the UI. The UI isn't super complicated; we've tried to keep it relatively simple. Like I said, there's not a lot you'd really want to customize here; the customization is fairly minimal, so, as a result, the UI is also pretty simple. You're going to pick your Azure subscription (by the way, you can get one for free for a year, in case you don't have one, just to try things out), and you create or pick a resource group.
B: A resource group is really nothing much more than a grouping of different Azure resources. You can think of it as an analog of a mini data center: everything that you would want for a given application, everything you would think of logically as a data center, is what you would put in a resource group. You pick a region, and you'll see there are tons of data centers across the globe for Azure; you pick the one that's closest to you. All of this stuff is actually created automatically.
B: In order to automatically create all this stuff, you need what's called a managed identity. So you're creating this identity, and this identity is going to have some privileges to go ahead and create all these resources on your behalf. You go ahead and add that; in this case you could pick, for example, "liberty-aro-demo-identity", which I created ahead of time, and there are also instructions on how you do all of that.
B: There's a little blurb here on how you create the correct managed identity, with the correct permissions, so that it can do the rest of the steps on your behalf. The next step is configuring the cluster, and you can do one of two things: you can either create a brand-new OpenShift cluster and do the provisioning basically from scratch, or you can pick an existing cluster that you might already have, in which case it will basically do everything else except creating the cluster.
B: So it'll do that: it'll install the Liberty operator for you, and it'll install the application into the namespace, and so on. Since we are deploying Red Hat software, you will need what's called a Red Hat pull secret. It's super easy to get: you just create an account on Red Hat and you have it; again, there are instructions on how to do that. And then you specify the so-called project administrator that you want.
B: I've actually gone ahead and created all of this already, because it's going to take about half an hour, maybe as much as 45 minutes, and obviously we don't have that much time. So I've gone ahead and done that. I did specify a project administrator name and a password, just to give a special user for this particular application. And then you can configure the application: by default you don't need to, but if you do want to, you can just specify a publicly available Docker image name.
B: This could be on Docker Hub or some other registry where there's anonymous access, and you specify the full path for that application; that's then the application that will be deployed.
B: Obviously, because it's using the Liberty operator, we're assuming that this application is a Liberty application of some kind, and we will add a demo application to this as well, in case you just want to get started and take a look at how things look; you can do that using that. You can specify the number of replicas, meaning how many copies of this application you want, for failover purposes, and you can bump that up and down
B: however you want. And then, finally, that's it: once you specify all that information, you hit Create, and everything will be created for you, just by the wizard. So, as I mentioned, I've already done that; let me show you what was produced. By the way, I do have a database up and running; in fact, the application that I was running locally was also running against that database.
B: So here's the database; I'll show you the database real quick. It's a PostgreSQL database that I, again, created ahead of time, and there it is: you can see Azure Database for PostgreSQL, so that's the one we're using. As for the deployment of ARO, I had the wizard, the solution template, create it for me in this resource group. Once everything is done, you get two things: you get the Azure Red Hat OpenShift cluster
B: (this is that guy here; let me show you a little bit better; so there's the Azure Red Hat OpenShift cluster), and you get the virtual network. Inside this cluster, as I'll show you in a moment, we also have the Liberty operator and the namespace correctly created. Now, the other thing the solution template will do, once it is done:
B: you find it by basically looking at the deployments, taking a look at the longest-running deployment, and going to Outputs; it will produce a number of interesting bits of information so that you can actually begin using this cluster.
B: It'll give you the cluster name (that's the one you saw in the resource group), the resource group name that we used, and the cluster console URL; this is the URL you're going to use to actually access the OpenShift cluster, and we're going to do that in just a moment. It'll give you the container registry URL, so the registry is also allocated, and you can begin to deploy applications using that internal registry inside of Red Hat OpenShift. And it'll give you the project name.
B: I don't really want the developer view; I want the admin view, because I'm going to be doing admin stuff. All right, so this is basically the mint environment created by the wizard. As you can see, there's the OpenShift registry that was created, and there is the project. Let's select the project, and you'll see, once we go to the Operators, here's the Open Liberty operator that was installed. If I go to Open Liberty applications, there isn't one yet.
B: The point here is that we have a mint environment, so now we're going to go ahead and log in to this environment and actually deploy something. The first thing I'm going to do is log in locally with the OpenShift command line, so let's go ahead and do that, and I'll show you the syntax. Don't worry about this one too much; you literally copy and paste it from your OpenShift console: if you go over here and say "Copy login command",
B: this is what you're getting, and it's also, by the way, output by the solution template. So let's go ahead and log in with a new token. I'm logged in, and, as you can see, those are the two projects that I have access to.
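The copied login command has roughly this shape (printed rather than executed here; the token and server are placeholders, never commit a real token):

```shell
# What "Copy login command" hands you (placeholder values):
echo "oc login --token=sha256~REDACTED --server=https://api.example.eastus.aroapp.io:6443"
# List the projects the logged-in user can see, then switch to ours
# (the project name is a hypothetical stand-in):
echo "oc projects"
echo "oc project open-liberty-demo"
```
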
B: Am I in the right spot? Yes, I am. The next step is for me to take this WAR file and put it into a Docker image, and I'll show you that in a moment. Right now, let me first show you the Dockerfile that I'm using. It's pretty simple: as you can see, it's just starting from Open Liberty, copying over a PostgreSQL jar file, because that's what I'm using to connect to the database, and copying over the server.xml. The server.xml doesn't have anything too exciting in it.
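The Dockerfile Reza describes can be sketched like this (written to a scratch file; the base-image tag and the jar/war names are assumptions, not copied from his repo):

```shell
cat > Dockerfile <<'EOF'
FROM openliberty/open-liberty:full-java11-openj9-ubi

# JDBC driver referenced by the dataSource in server.xml
COPY --chown=1001:0 postgresql.jar /opt/ol/wlp/usr/shared/resources/

# Server configuration (features, dataSource, HTTP endpoint)
COPY --chown=1001:0 server.xml /config/

# The application itself
COPY --chown=1001:0 target/javaee-cafe.war /config/apps/

# For production images, run configure.sh to produce a leaner,
# pre-configured runtime (skipped in the demo to save time):
# RUN configure.sh
EOF
```
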
B: There's one exciting thing that I'll show you, namely that I'm enabling MicroProfile Health and MicroProfile OpenAPI, but other than that it's just pretty simple stuff: just connecting up to the database, and that's pretty much it; and then copying over the WAR file. So let me do it here, so that you can see my command line a bit better, and let's go ahead and do the Docker build. This should go pretty fast, because I already did this earlier in the morning;
B: most of everything should be cached. In fact, Liberty has a really nice facility called configure.sh: a script that creates a highly efficient Liberty image and configures Liberty in a very efficient way. I am cheating here and taking the lazy way out, which is just starting with full Liberty, to speed up the demo; and even then the demo is not going to be sped up, because my cache expired, so I am in fact going ahead and downloading everything.
B: But luckily I have pretty high-speed internet, so this should be done relatively quickly. configure.sh takes a little bit of time, but it is worth it, certainly if you're doing production deployments; it produces a much more efficient Docker image for you. Okay, and almost... already we have a Docker image. So now we're going to run this Docker image locally, just to validate that everything is working, and if you haven't seen it before, I will show you the syntax. All right, so here's the syntax; hopefully you can see my screen okay (if you cannot, yell, and I will increase the font size). So: docker run means run the Docker image, and the very last argument, javaee-cafe:v1, is the Docker image that I just built locally.
B: -it means run it interactively, in the foreground; -p, long story short, basically enables me to access this application through port 9080 (think of it as opening up the port); and then, in the server.xml, which I'll show you in just a moment, I reference all of the database connection information as environment variables, so I need to pass in those environment variables here. Basically, the environment variables are pointing to the database that I created on Azure.
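Put together, the local smoke test looks roughly like this (a dry-run sketch: the command is printed, not executed, and the image name, host, and credentials are hypothetical; the variable names only need to match whatever the server.xml references):

```shell
# Hypothetical database coordinates (would point at the Azure PostgreSQL instance)
DB_SERVER_NAME="example.postgres.database.azure.com"
DB_NAME="postgres"
DB_USER="demouser@example"
DB_PASSWORD="changeit"

# -it: interactive/foreground; -p: map port 9080; -e: inject the coordinates
echo docker run -it --rm -p 9080:9080 \
  -e DB_SERVER_NAME="$DB_SERVER_NAME" \
  -e DB_NAME="$DB_NAME" \
  -e DB_USER="$DB_USER" \
  -e DB_PASSWORD="$DB_PASSWORD" \
  javaee-cafe:v1
```
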
B: And, as you can see, the server is saying that it is live, and if I explicitly check liveness, then it is live. I can also look at the OpenAPI endpoint, which is, I think, one of the more useful ones, generated through the MicroProfile capabilities in Open Liberty, and here's the OpenAPI UI. I think it's pretty nice, especially for development purposes.
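The endpoints being checked can be written out as commands (printed rather than executed, since they need the running container; localhost:9080 assumes the -p 9080:9080 mapping from the docker run):

```shell
echo "curl http://localhost:9080/health/live"   # MicroProfile Health liveness check
echo "curl http://localhost:9080/openapi"       # MicroProfile OpenAPI document
echo "open http://localhost:9080/openapi/ui"    # interactive OpenAPI UI in a browser
```
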
B: You saw the XML output, so let's take a look at the JSON output from the REST endpoint: Try it out, Execute, and, as you can see, there is a REST version of my endpoint. All right, so everything is working. Basically, I've validated that my Docker image is good to go, ready to put into action, and ready to put into OpenShift.
B: The first argument is the Docker image that I've created locally. I need to give it the fully qualified name: the first part of the qualification is the location of the OpenShift container registry, which, by the way, was output by the solution template. In fact, the solution template actually produced this entire Docker tag; I copied and pasted it from there. Then comes the project name, so the namespace that was created for me; that's the project name there. And then, finally, the name of the application and the version.
B: So let's go ahead and do the tag. The tag is really nothing much more than adding some metadata, so it's very quick. Then we are going to log on to the OpenShift container registry, so let's go ahead and do that; that's what this is doing: it's logging into the container registry that I'm working against.
B: Again, explaining the syntax: docker push pushes the image, followed by the full name, right, so the name of the registry, then the name of the project, and then the name of the application. So we'll do a push, and this should go quickly, I hope, because, again, I did this earlier in the morning. I hope my cache has not expired, and in this case it has not, so everything should be done soon.
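The tag / login / push sequence can be sketched as a dry run (commands are printed, not executed; the registry host and project name are hypothetical stand-ins for the solution template's actual outputs):

```shell
REGISTRY="default-route-openshift-image-registry.apps.example.eastus.aroapp.io"
PROJECT="open-liberty-demo"

# Fully qualify the local image: registry / project (namespace) / name:version
echo docker tag javaee-cafe:v1 "$REGISTRY/$PROJECT/javaee-cafe:v1"

# Authenticate against the internal registry with the OpenShift session token
echo docker login -u unused -p '$(oc whoami -t)' "$REGISTRY"

# Upload the image so the operator can deploy it
echo docker push "$REGISTRY/$PROJECT/javaee-cafe:v1"
```
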
B: So it is uploaded now, and now we are ready for action: we can go ahead and begin to deploy the application, and we're going to deploy it through the operator right here. Before I do that, I promised I was going to show you the server.xml, so let me just do that through GitHub.
B: Hopefully you can see that okay; let me increase the font, just like that, and I think that should be decent already. So, as you can see, I've enabled the MicroProfile Health feature.
B: That's why you're seeing the MicroProfile Health endpoints, and then there's the data source configuration, like I said; that's why it's a pretty simple server.xml. The next thing we're going to do is the deployment of the application, and in order to do that we're going to make use of the Liberty operator, and I need to pop down the font just a little bit.
B
You can do that a couple of different ways, but the most efficient way to do this is just writing out your deployment YAML file, and I don't think it's that scary. If you were to do this in plain Kubernetes, without the operator, it's a lot more verbose; using the operator it's actually pretty natural, in my opinion. So let's go ahead and do that deployment.
B
So we have everything up and running, and I'll explain the syntax of the deployment descriptor. Here's the OpenLibertyApplication; let's create it from the YAML file. Let's take a quick look at the YAML file here. The kind is OpenLibertyApplication, which is what says to use the operator to do the deployment. The metadata is very simple: giving it a name and a namespace.
B
The
project
that
this
is
going
to
be
deployed
at
this
is
the
name
of
our
project.
I
want
to
create
two
replicas.
This
is
the
image
name
that
I
want
because
we're
using
the
openshift
container
industry.
You
do
not
need
to
specify
the
full
paw.
So
openshift
knows
how
to
resolve
this
docker
image,
an
image
full
policy,
exposing
it
meaning
expose
it
through
a
route.
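A sketch of the operator deployment descriptor just described; the names, namespace, and API version are illustrative and should match the Open Liberty operator version you have installed:

```yaml
apiVersion: openliberty.io/v1beta1
kind: OpenLibertyApplication
metadata:
  name: javaee-cafe           # hypothetical application name
  namespace: open-liberty-demo # the project it is deployed to
spec:
  replicas: 2
  # No registry host needed: OpenShift resolves project-local images
  # from its internal container registry.
  applicationImage: javaee-cafe:1.0.0
  pullPolicy: Always
  expose: true                 # expose the application through a route
```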
B
We have this already, and here are our values. Let's go to the form view; I just like to validate that everything is okay there.
B
The app labels are fine, there's the image name, the pull policy is Always, I'm going to bump this up to three replicas, expose is true, and we're going to hit Create. All right, so this works asynchronously; it looked like it went really fast, but it's not deployed quite yet. So let's go look at the pods now, and you can see that I now have three different pods.
B
Remember when I showed you this before, it was empty. I think the pods are going to come up pretty quickly, and then there's a little bit of a lag, because now OpenShift is checking whether they are live or not, so there's a liveness probe in action. As you can see, the first one is active; let's wait for the other two to become active. So they're all active now, and you can look at the deployments.
B
I showed you this before and it was empty, but now we have three pods and there's a deployment. If we go to services, you'll see that there's a Jakarta EE cafe service deployed under the hood, of type LoadBalancer actually, and there is a route. The route is where you're going to get the actual endpoint. You can also get the endpoint directly in the application view, but we'll just get it from here. So let's hit this link and see what happens. All right.
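The console views just described also have `oc` CLI equivalents; a sketch, assuming you are logged in and using the hypothetical project and application names from earlier:

```shell
# Check the pods, deployment, service, and route from the CLI
oc get pods -n open-liberty-demo                  # expect three Running pods
oc get deployment javaee-cafe -n open-liberty-demo
oc get service javaee-cafe -n open-liberty-demo   # type LoadBalancer
oc get route javaee-cafe -n open-liberty-demo \
  -o jsonpath='{.spec.host}'                      # host of the exposed endpoint
```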
B
Already
so
there's
our
application,
so
it
is
up
and
running.
You
can
also
access
the
open
api
ui
through
here,
but
I
don't
really
want
to
do
that.
I
think
you
more
or
less
get
the
point.
So
once
you
do,
the
deployment
here
you'll
see
a
number
of
other
interesting
capabilities.
If
you
go
to
the
pods,
for
example,
and
pick
one
of
the
pods,
you
can
do
things
like
take
a
look
at
the
logs,
so
you
can
see
what
logs
are
being
produced
by
by
open
liberty.
B
You
can
even
log
on
directly
onto
this
terminal,
okay,
so
I'm
literally
in
in
the
in
the
terminal.
Now
I'm
in
inside
that
pod,
so
doing
this
kind
of
stuff
is
actually
pretty
difficult
in
native
kubernetes.
So
this
is
where
this
is
the
kind
of
kind
of
thing
where
openshift
really
shines.
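The Logs and Terminal tabs in the console map to `oc` commands as well; a sketch with an illustrative pod name:

```shell
# List the pods first to find the generated pod name
oc get pods -n open-liberty-demo
# View the Open Liberty logs for one pod (name is illustrative)
oc logs javaee-cafe-7d5c9b8f6-abcde -n open-liberty-demo
# Open a shell inside that pod, like the console's Terminal tab
oc rsh -n open-liberty-demo javaee-cafe-7d5c9b8f6-abcde
```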
C
B
Nope, I think we're good already. So it's deleted; let's go to the pods. No more pods, no more deployment, so our application is gone.
You can do things like rolling updates, and you can configure this with GitHub Actions so that everything I did is done through CI/CD instead of manually; basically it would be the same sort of steps that I went through.
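A hypothetical sketch of what such a GitHub Actions workflow could look like; the registry host, secret names, and file paths are assumptions, not the demo's actual configuration:

```yaml
name: build-and-deploy
on:
  push:
    branches: [ main ]
env:
  REGISTRY: default-route-openshift-image-registry.apps.example.eastus.aroapp.io
  IMAGE: open-liberty-demo/javaee-cafe
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the application
        run: mvn -B package
      - name: Build, tag, and push the image
        run: |
          docker build -t "$REGISTRY/$IMAGE:${GITHUB_SHA::7}" .
          docker login -u "${{ secrets.REGISTRY_USER }}" \
                       -p "${{ secrets.REGISTRY_TOKEN }}" "$REGISTRY"
          docker push "$REGISTRY/$IMAGE:${GITHUB_SHA::7}"
      - name: Apply the OpenLibertyApplication
        run: oc apply -f deploy/openlibertyapplication.yaml
```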
So that's about it. As you can see, it's a pretty powerful combination, and it has a lot of different capabilities.
B
All right, all right, that would bore people to death otherwise. So thank you very much, we really appreciate it. If you're interested in any of this, do begin to take a look at it.
B
One
thing
that
I'll
mention
is
ibm
and
red
hat
also
has
a
joint
offer
to
help
you
with
migration
cases
as
we're
developing
these
offers,
so
you'll
find
a
little
form
you
can
fill
out
and
then
you
can
get
in
touch
with
us
and
we'll
help
you
on
your
journey,
hopefully
from
you
know
all
the
way
from
jakarta
and
micro
profile
to
running
it
all
in
azure
and
in
open
open
shift,
or
what
have
you.