Description
This session will include an introduction to each of the additional factors that have been added to this methodology, why they matter and the OSS Java tools and technologies that can be used to enable them.
A
Hello, everyone, and welcome to another Jakarta Tech Talk. My name is Serena, and joining us today is Grace Jensen, who will be presenting on the topic of thriving in the cloud: going beyond the 12 factors. If you have any questions for Grace as we move through today's presentation, feel free to ask them in the chat or the Ask a Question tab. Grace, over to you.
B
Great, thanks so much, I appreciate it. Hi everyone, thanks for joining today. I know you've all got very busy days, so I appreciate your time here, joining me for this presentation. As Serena said, I'm going to be presenting on how we not just survive, or barely get by, in the cloud, but how we really thrive within this cloud environment, and what we need to do to achieve that.
B
This slide, I make no apologies for putting in pretty much all of my presentations. It's a feeling we get; hopefully you've seen Toy Story, so you'll get this reference. As developers, we get excited by the cloud, because it enables lots of different advantages and capabilities that we didn't necessarily get when running on-prem.
B
For example, those include things like reduced cost, increased scalability, increased speed, resiliency within our application in case of failure or bottlenecks, flexibility within our applications, and also taking advantage of fashion. You might not think of the cloud, or IT generally, as particularly fashionable, but really, just like other industries, we follow fashions and trends, especially when it comes to innovations, and the cloud is one of those: it's cool, it's trendy, and we want to be a part of it.
B
So all of these benefits are why we get this feeling and this excitement, why we want to be effective when we're developing for the cloud, and why we want to move onto it. In order to actually do that, though, it can be difficult to know what we need to achieve within our applications. What behaviors do we need to introduce to be cloud native?
B
I took definitions of cloud native from various different cloud providers and vendors who offer cloud native technologies and applications, and I pared them back: I stripped out all of the specific language, such as the language you're coding with or the platform itself, down to the common words and the common themes they all talked about, and this is what resulted. We're trying to achieve applications that are highly observable, scalable, and resilient.
B
Like I was talking about, these are the behaviors we want: applications that are rapid and have great speed. Importantly, they need to be loosely coupled. We want to take advantage of modern innovations and technologies, and often we're using either hybrid or public cloud. So really, this is the very broad sweep of things we want to be achieving to be able to be cloud native. But the question is: how do we achieve these? Because these are great words.
B
We look at them and go, yeah, that's exactly what I'm trying to achieve. But how is the key question here. How do we go about achieving this? This is where methodologies like the 12 Factor methodology are really useful, especially because they're vendor, platform, and language agnostic. That's why I particularly love this methodology. Many of you may have come across it before; originally, it was defined by the developers over at Heroku.
B
The 12 factors cover end-to-end development, right from the very start (okay, how do we get started? where's our code base? what dependencies do we need?), working our way through to more of the DevOps side (how am I deploying? where are my operations within this?), all the way through to looking at things like dev/prod parity and where are my logs.
B
What data am I gathering from admin processes, and that kind of thing. So it really was designed to go from end to end and serve as that underlying foundation, that introduction to the discipline of building and deploying applications built specifically for the cloud, and preparing dev teams to be able to do this. Now, this was a really great start in terms of laying out these foundations. However, this methodology was created around a decade ago, so technology has moved on.
B
The
cloud
has
moved
on
and
our
understanding
of
what
we
need
for
the
cloud
and
what
made
what
applications
need
to
be
Cloud
native
has
also
changed,
and
so
our
methodologies
need
to
change
with
that,
and
so
this
original
12-factor
methodology
was
sort
of
Taken
and
edited
and
built
upon
to
be
able
to
create
the
15
Factor
app
methodology.
This
was
designed
and
developed
by
Kevin
Hoffman
from
dynatrace.
He
has
a
book
which
describes
it
and
you
can
access
that
for
free
online
I
put
the
link
at
the
bottom
here.
B
If
you
wanted
to
go
into
it
in
a
bit
more
detail,
now
I'm
not
going
to
make
this
too
difficult
for
you.
There
are
three
additional
factors,
but
I've
highlighted
them
here
to
make
it
really
easy
to
see
so
as
well
as
sort
of
iterating
upon
and
expanding
or
making
the
original
12
factors
more
specific,
as
you
can
see
here,
for
example,
instead
of
just
code
base,
our
first
Factor
here
is
now
one
code
base,
one
application,
so
you
can
see,
we've
edited
it
and
iterated
upon
it.
A
bit.
B
So
what
we're
going
to
do
in
this
presentation
is:
go
through
the
original
12
factors
and
how
they've
changed
in
this
15
Factor
app
methodology
and
what
technologies
we
can
use
for
them
and
then
we're
going
to
spend
a
bit
more
time
on
the
three
additional
factors
that
have
been
introduced
within
this
15
Factor
app
methodology.
B
So
let's
take
a
look
at
the
original
12
factors.
First
and
I'm
going
to
go
through
some
of
these
first
ones
quite
quickly,
just
because
they're,
probably
things
you're
naturally
doing
anyway
or
have
done,
especially
if
you're
already
involved
in
Enterprise
Java
applications
and
creating
and
building
Enterprise
Java
apps.
So
the
first
one
is
one
code
base,
one
application.
As
I
mentioned,
the
original
was
just
code
base.
So
what
we're
really
doing
here
is
we're
trying
to
be
really
specific
in
terms
of
okay.
B
What
do
we
mean
by
code
base
because
you
can
have
multiple
code
bases
and
you
could
have
you
could
store
them
in
different
places
and
we
want
being
very
specific
or
very
helpful
with
that
title.
So
this
fact,
or,
and
the
reason
it's
been
made,
more
specific-
is
really
to
show
this
one-to-one
relationship
between
an
application
and
its
code
base.
So
Cloud
native
applications
should
consist
of
a
single
code
base.
B
That's
then
tracked
with
revision
control
systems,
and
this
is
essentially
sort
of,
even
if
you
have
sort
of
a
code
base,
it's
a
source
code
repository
or
a
set
of
repositories
that
really
share
a
common
route
and
can
be
used
to
reduce
any
number
of
immutable
releases.
So
we
have
a
one-to-one
relationship
between
the
app
and
the
code
base,
but
a
one-to-many
relationship
between
our
code
base
and
our
various
different
deployments,
whether
that's
production,
staging
QA,
devsecops
whatever
it
might
be.
B
The
reason
that
this
is
important
in
Cloud
native
development
is
to
enable
proper
versioning,
firstly
to
also
support
sort
of
collaboration
between
development
and
various
other
teams,
whether
you've
got
sort
of
teams
like
support
teams
or
teams
that
enable
you
to
publish,
for
example,
to
production.
It
also
helps
to
enable
much
faster
time
to
Market.
So
how
do
we
achieve
this
again?
B
This
is
why
I'm
going
through
this
one
quickly,
you're,
probably
already
using
one
of
these
Technologies,
so,
for
example,
git
repositories
is
really
a
great
way
to
be
able
to
achieve
this,
whether
you're
using
GitHub,
you
have
Enterprise
or
git
lab.
There
are
alternatives
like
bitbucket
or
if
you're,
going
on
a
cloud-specific
providers,
so
things
like
Google,
Cloud,
Source
repositories,
AWS
code
commit
there's
lots
of
different
ways
in
which
we
can
use
git
repositories,
lots
of
different
versions.
B
This
is
a
great
way
to
ensure
this
one-to-one
relationship,
the
third
one,
because
we're
skipping
that
additional
API
management
we'll
go
we'll
go
back
onto
that,
so
the
third
one
or
the
second
Factor
we're
looking
at
is
dependency
management,
so
dependency
management.
This
is
really
about
the
fact
that
you
sort
of
in
classic
Enterprise
environment
we're
very
used
to
this
concept
of
sort
of
having
a
mummy
server.
B
So
this
is
a
server
that
provides
essentially
everything
our
application
needs
and
it
takes
care
of
their
every
desire
and
every
whim
from
satisfying
the
applications
dependencies
to
providing
a
server
in
which
to
host
the
app
it
does
everything
for
them.
However,
as
we
move
from
sort
of
more
traditional
Enterprise
environments
onto
the
cloud,
our
apps
have
to
essentially
mature
and
grow
up
and,
as
such
sort
of
our
applications
can't
be
relying
upon
the
the
environment
to
take
care
of
these
needs.
B
We
have
to
instead
have
the
app
bring
their
dependencies
with
them.
For
example,
migrating
to
the
cloud
and
maturing
your
development
practices
means
you
have
to
wean
your
organization
off
the
need
for
mommy
servers.
That's
kind
of
what
this
factor
is
all
about
sort
of
explicitly
declaring
and
isolating
application
dependencies
using
things
like
declaration
manifest,
for
example,
it
can
be
stored
within
the
code
base
or
using
things
like
dependency,
isolation
tools
during
application
execution.
B
Why
is
this
important?
Well,
this
helps
to
provide
consistency
between
your
various
different
environments,
which
links
into
another
factor
that
we
have
in
this
methodology.
It
also
helps
to
simplify
setup
for
developers
who
might
be
new
to
the
application
and
helps
to
support
portability
between
Cloud
platforms,
because
no
matter
where
you're
going
You're,
not
assuming
that
you're
going
to
have
these
things
provided
for
you,
you're
bringing
them
with
you
you've
matured
in
that
sense.
B
How do we achieve this? Well, the first thing is you have to explicitly declare and isolate your dependencies. Instead of packaging things like third-party libraries inside your microservice, specify your dependencies in something like a Maven pom.xml or a Gradle build file. These enable you to freely move up or down between versions of a dependency and to have a very clear manifest in which all of your dependencies are declared. So that can be a really useful tool to use.
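As a concrete illustration (the artifact and version shown here are only examples, not a recommendation), a Maven pom.xml keeps every dependency in one explicit manifest:

```xml
<!-- pom.xml: dependencies are declared explicitly, never bundled by hand -->
<dependencies>
  <dependency>
    <groupId>org.eclipse.microprofile</groupId>
    <artifactId>microprofile</artifactId>
    <version>6.1</version>
    <type>pom</type>
    <!-- provided by the runtime, so it is not packaged into the artifact -->
    <scope>provided</scope>
  </dependency>
</dependencies>
```

Bumping the version element is all it takes to move the whole application to a different release of a dependency.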
B
Many
of
you
are
probably
already
using
them.
Definitely
recommend
checking
it
out.
If
you
haven't
already
the
fourth
one
is
design,
build,
release
run
now
when
you
compare
that
to
the
original
12
Factor
application,
Manifesto,
what
we're
missing
here
and
what
we've
added
is
this
design
phase,
so
we've
added
in
this
design
step
into
this
process.
This
is
about
taking
a
single
code
base
through
the
build
process
to
produce
a
compiled
artifact.
B
The
ass
artifact
is
then
merged,
with
configuration,
there's
external
to
the
application
to
reduce
in
a
mutual
release.
So
the
reason
that
this
design
phase
was
really
added
in
is
because
we're
no
longer
doing
sort
of
waterfall
development
where
we
do
all
of
the
design
at
the
beginning
and
then
just
go
all
the
way
through
for
months
at
a
time
with
our
development
to
produce
the
end
result
we're
working
in
a
much
more
agile
Manner,
and
so
we
have
to
realize
that,
as
we
go
through
development,
things
may
change
requirements
might
change.
B
So
we
need
to
be
aware
that
our
design
might
change
alongside
that.
So
it's
all
about
sort
of
Designing
small
features
that
can
be
released
often
and
have
sort
of
a
high
level
design.
That's
then
use
to
inform
everything
we
do,
but
also
knowing
that
that
design
might
change
and
small
amounts
of
the
design
are
then
part
of
every
iteration,
rather
than
being
done
completely
and
entirely
up
front.
So
it's
important
that
we
build
this
into
our
sort
of
process
when
it
comes
to
creating
our
applications.
B
The
point
of
this
factor
is
really
to
understand
that,
although
this
is
all
part
of
one
process
overall,
this
is
there
should
be
sort
of
strict
separation
between
the
various
different
stages
of
this
process,
between
design,
build
release
and
run.
The
reason
that
this
is
important
to
note
and
to
add
into
our
Manifesto
is,
and
to
into
our
sort
of
like
methodology,
is
to
prevent
changes
to
the
code
and
our
configuration
at
runtime,
which
in
turn
then
lowers
the
number
of
unexpected
failures.
B
It
enables
more
effective
release
management
and
maximum
delivery
speed,
while
still
keeping
high
confidence
that
our
application
is
going
to
work.
The
way
we
expect
it
to,
and
it
also
allows
us
to
be
able
to
introduce
things
like
automated
testing
and
deployment
and
sort
of
standardization
for
our
application.
How
do
we
achieve
this?
Again?
B
You
may
be
using
tools
already
in
terms
of
things
like
tecton,
which
can
be
really
helpful
in
in
offering
us
sort
of
CI
CD
pipelines
and
that
automation
that
I
talked
about,
but
it's
important
to
remember
that
these
factors
aren't
always
specifically
related
just
to
a
technology.
It
can
also
be
more
about
the
people
and
the
organization
and
how
we're
set
up
as
teams
to
be
able
to
deliver
Cloud
native
applications
rather
than
specific,
Technologies
and
I.
B
Think
this
factor
is
one
of
those
factors
that
relates
more
to
how
we,
as
teams,
perceive
this
process
of
development,
as
opposed
to
necessarily
strict
technologies
that
relate
to
each
one.
So
tecton
really
helpful
in
terms
of
automating
that
CI
CD,
Pipeline
and
helping
us
go
through
this
process.
But
it's
important
for
us
to
consider
these
as
strict
and
separate
stages.
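To make the separation concrete, a Tekton Pipeline can name the stages as distinct, ordered tasks. This is only a sketch; the pipeline and task names here are invented, and a real pipeline would reference tasks from your own catalog:

```yaml
# Sketch of build -> release -> run as strictly separated Tekton stages
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-release-run
spec:
  tasks:
    - name: build              # compile the single codebase into an artifact
      taskRef:
        name: maven-build
    - name: release            # merge artifact with external config into an immutable release
      taskRef:
        name: create-release
      runAfter: [build]
    - name: run                # roll the release out; no code changes happen here
      taskRef:
        name: deploy
      runAfter: [release]
```

Because each stage is its own task, nothing in the run stage can reach back and mutate what the build stage produced.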
B
The next factor is configuration. What we mean by this, really, is any value that varies across different deployments, whether it's your dev deployment, your QA, or your production. This can include things like URLs and other information about backing services, information to locate and connect to databases, credentials to a third-party service, or information that might normally be bundled in properties or configuration files. The point of this factor, within our cloud native methodology, is again strict separation.
B
We
should
be
strictly
separating
credentials
from
code
if
you've
got
any
credentials
within
your
code,
you're
going
to
have
a
massive
security
vulnerability,
and
that
is
exactly
what
we're
trying
to
avoid.
We
need
to
be
storing
configuration
and
and
credentials
separately
to
our
code
in
environmental
variables.
B
What
that
allows
us
to
do
is
that
allows
us
to
be
able
to
have
one
place
to
change
configuration
so
we're
not
having
to
go
into
each
individual
file
or
class
to
be
able
to
change
one
specific
factor
or
one
specific
piece
of
configuration,
and
it
allows
us
to
have
that
security
in
terms
of
not
having
credentials
directly
in
our
code.
Why
is
this
important?
Well,
as
I
said,
it
allows
us
to
modify
application
Behavior
without
having
to
make
code
changes
within
our
classes.
B
It
simplifies
our
deployment
to
multiple
environments
or
different
platforms,
for
example,
and
it
reduces
our
risk
in
terms
of
security
and
vulnerabilities.
So
how
do
we
enable
this?
Well,
a
lot
of
this
is
about
extracting
away
configuration,
whether
that
be
credentials
or
not,
and
allowing
us
to
be
able
to
store
them
as
environmental
variables.
So,
in
order
to
do
that,
we
can
use,
for
example,
open
source
specifications
like
micro
profile
config
if
you've
not
come
across
micro
profile
before
definitely
worth
checking
out.
B
It's
a
great
open
source
specification
that
links
really
well
into
Jakarta
and
actually
shares
some
of
those
specifications
like,
for
example,
Jax,
RS,
Jason,
B,
Jason
P.
So
there's
lots
and
rests:
there's
lots
of
different
specifications
they
share
and
actually
a
lot
of
the
Jakarta
eats
Community,
also
work
on
microprofile
as
an
open
source
specification
as
well.
B
It's
been
designed
to
help
people
who
are
building
microservice
based
applications
on
Java
to
do
that
more
effectively
by
offering
things
like
configuration,
Health,
metrics
and
a
bunch
of
others,
so
micro,
Ripple
config,
is
the
one
that
I
would
recommend
to
check
out
for
this
specific
Factor.
This
allows
us
to
be
able
to
place
configuration
in
properties
files
that
then
can
be
very
easily
updated
without
having
to
recompile
our
microservices.
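For example, a default can live in the microprofile-config.properties file packaged with the application, and a higher-ordinal config source such as an environment variable or system property can override it at deploy time. The key name here is purely illustrative:

```properties
# src/main/resources/META-INF/microprofile-config.properties
# Default value; overridable per environment without recompiling
inventory.service.url=http://localhost:9080/inventory
```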
B
We
can
also
make
use
of
another
specification,
which
is
CDI
injection,
to
be
able
to
directly
inject
a
set
of
properties
into
our
application
as
well.
So
a
couple
of
the
specifications
there
for
micro
profile
that
are
really
helpful.
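MicroProfile Config does this declaratively with @Inject and @ConfigProperty; stripped down to plain Java, the underlying idea, look the value up outside the code and fall back to a default, is just this (the class and key names are illustrative):

```java
// Minimal sketch of externalized configuration: the value lives outside the
// code, so changing it never requires editing or recompiling a class.
public class ConfigLookup {

    /** Return the environment variable if set, otherwise the default. */
    static String get(String key, String defaultValue) {
        String value = System.getenv(key);
        return (value == null || value.isEmpty()) ? defaultValue : value;
    }

    public static void main(String[] args) {
        // In a real deployment, DATABASE_URL would be set per environment.
        System.out.println(get("DATABASE_URL", "jdbc:postgresql://localhost:5432/dev"));
    }
}
```

The same lookup order (environment over packaged default) is what MicroProfile Config's ordinal-based config sources give you for free.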
The next one we have is logs. This is pretty much exactly the same as the 12 Factor one, so I'm just going to go over it really quickly. What is this factor about?
B
That
should
be
a
platform
concern,
not
a
developer
concern,
and
so
this
factor
is
all
about
being
able
to
improve
the
flexibility
for
introspecting
our
Behavior
over
time
and
enabling
those
real-time
metrics
to
be
collected
and
analyzed
without
the
developer
having
to
be
concerned
with
that,
taking
advantage
of
the
platform
that
we're
using,
how
do
we
do
this?
B
Firstly,
log
to
stood
outstudo
only
start
treating
those
logs
as
event
streams
and
consider
things
like
the
aggregation
and
the
processing
and
the
storage
of
those
logs
as
a
requirement
that
should
be
satisfied
not
by
your
application,
but
by
your
platform.
We
can
make
use,
for
example,
of
tools
like
the
elk
stack
or
fluent
D,
which
can
be
help
us
to
be
able
to
sort
of
capture
and
analyze
our
log
emissions
as
well.
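A sketch of the habit this factor asks for: write one event per line to stdout and let the platform (Fluentd, the ELK stack) do the collecting and storing. The event format and field names here are arbitrary:

```java
// Log events go to stdout as single parseable lines; aggregation, storage,
// and search are the platform's job, not the application's.
public class StdoutLogger {

    /** Format one event as a single key=value line. */
    static String event(String level, String message) {
        return String.format("level=%s msg=\"%s\"", level, message);
    }

    public static void main(String[] args) {
        // The application only ever writes to standard out.
        System.out.println(event("INFO", "order accepted"));
    }
}
```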
B
Disposability
is
the
next
Factor.
This
is
a
bit
of
an
abstract
diagram,
so
please
bear
with
me
with
this.
I
am
going
to
explain
it
so
disposability.
This
is
all
about
the
fact
that
an
application
can't
scale
deploy,
release
and
recover
rapidly
if
it
can't
start
and
shut
down
rapidly
and
gracefully.
So
what
we
need
to
do
is
build
applications
that
are
not
only
aware
of
this,
but
really
truly
embrace
it
to
take
full
advantage
of
the
cloud.
B
So
microservices
need
to
be
fault
tolerant,
but
they
also
need
to
be
able
to
function
under
any
situation
and
be
able
to
deal
with
any
potential
bottlenecks
or
failures.
So
this
factor
is
all
about
maximizing
robustness
of
our
applications
by
enabling
fast
startup
and
graceful
shutdown,
and
to
do
that.
The
mentality
that
we
need
to
have
when
we're
designing
our
microservices
is
that
we
need
to
treat
our
app
instances
on
our
microservices
as
cattle
and
not
pets,
and
the
reason
I
say
that
is
in
this
example.
B
What
I'm
really
saying
is
if
we
have
a
bird
or
a
flock
of
sheep
or
cows,
and
one
of
them
gets
sick.
So,
let's
say
one
of
them
is
not
functioning
as
it
should
be.
What
we
do
is
we
generally
as
harsh
as
it
sounds,
get
rid
of
that
particular
animal,
and
if
we
need
more
in
the
herd,
we
replace
it
with
a
new
one,
whereas
when
it
comes
to
pets,
for
example,
I
put
Carl
the
cat
here,
I
love,
Carl
Carl's,
my
cat.
B
If
Carl
gets
sick,
I
take
him
to
the
vet
I,
look
after
him,
I
nurse
him
back
to
health.
If
he's
not
functioning
the
way
he
should
be
instead
of
just
getting
rid
of
him.
So
really
what
we
need
to
do
within
our
applications
is
not
treat
them
like
Carl
the
cat,
because
what
we're
going
to
end
up
doing
is
having
reduced
functionality
and
resilient
responsiveness
of
our
applications.
B
So-
and
this
is
so,
this
factor
is
really
enabling
our
increased
resiliency
to
potential
unexpected
and
sudden
shutdowns
bottlenecks
or
failures.
Why
is
this
important?
This
really
enables
us
to
facilitate
fast
and
elastic
scaling,
a
rapid
deployment
if
we
have
things
like
new
code
or
configuration
changes,
and
it
provides
that
robustness
that
we
need
that
we
saw
in
that
original
word
map
right
at
the
beginning
of
this
presentation.
It
also
enables
us
to
be
able
to
have
things
like
Auto
scaling
within
our
application
as
well
to
help
deal
with
load.
B
We need to maximize robustness with very fast startup and automatic scaling, and to do that we can use things like MicroProfile Fault Tolerance and its fallback behaviors. This allows us to code in what behavior we want if something were to go wrong within our application, so that we have a very clear path of what's going to happen and what we expect our application to do. So that's one specification you can use, again from MicroProfile.
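MicroProfile Fault Tolerance expresses this declaratively with annotations such as @Retry and @Fallback. Written out imperatively, the behavior those give you looks roughly like this sketch (the retry count and fallback value are arbitrary, and a real implementation would also handle delays and specific exception types):

```java
import java.util.function.Supplier;

// Sketch of retry-then-fallback: the caller always gets a defined answer,
// even when the underlying call keeps failing.
public class Resilience {

    static <T> T withRetry(Supplier<T> call, int maxRetries, T fallback) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return call.get();          // happy path: first success wins
            } catch (RuntimeException e) {
                // swallow and retry until attempts are exhausted
            }
        }
        return fallback;                    // defined, degraded behavior
    }

    public static void main(String[] args) {
        Supplier<String> flaky = () -> { throw new RuntimeException("service down"); };
        System.out.println(withRetry(flaky, 2, "cached-answer"));
    }
}
```

The point is the clear path the talk describes: failure never surfaces as an undefined state, only as the fallback you chose up front.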
B
The
next
one
is
backing
services,
so
back
in
service.
In
this
case,
what
I
mean
is
sort
of
any
service
on
which
our
application
relies
for
its
functionality?
It's
a
really
broad
definition
with
a
wide
scope.
That
is
very
intentional.
It's
kind
of
a
catch-all
it
can
include
things
like,
for
example,
like
databators
or
event
streaming
or
like
anything
that
you're
using
for
sort
of
to
perform
line
of
business
functionality
or
security.
So
what
do
we
mean
by
backing
services,
and
why
is
it
in
our
methodology?
Well,
this
is
all
about.
B
What
we
want
to
remember
here
is:
we
need
to
be
treating
any
backing
services
that
we're
using
as
bounded
or
attached
resources.
There
should
theoretically
be
no
distinction
between
a
local
or
a
third-party
service.
We
should
be
attaching
all
of
our
resources
via
URLs,
that
are
stored
in
an
apps
configuration.
Why
is
this
important
for
cloud
native
apps?
B
Well,
what
this
allows
us
to
do
is
to
be
able
to
provide
loose
coupling
between
our
service
and
our
deployment,
which
is
one
of
those
key
characteristics
of
cloud
native
applications
not
having
that
tight
coupling,
so
that,
if
one
fails,
another
one
will
fail
and
we
have
a
bottleneck
or
a
potential
unresponsive
application.
We're
trying
to
avoid
that.
So
it
also
means
that
no
code
changes
are
required.
If
we
need
to
update
our
backing
service,
let's
say
our
backing
service
changes
or
we
need
to
change
sort
of
replace
it
with
something
completely
different.
B
It
also
allows
operators
to
be
able
to
automatically
swap
out
Services
as
And
when
they
see
fit
as
well
to
automate
that
process
it
really
in
total
is
really.
What
we're
trying
to
aim
here
is
is
to
enable
resiliency
and
Agility
being
able
to
attach
and
detach
boundary
resources
at
will
whenever
we
want
to.
How
do
we
achieve
this?
So
the
first
thing
is
that
mentality
again,
this
is
this
people
think
this
team
thing
treat
our
backing
Services
as
attached
or
bounded
resources,
treating
the
same
as
if
you
were
a
third-party
service
micro
profile.
B
The
next
one
is
environmental
parity.
So
the
difference
between
this
and
the
original
12
factors
is
that
in
this
term,
instead
of
making
this
more
specific,
we've
actually
broadened
this
Factor,
because
we
recognize
that
in
the
original
12
Factor
we
only
specified
Dev
prod
parity.
That
implies
that
we
only
have
development
and
production
environments
when
what
we
really
know
is
that
often
we
have
far
more
deployments
than
this
like
QA
devsecops,
whatever
it
might
be.
B
Staging,
for
example-
and
we
want
all
of
these
environments
to
have
parity
and
what
we
mean
by
parity
is
really
to
make
sure
that
they're
as
similar
as
possible
and
that's
to
avoid
potential
issues
that
might
come
in
that
that
could
potentially
creep
in
that
we
don't
spot
in
developmental,
staging
but
crop
up
in
production
because
we're
not
using
the
same
underlying
backing
services
or
the
same
systems.
So
we
want
this
to
be
as
similar
as
possible,
which
is
why
we
want
this
parity
and
to
be
able
to
do
this.
B
We
really
need
to
apply
sort
of
a
great
deal
of
just
discipline
and
rigor
to
our
environmental
parity,
to
keep
your
team
and
your
entire
organization
to
make
sure
that
we
have
confidence
that
the
application
will
work
wherever
it's
deployed,
no
matter
its
environment.
This
Factor
allows
us
to
reduce
the
risk
of
unexpected
errors
and
to
support
our
sort
of
continuous
deployment
Pipeline
and
enables
greater
resiliency.
B
How
do
we
enable
this?
We
need
to
ensure
we're
using
the
same
backing
services
for
all
of
our
deployments
and
ensure
that
we
have
a
solid,
continuous
integration
pipeline.
All
the
way
through
deploying
to
production,
as
often
as
possible,
is
a
great
way
to
be
able
to
also
test
this
and
automated
cicd
programs
can
help
with
this,
just
as
we
showed
with
tecton
earlier.
We
can
also
use
things
like
container
tools,
so
Docker
obviously
can
help
to
make
production-like
environments
on
our
local
machines
to
make
it
more
accessible
to
us.
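For instance (the image tag and credentials below are placeholders only), a small Compose file gives every developer the same backing service locally that production uses, instead of an in-memory stand-in:

```yaml
# docker-compose.yml: run the same database engine locally as in production,
# so integration bugs surface before they reach prod.
services:
  db:
    image: postgres:16            # match the version production runs
    environment:
      POSTGRES_DB: orders
      POSTGRES_PASSWORD: dev-only-password
    ports:
      - "5432:5432"
```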
B
In
that
development
stage,
we
can
also
use
newer
Technologies
like,
for
example,
microchip
testing
and
test
containers
test.
Containers
can
be
a
fantastic
way
to
be
able
to
mimic
that
production
environment
by
allowing
us
to
have
multiple
different
containers
representing
our
backing
services,
to
also
test
that
integration
between
our
service
and
our
backing
services
within
our
application,
rather
than
just
doing
unit
tests
so
testing
in
a
much
in
in
a
way
in
which
is
much
more
aligned
with
our
production
system.
If
you've
not
checked
out,
Microsoft
testing
before
I
definitely
recommend
it
test.
B
There
is
a
link
at
the
bottom
here
where
we
have
an
interactive
guide
which
I'll
be
showing
at
the
end,
there's
a
lot
of
them
that
we
have
I've
been
linking
throughout
this
presentation
to
some
of
them,
and
it
allows
you
to
get
Hands-On
with
this
technology
without
needing
any
prerequisites
on
your
local
machine,
so
really
handy
if
you're
interested
in
trying
this
out,
but
don't
necessarily
want
to
commit
to
sort
of
downloading
and
installing
all
of
those
dependencies
locally.
B
The
next
one
is
Administrative
processes,
I'm
going
to
run
through
this
one
super
quick.
This
is
the
same
as
the
original
Factor.
So
this
is
all
about
running
admin
or
management
tasks.
As
one
of
processors,
we
need
to
be
able
to
store
code
for
admin
tasks
within
the
apps
code
base,
so
things
like,
for
example,
one-off
debug
tasks
is
an
example.
Why
is
this
Factor
important?
It
allows
us
to
be
able
to
safely
debug
the
admin
of
production
applications
and
enables
greater
reliability.
How
can
we
achieve
this?
Well?
B
The principle of port binding is all about asserting that a service or application is identifiable to the network by a port number, and not by a domain name. This is all about enabling ops efficiency. The reasoning is that domain names and associated IP addresses can be assigned on the fly, by manual manipulation or by automated service discovery mechanisms, so they can change dynamically, and that's why it's important that we consider this when it comes to cloud native applications.
B
If
we
need
to
change
where
that
particular
service
is,
which
is
why
it's
important
that
we
assign
it
via
a
port
number
and
not
via
a
domain
name.
This
is
also
about
again
similar
to
the
other
Factor.
We're
looking
at
this
shouldn't
really
be
a
a
developer
concern
when
it
comes
to
managing
the
port
assignment.
B
That
should
be
a
concern
of
your
cloud
provider,
because
it's
also
likely
managing
things
like
routing
scaling,
availability
and
fault
tolerance,
all
of
which
require
that
provider
to
manage
certain
aspects
of
the
network,
including
routing
host
names
to
port
and
mapping,
external
port
numbers
to
containerization
and
sort
of
container
ports
so
yeah.
This
is
all
about
enabling
that
Ops
efficiency.
How
do
we
do
this?
Well,
we
need
to
be
exporting
Services
via
Port
bindings.
B
That's
the
important
first
step,
and
things
like
microprofile
config
can
help
with
this
again
by
specifying
the
Newport
in
kubernetes
config
map.
Microprofile
config
can
then
automatically
pick
it
up
pick
that
value
up
and
give
the
correct
information
to
the
deployed
microservice.
So
a
bit
more
automation,
they're
involved
here
again
trying
to
ensure
that
portability
and
flexibility
micro
profile
rest
client
can
also
help
with
this
another
of
the
microprofile
specifications.
B
Rest
client
can
help
with
creating
client
code
to
be
able
to
connect
from
one
microservice
to
another.
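A sketch of that flow (all names here are illustrative): the port lives in a Kubernetes ConfigMap, is surfaced to the container as an environment variable, and the service reads it through MicroProfile Config without a rebuild:

```yaml
# The port is platform-owned configuration, not a value baked into the code.
apiVersion: v1
kind: ConfigMap
metadata:
  name: inventory-config
data:
  INVENTORY_PORT: "9080"   # read by the service via MicroProfile Config
```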
The next factor is stateless processes. We've only got a couple more to go before we get on to the new ones. This is all about the fact that microservices should be stateless. REST is a really well-adopted transport protocol, and JAX-RS can be used to achieve a RESTful architecture.
B
This
and
and
sort
of
systems
that
follow
the
rest
paradigms
are
in
nature,
stateless.
So,
in
this
way,
the
underlying
infrastructure
can
then
destroy
or
create
new
microservices
without
having
to
lose
any
information
that
might
be
sort
of
traditionally
stored
within
that
microservice.
So
what
does
this
Factor
all
about?
B
Well,
executing
it
an
app
as
one
or
more
stateless
processes
ensuring
that
all
of
our
stateful
data
is
stored
in
some
sort
of
backing
service,
and
this
is
important
because
it
allows
us
to
be
able
to
reduce
deployment,
complexity,
reduce
operational
complexity,
and
it
allows
for
much
simpler
scaling
and
Cloud
compatibility
so
again
that
portability
that
Flex
ability
and
that
reduced
complexity?
B
How
do
we
go
about
achieving
this?
First
thing
is
store
any
stateful
data
in
a
backing
service.
Don't
use
things
like
sticky
sessions
or
the
local
file
system?
Please
and
don't
use
things
like
in-memory
caches,
storing
in
memory
cache
that
your
application
thinks
is
always
available,
can
actually
bloat
the
application
making
each
of
your
instances
take
up
far
more
RAM
than
is
necessary.
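A small sketch of the difference (the store interface here is hypothetical, standing in for Redis, JCache, or similar): state lives behind a backing-service interface rather than in instance fields, so any instance can serve any request and instances can be destroyed freely:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stateless handler: nothing request-specific is kept in instance fields;
// all state goes through an external store.
public class VisitCounter {

    /** Stand-in for a backing-service cache (Redis, JCache, ...). */
    interface Store { int increment(String key); }

    private final Store store;
    VisitCounter(Store store) { this.store = store; }

    int handleVisit(String user) {
        return store.increment(user);   // the instance itself holds no state
    }

    public static void main(String[] args) {
        // Map-backed stand-in for a shared backing service
        Map<String, Integer> data = new ConcurrentHashMap<>();
        Store shared = key -> data.merge(key, 1, Integer::sum);

        // Two "instances" of the service share the same external state,
        // so either can be killed and replaced at any time.
        VisitCounter a = new VisitCounter(shared);
        VisitCounter b = new VisitCounter(shared);
        a.handleVisit("grace");
        System.out.println(b.handleVisit("grace"));   // prints 2
    }
}
```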
B
So
really,
what
we
should
be
doing
is
using
things
like
third-party
caching
services
that
can
include
things
like
gemfire
geode
is
the
open
source
version
of
that
J
cash
redis?
All
of
them
are
designed
to
act
as
a
backing
service
cache
for
your
application,
so
allowing
us
to
remain
sort
of
in
that
stateless,
State,
I
guess
so
that's
what
this
factor
is
all
about
and
leading
on
from
that
factor.
13
kind
of
relates
to
this
concurrency.
This
is
the
same
as
factor
eight
in
the
original
12
factors.
B
This
is
about
advising
us
that
cloud
native
applications
should
be
able
to
scale
out
using
the
process
model.
There
was
a
time
originally
when,
if
our
application
reached
the
limit
of
its
capacity,
the
solution
was
just
to
increase
its
size.
So
if
an
application
could
only
handle
some
number
of
requests
per
minute,
the
preferred
solution
was
just
to
make
the
application
bigger,
so
we're
creating
huge
monstrous
applications
where
we
were
adding
CPUs,
RAM
and
other
resources
to
a
single
monolithic
application.
Essentially,
we
were
doing
vertical
scaling.
B
This
other
behavior
is
typically
frowned
upon
nowadays
in
our
sort
of
cloud
native
environment,
a
much
more
modern
approach,
one
that's
ideal
for
sort
of
the
elastic
and
scalability
that
the
cloud
supports
is
to
scale
out
or
to
scale
horizontally
instead
of
vertically.
So,
rather
than
making
a
single
big
process
even
bigger,
you
create
multiple
processes
and
then
distribute
the
load
of
your
application
among
those
processes.
So
this
allows
us
to
be
able
to
scale
up
microservices
up
and
down
depending
on
the
workload.
So
how
do
we
achieve
this?
B
Well,
we
need
to
scale
out
by
the
process
model
and
treat
operating
system
and
sort
of
split
our
app
into
separate
runnable
processes
to
allow
for
that
easy,
horizontal
scaling
and
also
allow
us
to
be
able
to
Auto
scale
as
well.
To
be
able
to
achieve
this,
we
can
use
things
like
kubernetes
Auto
scaler,
which
is
a
tool
that
can
enable
us
to
have
this
sort
of
independent
scaling
and
auto
scaling.
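As an illustration (the names and thresholds here are arbitrary), a Kubernetes HorizontalPodAutoscaler adds and removes identical instances of a service as load changes, which is exactly horizontal scaling rather than growing one instance:

```yaml
# Horizontal scaling: add or remove identical instances, never grow one
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: inventory-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: inventory
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```

Note that this only works if the instances are stateless and disposable, which is why those earlier factors matter here.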
B
So
that
was
the
last
of
the
original
factors
now
I'm
going
to
Deep
dive
into
the
additional
three
factors
that
have
been
added
to
this
methodology.
The
first
of
those
is
API
first,
so
what
do
I
actually
mean
by
API?
First
again,
it's
important
to
remember
that
some
of
these
factors
don't
necessarily
map
to
a
specific
physical
requirement
imposed
by
the
cloud,
but
more
to
the
habits
of
people
and
teams
and
organizations
that
are
building
Cloud
native
apps.
So
what
is
this
Factor
all
about?
Why
has
it
been
added?
B
With cloud native applications, what we're usually designing is a microservice, and that microservice has to interact with a whole network of other microservices, and the way they interact is usually through APIs. So it's all about having that mentality, I guess, of having the API be the first thing you think of in terms of what you're producing.
B
The reason this was added to the original 12 factors is to help avoid integration failures and to formally recognize the API as a first-class artifact in the development process. Because if you can't effectively interact with the other microservices within your network or system, then you're not going to be able to create an effective application that's designed for the cloud, especially when we're deploying these in a distributed fashion.
B
It also allows for greater collaboration with stakeholders. By thinking about APIs first, right back at the design stage, we can show stakeholders what they need from each API or service, create better documentation, and mock up an API first to vet or test the direction and our plans before we start investing too much time into supporting a given API.
B
What we build might not even be what our stakeholders need or want. It also gives us greater flexibility to change that design right at the very start, before we put in those resources. How do we enable this? The first thing is that mentality change.
B
We need to be aware that we're participants in a greater ecosystem of different services and components, and we should reflect that by sharing a public contract for others to interact with, without them having to deep-dive into our code. We can utilize tools for this, like, for example, OpenAPI or API Blueprint. These are both open source specifications, and we can also utilize tools that build on top of these specifications.
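To show what that shared public contract can look like, here is a minimal OpenAPI 3 sketch; the service name, path and response schema are illustrative placeholders rather than anything from the talk:

```yaml
openapi: 3.0.3
info:
  title: Inventory Service       # placeholder service name
  version: 1.0.0
paths:
  /systems:
    get:
      summary: List the systems this service knows about
      responses:
        "200":
          description: A JSON array of system hostnames
          content:
            application/json:
              schema:
                type: array
                items:
                  type: string
```

A contract like this can be published, reviewed with stakeholders and mocked before any implementation work starts, which is exactly the point of the API-first factor.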
B
Originally, the OpenAPI specification was actually based on the Swagger specification, but this was donated to open source. So if you're already familiar with Swagger, you'll be familiar with the OpenAPI format, and this really allows that vendor neutrality, that open communication of how to utilize and interact with your particular service. So I think this is a fantastic step in the right direction of creating that ecosystem, working together more effectively and collaboratively in this
B
often very distributed and complex environment, without people having to deep-dive into my code and understand the nitty-gritty of what's happening under the covers in my microservice. The next factor that was added is telemetry. Those of you who are really keen-eyed might be thinking back to the original 12 factors and saying, "yeah, but we already had logs in the original 12 factors, so why do I need to add telemetry as well?"
B
Well, logs are a really great start, but generally logging is a tool we use during development, so we can diagnose errors and code flows. Logging is typically oriented towards the internal structure of our app rather than reflecting real-world customer usage. In essence, logging is how we collect data about our app when it's in the lab, you know, on our desktop, when we're in development. Instrumenting our app for telemetry,
B
That, on the other hand, is how we collect data once the app is released into the wild: what's happening in reality, rather than just in the lab. This concept isn't among the original 12 factors, so it has been an important addition among the three extra factors in the 15-factor app methodology.
B
So how do we achieve this? Well, sorry, first: why is this important? Why has it been added? The reason is that modern applications are more complex than ever before. These applications and their supporting environments are composed of hundreds, if not thousands, of microservices that are highly dynamic, distributed and often scaling automatically to meet our users' needs. This all creates quite a complex environment that we still need to be able to understand and diagnose, especially if anything were to go wrong, and especially because applications nowadays are more important than ever.
B
Almost all companies have applications that are mission critical. If these applications aren't monitored and their performance degrades, or they crash, revenue is potentially at stake, along with things like bad press, call-outs on social media, etc. So our customers expect more features, more rapidly, but in a safe environment where we can still monitor all of this and understand how our application is behaving in real time, and we need to be able to provide real-time feedback to quickly understand how our application is behaving.
B
Also, if we're releasing features, we want to understand how our customers are reacting to each feature and whether its performance is optimal. This factor is also about having that auditing and monitoring, in real time, in the cloud, in this distributed environment. So it is a really important addition on top of logging to our methodology for building cloud native applications. How do we achieve this?
B
MicroProfile Metrics and MicroProfile Health are two specifications from the MicroProfile community that can really help with this. Health provides information as to whether our microservice is up or down, based on particular performance factors that you can determine; Metrics gives you a wide variety of different metrics, already enabled through the specification, that you can collect from your microservices to really monitor how they're behaving in real time, potentially identify bottlenecks and failure points, and act on them immediately, or as soon as possible.
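As a rough illustration of the health side of this, here is a plain-Java sketch of a liveness-style check. In a real application you would implement the MicroProfile Health `HealthCheck` interface and annotate the class with `@Liveness`; the tiny interface and the memory threshold below are stand-ins I've invented so the example compiles on its own:

```java
// Sketch of a liveness-style health check in the spirit of MicroProfile
// Health. The real spec provides org.eclipse.microprofile.health.HealthCheck
// and @Liveness; this local interface is a stand-in for illustration.
interface HealthCheck {
    boolean call(); // true = UP, false = DOWN
}

public class MemoryLivenessCheck implements HealthCheck {
    private final double maxUsedFraction; // report DOWN above this heap usage

    public MemoryLivenessCheck(double maxUsedFraction) {
        this.maxUsedFraction = maxUsedFraction;
    }

    @Override
    public boolean call() {
        Runtime rt = Runtime.getRuntime();
        double used = rt.totalMemory() - rt.freeMemory();
        // UP while the used fraction of the maximum heap stays below the threshold
        return used / rt.maxMemory() < maxUsedFraction;
    }

    public static void main(String[] args) {
        HealthCheck check = new MemoryLivenessCheck(0.9);
        System.out.println(check.call() ? "UP" : "DOWN");
    }
}
```

In a MicroProfile runtime such a check would be exposed automatically on the `/health/live` endpoint, so the platform (for example Kubernetes) can restart an unhealthy instance.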
B
The last and final factor I'm going to go over is this last addition: authentication and authorization. What really shocked me when I actually reflected back on the original 12 factors is the fact that there is absolutely no discussion of security within those 12 factors, and, as we know from recent events like Log4j, security is a really vital part of any application, especially cloud native applications. Security should really never be an afterthought, so it's important that it's included within cloud native methodologies.
B
A cloud native application particularly has to be secure, because, you know, your code, whether it's compiled or raw, is transported across many different data centers, can be executed within multiple containers, and is accessed by countless clients. Some of those will be legitimate, but many will often be nefarious. So it's important that we're considering security right at the forefront of our development. In an ideal world, what we'd be doing is securing all of our endpoints with things like RBAC, role-based access control, so that for every request for an application's resource
B
We'd know who was making the request, the role to which that client belongs, and whether that role gives them sufficient permissions to access or honor that request. So that's the first thing we can do: secure our endpoints through RBAC.
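As a self-contained sketch of the RBAC idea: in a real Jakarta EE or MicroProfile application the container enforces this via annotations such as `@RolesAllowed`, so the roles and permitted actions below are placeholders I've invented purely for illustration:

```java
import java.util.Map;
import java.util.Set;

// Minimal role-based access control sketch. In practice the container
// enforces this (e.g. @RolesAllowed on a JAX-RS resource); the role and
// action names here are illustrative placeholders.
public class RbacSketch {
    // role -> set of actions that role may perform (placeholder data)
    private static final Map<String, Set<String>> ROLE_PERMISSIONS = Map.of(
            "admin", Set.of("read", "write", "delete"),
            "user", Set.of("read"));

    /** Returns true when the given role has permission for the action. */
    public static boolean isAllowed(String role, String action) {
        return ROLE_PERMISSIONS.getOrDefault(role, Set.of()).contains(action);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("user", "read"));   // true
        System.out.println(isAllowed("user", "delete")); // false
    }
}
```

The point is the shape of the decision: every request arrives with an authenticated identity, the identity maps to roles, and the roles decide whether the request is honored.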
B
So this is a really important factor and something that everyone should be considering: security should be at the forefront of our application development, especially in the cloud. So, now that we've taken a look at these fifteen factors, let's take a look at what it would be like if we mapped them onto a development process. Here we have either a public or a private or a hybrid cloud, whatever you're using; we've got some backing services in there; and we have our application with various different microservices
B
And artifacts too. When we start to map out our 15 factors, we can start to see where in this process they come into play and how they're all important as we go from end to end, right from the start of development all the way through to deployment and monitoring in ops. And what we can see from this is that it enables some of those key characteristics of cloud native applications that we were talking about, the ones we need to have if we're to build effective cloud native applications designed for the cloud: rapid building and rapid deployment of our microservices, scalability, observability, loose coupling between our microservices, and resiliency and robustness as well. To achieve that, as I've shown, there are loads of different tools and technologies you can make use of, and this is where some of them map onto this picture of our development pipeline and our deployed application.
B
You know, there are lots of ways in which we can enable these factors; we can enable these behaviors through open source specifications.
B
As I mentioned, if you want to get hands-on with any of the technologies or open source specifications that I've mentioned throughout this presentation, we do have our interactive cloud native labs. These all stem from our Open Liberty guides. If you've not come across Open Liberty before, it's worth checking out: it's a great open source, cloud native runtime
B
That really offers some fantastic features, like dev mode, for being able to rapidly iterate on your code and see those changes in real time, and things like InstantOn, which is based on CRIU, or CRaC, checkpoint/restore. That's about having near-serverless speeds when it comes to startup time, which can be a great advantage. And really the key part here is that it's open source, it supports all of these specifications and technologies I've been talking about, and it's really involved in the community.
B
In fact, many of the folks who develop Open Liberty are actually the specification leads for MicroProfile and Jakarta EE, so it's definitely an open source runtime that you should take a look at. The nice thing is we have over 50 or 60 guides, I think it is, that you can check out, introducing you to different technologies, specifications and APIs.
B
You can see we've got categories for them, so if there's one you're particularly interested in, check it out; I've added some links throughout this presentation that link off to some of these. You can choose to do these guides locally, so if you really want to do them on your own machine you absolutely can; just bear in mind that you will need some prerequisites, and they're listed at the top right of each guide.
B
Or you can do them in our cloud environment. This is all browser-based, so all you need is a browser, preferably something like Chrome or Firefox, but most browsers should work for this. It's loosely based off the open source Theia IDE, and essentially what you have here is the instructions on the left-hand side, the IDE on the right and the terminal at the bottom, so everything you need is within one UI. It's really easy to use, and we've actually added new functionality into this.
B
That allows you not only to copy the commands that we put in the instructions, but also, in our newer version, to automatically execute them within your terminal. So it's super straightforward to get started with this. You do need to log in; that's sadly because we did originally have it open, but then we had Bitcoin miners come along and ruin that for everyone.
B
These can take around 15 to 20 minutes for each lab, so really not a huge amount of time, and they allow you to get hands-on with these technologies really quickly and discover whether each is right for your application, in a safe environment that's easy to use. So yeah, I would recommend taking a look; we're very proud of this environment. If you do use it and you love it, please let us know, because we'd love to have your feedback on it, and the same goes for any of the guides as well.
B
The link is at the top here. So, to summarize: hopefully what I've shown you throughout this presentation is that the 12-factor applications were a great start, but to really thrive in our cloud environments we need to be looking beyond those 12 factors, to methodologies like the 15 factors, or maybe even further in the future. And, unfortunately, there are no excuses: there are so many open source tools and technologies available to help, with communities backing them that have lots of different FAQs, answers and people to help you use them.
B
So an action I'd like you to take away from this presentation is: evaluate your own applications against these 15 factors, and consider what you could do to enable any of them that you don't already, to truly thrive within that cloud environment and become truly cloud native. Just in case I haven't given you enough resources throughout this presentation, I've got a couple of slides here, categorized into different sections based on the factors, that you can go and check out
B
If you want additional information about any of this, there are general resources, plus ones on design, build/release/run, logging, stateless processes, concurrency and security. We've also got the Open Liberty Tools, which offer really easy IDE use when you're using Open Liberty for your application. They're available for Visual Studio Code, IntelliJ and Eclipse as well; again, all open source, and we're always looking for feedback.
B
So if you do use them, feel free to let us know. We also have social media (I run the social media), so we'd love to say hi to you all. If you do want to find out more news, when we produce new guides, for example, or new demos, code demos and repositories, then feel free to follow us on either LinkedIn or Twitter, or both; we'd love to connect with you there.
A
Thank you for that excellent presentation, Grace. Just a reminder to the audience: if you have any questions, put them in the "ask a question" tab or in the chat; we're happy to address them, so I'll give you a few moments to do that. Before I wrap up, we're also looking for some more Jakarta Tech Talks.
A
So if you have something great to share with us, please do: I will provide a link in the chat as well, so kindly click on the link and fill out the form, and we'd love to have you. So let's have a quick look.
B
So if anyone wants the slides, I will be sharing them on Speaker Deck; I'm using Speaker Deck now, I've made the switch. And I will be posting on my social media with some links, so feel free to check that out if you want the slides. Amazing.
A
So it doesn't look like we have any questions, Grace, but I want to thank you again for your time today and the excellent presentation, and you all have an amazing rest of your day. Thank you.