From YouTube: OpenShift Commons Briefing #56: Implementing NGINX Microservice Architecture with OpenShift
Description
Guest Speaker: Chris Stetson, Chief Architect, NGINX
Chris will discuss and demonstrate the ins and outs of implementing the Proxy, RouterMesh and Fabric model Microservice network architectures in OpenShift. We will discuss the configuration and deployment details of using Kubernetes and the other facilities provided by OpenShift to achieve a powerful, high-speed and resilient network architecture.
Diane: Well, hello and welcome, everybody, to another OpenShift Commons briefing. This week we're really lucky to have with us Chris Stetson from NGINX, and he's going to be talking about implementing NGINX microservice architectures with OpenShift. I'm going to let him introduce himself. The format for today is that I'll let him do his presentation and demo and explain how it all works, and then we'll have Q&A at the end. So without further ado: Chris, go right ahead and take it away.
Chris: Okay, great. Thanks, Diane. So welcome to the briefing on implementing NGINX microservice architectures with OpenShift. I'm Chris Stetson, and if my slides would advance, we might be able to see a picture of me. There we go. I'm Chris Stetson, the chief architect here at NGINX. I am in charge of microservices and building out our microservices products and functionality. I will tell you that today is the day after our holiday party, so if you hear my voice cracking, you'll know why: it was very loud.
I've been a developer and architect of web applications for the past 20 years or so, building large-scale websites that many of you are probably familiar with. I built the first version of Sirius Satellite Radio, I built Visa.com for many years, and I built large parts of Intel.com and Microsoft.com, as well as company websites like Lexus.com. I've been building large monolithic applications and service-oriented architectures, and most recently I've been building out microservice-based systems.
So what we're going to be talking about today is very much around what microservices mean and how NGINX can help you build out a microservice application. I'm going to talk a little bit about our history with Red Hat and OpenShift. I think it's a relevant topic, because we've watched the OpenShift evolution and where it's come, and we're very excited about it. In particular, the latest version is really solid and has a lot of the features that we've been looking for in a platform.
So that was very exciting. (Hello? Is somebody talking there? All right, I'm going to keep going.) A little bit of history. I'm also going to talk about the major shift in architecture that microservices bring to the table: how you need to think about applications differently in a microservice context than you do in a monolithic context, and what kinds of issues that introduces. It definitely brings a lot of benefits.
But there are things that you have to tackle in terms of building out your applications, namely: how do you deal with service discovery? How do you deal with resource management, which in the context of microservices means load balancing? And how do you build a fast and secure network architecture that allows your application to communicate with itself? Then I'm going to go into the architectures themselves, and finally we'll touch on some of the issues that you get with some of the architectures.
So there will be a whole discussion around all of that, and then obviously we'll be answering your questions. Hopefully that all makes sense. Let's dive in. All right, a bit of history. Red Hat has been delivering on the microservices platform for a while. We worked with a very early version of OpenShift when it was using proprietary cartridges, and we could see the value that that format was bringing, and the kind of value that microservices delivered.
But more importantly, we were very impressed with the vision that it articulated, and the way that, even if a number of the features were kind of held back by legacy issues, we could see that it was the very beginning of the journey and that the vision was all there. With OpenShift 3.3 we feel like it really delivers on the vision: a very clean implementation of the core components, and a very robust security model, which is really nice.
Particularly for enterprise customers, that's a critical feature, and being able to manage it very specifically is good. It really fills the gaps where Docker and Kubernetes still have some loose areas, and it fully exposes the Kubernetes API in ways that we were able to take advantage of in order to implement our three architectures: specifically, the proxy model, the router mesh and the fabric model.
So I call this the big shift. The diagram that you see in front of you is a context diagram of the classic monolithic application; in this case it's an Uber-like app. You have all of the functional components of your application, the passenger management, the billing, the notification, the payments, all of that running in a single VM on a single large host, communicating with all the components within that host using pointers or object references of some manner, and they all work together.
If you compare and contrast that to a microservice version of the application, you see that all of the components have shifted out from being on that single host to running in containers, all talking to each other via RESTful APIs, with the communication happening over HTTP connections between the different services. There are a lot of benefits to this.
I think it's important to reiterate the real benefits that you derive from microservices: specifically, the boundary isolation that each of the components gets, so that it's very clear where one bit of code stops and another bit of code starts. You also have the ability to very easily deploy core components of the application without having to redeploy the entire thing. So you can rev the passenger management component or the payments component without impacting the other components that are running your application.
It also gives you an allowance to do asymmetric scaling. So, for example, if you had a surge of passengers, you could scale up your passenger management microservice very easily without having to impact the other parts of your application that aren't being utilized. Obviously, in a monolithic application, if you had a surge of passengers you would have to scale up the entire application, which is a much bigger and harder thing to do. But it does introduce some challenges, and we'll be talking about those in a little bit.
One of the nice things about the .NET framework is that Visual Studio actually allows you to almost flip a switch and change your calls to being RESTful API calls, and so we made that change. We went through the process of refactoring the code; that took a couple of weeks, but it was not really that painful.

We were pretty surprised and happy with how things were going, and it was moving along really smoothly, until we put our system onto our staging site, where we had actual client data, production data, running on the staging server. Suddenly our most popular pages, pages that were hosting videos like the Microsoft Word tips-and-tricks videos, were taking over a minute to render.
In the past they had taken four to five seconds to render; now they were taking over a minute. We were dumbfounded and really concerned about this, and realized we could not push to production with that kind of performance. As we dug into it, what we discovered is that the community server that we were using, Telligent Community was the name of the system, was doing something that was causing the system to run really slow.
It said that it was RESTful and that it used RESTful API protocols, but literally what they had done is simply use that switch in Visual Studio and not really optimize the system at all. What we discovered was that for our most popular pages, where we had literally thousands of comments and thousands of users who were talking about and discussing the video that we were delivering for Microsoft, those pages were having significant problems because of all the RESTful loops that they were doing.
What we discovered was that the comments on the pages were being rendered with user IDs, and those user IDs would be populated by a loop that would go through, take the user ID, call back to the user manager, which was on another server, populate the ID, and then iterate through the entire page. And we had thousands of comments on a page, which we did for our most popular pages.
We did a lot of work to mitigate that problem. We grouped the requests, we cached the data, we did what we could to optimize the network. And we were dealing with IIS, so there was only so much we could do, honestly. If we'd had NGINX, we probably could have sped it up quite a bit more, but it was at a time when IIS was the only game in town for .NET applications.
In the end, we were able to get it to an acceptable speed and delivered it, but for me it was one of those moments, those searing moments, when I became very, very aware of the difference in performance that you get from having components that talk to each other in memory versus talking to each other across the network. And it really forced me to think about how you architect an application so that it works properly and efficiently over a network connection, as opposed to an in-memory connection.
So what does all that mean for microservices? Well, with microservices you're essentially taking the architecture that we built there and putting it into hyperdrive. All of the objects within your application are going to be talking to each other over the network, and they're going to be using HTTP for that data exchange. And obviously, from NGINX's perspective, that's a good thing.
We think that gives us a lot of ability to help you manage that communication process, and you can utilize all the features and functionality within NGINX to take advantage of that. NGINX has been part of the microservices movement from the beginning. We are the number one application downloaded off of Docker Hub right now.
The only two items that are downloaded more than NGINX are CentOS and Ubuntu. We are behind the largest microservices application delivery systems on the planet: Airbnb, Netflix and Uber all use NGINX throughout their infrastructure to help them manage their HTTP traffic. And we have been working very diligently internally on microservices as well. We built a very robust reference architecture that we call the Ingenious photo site. It's essentially a photo-sharing application that uses Docker containers for all of the core components of the application.
We built it using all the different languages that you could use, because we wanted to not ground ourselves in any particular language or system. We wanted to show that our solutions work with whatever type of language you are building with. So we have Python, we have Ruby, we have Node.js, we use Java and PHP, all the languages that are popular with our customers out there. We built the system on top of that, and we also used a twelve-factor app design for the application.
And while we have been working with microservices for a while, we're also good at traffic management. This architectural change has really introduced the advantages that I talked about in terms of scalability and in terms of deployment, but it also introduces some challenges. When you compare it especially to the application framework of a monolithic application, you can recognize some issues, some things that we call the networking problems. Specifically, it's around service discovery.
It's around resource management, in this case load balancing, and then how to tackle that performance and security problem that, from personal experience, was very searing for me. Let's talk about service discovery to start with. I think it's always good to compare and contrast a microservice architecture to a monolithic one, because that's one that every developer is familiar with. So say you are working in a monolithic application and you have one object that wants to talk to another.
The VM takes care of all of that communication protocol for you. When you create the new object, you can just call the method, and the VM will handle the object-reference communication between the two objects; you don't have to worry about it. In microservices it is not nearly as clean.
You have to have a much more aware system to make that service discovery process happen. Typically there is a service registry of one sort or another; in the case of Kubernetes, it's typically etcd. It is essentially a database that contains all the information about your services that are running and available: what the IP addresses of those services are, and what the port numbers are if they're running without an overlay network.
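As a concrete illustration: when the registry exposes its contents over DNS (as kube-dns/SkyDNS does on top of etcd), an NGINX configuration can look a service up by name at request time. A minimal sketch, with a hypothetical registry address and service name:

    # Resolve a service by name through the registry's DNS interface
    # (10.0.0.2 and the service name are hypothetical).
    resolver 10.0.0.2 valid=10s;

    server {
        listen 80;
        location /users/ {
            # Using a variable forces re-resolution at request time,
            # so newly scaled instances are picked up automatically.
            set $user_service "user-manager.default.svc.cluster.local";
            proxy_pass http://$user_service:8080;
        }
    }

Because the proxy_pass target is a variable, NGINX re-resolves the name as the registry changes, instead of fixing the IP address once at startup.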
The second issue that microservices introduce, again in comparison to a monolithic application, is how you utilize your resources effectively. If you have three instances of a shopping cart service, you want to be able to distribute your requests to those different instances of the shopping cart, using the resources of the application most effectively. So you want to be able to distribute them between the three.
You want to distribute them to the one that's responding the fastest. You want to distribute them to the one that is closest to the object that's calling it. And you want all of that to happen effectively and transparently for you, but you also want the developer to be able to configure the load-balancing mechanism to match the profile of what their system needs.
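A minimal sketch of that kind of configurable distribution, using the shopping-cart example from above (addresses are hypothetical; the method line is the knob the developer tunes):

    # Three shopping-cart instances in one pool; the balancing method
    # is chosen to match the application's profile.
    upstream shopping_cart {
        least_conn;               # favor the least-busy instance
        server 10.0.1.11:8080;
        server 10.0.1.12:8080;
        server 10.0.1.13:8080;
    }

    server {
        listen 80;
        location /cart/ {
            proxy_pass http://shopping_cart;
        }
    }

Swapping least_conn for the default round-robin, or for a weighted or least-time scheme (least time requires NGINX Plus), changes the distribution profile without touching the services themselves.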
For some types of applications that is an unacceptable risk for your system, and so the solution, of course, is to add SSL encryption for the communication between the different services that you have in place. The problem is that SSL really exacerbates the performance issue that you've been trying to mitigate and working very hard to overcome just in the architecture of the system. As you can see in the diagram, we have sort of an archetype, a prototype, of what a service call looks like between two microservices using SSL as the protocol for communication. Essentially what happens is that the Java service creates an HTTP client, which goes to your service registry using DNS and requests an IP address of one of the user manager instances that it wants to talk to. It gets back that IP address.
It would then make the request to the user manager, get the response back, consume that data, close the connection, and garbage-collect the HTTP client that it created. And for every request to the user manager, or any other service, it would go through that same process in order to get that data. That's a fairly CPU-intensive process, and it adds many hundreds of milliseconds to the request process.
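One common way to blunt that per-request cost is to keep upstream connections open and reuse SSL sessions, rather than paying the full handshake every time. A rough sketch of those settings (names and paths are illustrative, not from the talk):

    # Amortize the SSL handshake cost with persistent upstream
    # connections and session reuse.
    upstream user_manager {
        server user-manager.example.local:443;
        keepalive 32;                         # pool of reusable connections
    }

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/certs/server.crt;
        ssl_certificate_key /etc/nginx/certs/server.key;

        location / {
            proxy_pass https://user_manager;
            proxy_http_version 1.1;
            proxy_set_header Connection "";   # required for upstream keepalive
            proxy_ssl_session_reuse on;       # skip repeated full handshakes
        }
    }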
Especially as you start having a deep call chain, that becomes a significant problem. So we think that we have solutions that address all three of these networking problems. Specifically, we have a solution that is very focused on answering how to do really robust service discovery.
Our architectures address the load-balancing issue and how to utilize your resources effectively, and we have a solution for really improving the performance of the encryption process, so that you get a 77% increase in performance when using our architecture versus a straight SSL solution. So, let's get into the architectures.
We've come up with three different models, and these architectures are not mutually exclusive. In fact, there are good reasons for mixing and matching them. The three models that we're going to be talking about are the proxy model, the router mesh and the fabric model. The fabric model is the most complex of the three and kind of turns load balancing on its head, so we're going to spend probably the majority of our time addressing that. But they are all very robust, and different use cases require different models; depending on what you need to do, we think that at least one of them will satisfy your needs. All right.
So the first one is the proxy model, and this model very much reflects the way that most people use NGINX within their application; a lot of people use NGINX in this capacity with monolithic applications as well. Essentially, it's the idea of putting NGINX in front of your application to deal with inbound internet traffic.
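A minimal sketch of the proxy model in configuration terms, with hypothetical backend names: NGINX terminates inbound traffic at the edge and forwards it to the application pool.

    # Edge proxy in front of the application.
    upstream app_backend {
        server pages-service-1:8080;
        server pages-service-2:8080;
    }

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/nginx/certs/example.crt;
        ssl_certificate_key /etc/nginx/certs/example.key;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://app_backend;
        }
    }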
Many of our customers use NGINX open source. We also have our Plus product, which provides you with things like robust load balancing and a better ability to do dynamic service discovery. That is very valuable, particularly for microservice applications, where you are scaling the individual services up and down and having a changing pool of applications as the system responds to different levels of traffic and different types of requests coming in at any given time.
And OpenShift is really designed around this. Because it uses Kubernetes, there's the ingress controller model, which we have a solution around, and I'll be talking about that a little bit more in a couple of minutes. So again, the proxy model is really focused on dealing with internet traffic. You can think of it kind of as a shock absorber for your application, and with our NGINX Plus commercial product we have the ability to do that dynamic connectivity back to your ever-changing pool of microservice applications.
We have been working with OpenShift 3.3 and have been able to actually implement all three of these models with OpenShift, and I want to take a moment to talk about how we did that. So with the reference architecture we have a proxy model system that is kind of an abstracted ingress controller functionality for our reference architecture application.
We have included authentication in it, so it has an OAuth agent that does authentication for all of the traffic coming in and attaches an authentication token to the request, so that as it's passed down the stack the end user is identified through the system. Unfortunately, Kubernetes does not support authentication right now, so our proxy model doesn't fit into the Kubernetes ingress controller format. However, NGINX does have an ingress controller system that we have open sourced and made available.
You can download it off of our GitHub account; I believe it's the ingress controller repo under the NGINX account. We have both an open-source version as well as the NGINX Plus version, which provides some extra features that you can take advantage of. Some things to know about the OpenShift ingress controller implementation: it does require you to play around with the permissions, as I mentioned before. One of the things that we really like about OpenShift is...
Alright. So one of the things about the proxy model is that it is very focused around that edge routing scenario, the use case of dealing with internet traffic coming in to your microservices application, and it doesn't really concern itself with how your microservices talk to each other.
I'm starting to get my hoarse voice again. So the router mesh model is really focused around trying to provide a more robust system for managing your internal traffic. We do recommend that you have some sort of edge routing management system, a proxy-model-like system, too, to deal with internet traffic.
But then, within your application, we have built out what we call the router mesh, and it works in the capacity where each of the services calls the router mesh to distribute requests between the different services that you have available. So in this diagram, if the pages microservice needed to talk to service two, it would make a request to the router mesh, which would be able to call the different instances of the service and keep things properly load-balanced.
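A minimal sketch of one hop through such a mesh, with hypothetical service names: every service sends its requests to this central NGINX tier, which owns the upstream pools and the load-balancing policy.

    # Central router-mesh tier: path-based routing to service pools.
    upstream pages    { server pages-1:8080;    server pages-2:8080; }
    upstream service2 { server service2-1:8080; server service2-2:8080; }

    server {
        listen 80;
        location /pages/    { proxy_pass http://pages/; }
        location /service2/ { proxy_pass http://service2/; }
    }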
This is a very powerful system, because it really centralizes your request management and gives you an ability to really track the performance of the applications within your system: a centralized place for dealing with all of the metrics that are coming out about traffic in your application, and a good place to implement something like the circuit breaker pattern. For those of you who are not familiar with it, the circuit breaker pattern is a pattern designed to provide resilience within your application.
If you have a service instance that is unhealthy, the router mesh can do things like route requests to other instances of the service itself. It can also use retry logic to retry the connection as it becomes available. And in the worst-case scenario, we can provide cached data: even if the entire service is down, we can continue service continuity by using old, stale cache data that's available. Particularly for read-type services, that's a very valuable feature.
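A rough sketch of how those behaviors map onto NGINX directives (names and timings are illustrative): failed instances are retried, and when everything is down, stale cached responses keep read traffic alive.

    # Circuit-breaker-style retries plus stale-cache fallback.
    proxy_cache_path /var/cache/nginx keys_zone=svc_cache:10m;

    upstream service2 {
        server service2-1:8080 max_fails=3 fail_timeout=10s;
        server service2-2:8080 max_fails=3 fail_timeout=10s;
    }

    server {
        listen 80;
        location / {
            proxy_cache svc_cache;
            # Retry another instance on failure...
            proxy_next_upstream error timeout http_502 http_503;
            # ...and fall back to stale cached data if all are down.
            proxy_cache_use_stale error timeout http_502 http_503;
            proxy_pass http://service2;
        }
    }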
So, as I said, it really gives you robust service discovery, and I'll talk about that mechanism shortly. It allows you to utilize all of the advanced load-balancing features within NGINX, rather than just a simple round-robin system: it can take advantage of more robust things like least-connections or least-time load balancing, and it will allow you to implement the circuit breaker pattern. In terms of the OpenShift implementation...
So I always find it useful to go back to this diagram to talk about the process again. Let's go through the process where the investment manager instance up at the top needs to talk to one of the user manager instances down below. In this case, the Java service would create an HTTP client. The client would then do a DNS request to the service registry and ask for an IP of one of the user managers, and the registry would respond with the IP address.

The HTTP client would go through the nine-step SSL key exchange process to establish the SSL connection. It would make the request, it would get the response, it would close down the connection, it would garbage-collect the HTTP client, and it would go through that process for every single request that you have in your microservice application.
In the fabric model, you can see that having NGINX Plus in each of the systems changes around the way the communication between the systems works, and I'm going to go into detail on how all this happens in just a sec. So here you have that same Java service, and instead of talking to the user manager, or even the service registry, it's talking only to NGINX Plus. When it creates an HTTP client, it talks to localhost and a route.
Instead of having that service discovery process happen per request, NGINX Plus has a resolver feature within the application that runs asynchronously and is regularly checking the service registry for all instances of the user manager, adding and subtracting those from the load-balancing pool on a regular basis. So it doesn't need to make a request to the service registry every time the Java service wants to make a request to a user manager; it only needs to do it every three seconds or so. So you reduce the load that you're putting on the service registry to get that DNS information.
Also, because it has all the information about the instances, it can make a much more intelligent decision about how to load balance the requests. One of my favorite load-balancing schemes for microservices is the least-time load-balancing scheme. With least time, the NGINX Plus instance evaluates which instance in the load-balancing pool is responding the fastest and will skew the requests to the instance that is responding the fastest, all the time keeping a moving average of which instance is responding the fastest. This has the benefit of also sort of biasing the request chain to instances that are local to the system. (There's a sketch of this configuration below.)
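Pulling those fabric-model pieces together, a rough sketch of the NGINX Plus configuration that would sit beside the Java service (the registry address, route and service names are hypothetical; the resolve parameter and the least_time method are NGINX Plus features):

    # Fabric model: the local NGINX Plus instance owns discovery and
    # load balancing, so the app only ever talks to localhost.
    resolver 10.0.0.2 valid=3s;      # re-query the registry every ~3s

    upstream user_manager {
        zone user_manager 64k;
        least_time header;           # favor the fastest responder
        # NGINX Plus re-resolves this name asynchronously and updates
        # the pool without a per-request registry lookup.
        server user-manager.default.svc.cluster.local:8080 resolve;
    }

    server {
        listen 127.0.0.1:80;         # the app calls http://localhost/...
        location /user-manager/ {
            proxy_pass http://user_manager/;
        }
    }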
Obviously, you can also build in the circuit breaker functionality with the instances of NGINX Plus, using active health checks; we have that retry ability and caching logic. We also have a much more robust ability to deal with service failures. So if you have alternative service options, or you really understand the failure profile of the services, you can build in things like rate limiting, and you can build in things like backup service options for what you want to do in case the service is unavailable. So there's a lot of power and flexibility in terms of how you implement the circuit breaker pattern within the fabric model, as well as within the router mesh model.
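A rough sketch of those building blocks in configuration form (health_check is an NGINX Plus feature; names and limits are illustrative):

    # Circuit-breaker building blocks: rate limiting, active health
    # checks, and a backup service option.
    limit_req_zone $binary_remote_addr zone=svc_rl:10m rate=100r/s;

    upstream user_manager {
        zone user_manager 64k;
        server user-manager-1:8080;
        server user-manager-2:8080;
        server backup-user-manager:8080 backup;   # used only on failure
    }

    server {
        listen 127.0.0.1:80;
        location /user-manager/ {
            limit_req zone=svc_rl burst=20;             # cap request rate
            health_check interval=2s fails=2 passes=2;  # active checks
            proxy_pass http://user_manager/;
        }
    }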
I would be remiss if I didn't say that there are some issues with implementing the fabric model. The first, of course, is that Docker recommends that you use one service per container. The idea here is that you should not have multiple things running in a container: you don't want it to be a VM, you want to keep your Docker images simple.
B
We
have
have
worked
really
hard
to
to
try
and
keep
this
as
simplified
as
possible
and,
in
fact,
have
come
up
with
a
solution
where
process
failure
of
either
your
application
code
or
nginx
causes
the
the
container
to
failure
as
well.
So
you
get
that
close
association
between
container
failure
and
and
application
failure
within
the
fabric
model,
as
as
we
built
it.
B
B
It does provide a lot of power to the development team, and we think that for organizations that need to have encryption within their system, it really provides you with high performance: you don't have to sacrifice any of that performance in order to really make your application secure. And we've built out a bunch of tooling to make this process simpler and not force you to go through all the complexities of implementing reverse-proxy SSL settings within the NGINX configuration.
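For a sense of what that tooling saves you from hand-writing, a speculative sketch of interservice SSL settings of that general shape (paths and names are purely illustrative, not the reference architecture's actual output):

    # Mutually authenticated SSL between services, in both directions.
    upstream user_manager {
        server user-manager-1:443;
        server user-manager-2:443;
    }

    server {
        listen 443 ssl;
        ssl_certificate        /etc/nginx/certs/service.crt;
        ssl_certificate_key    /etc/nginx/certs/service.key;
        ssl_client_certificate /etc/nginx/certs/ca.crt;
        ssl_verify_client on;          # only trusted services may connect

        location / {
            proxy_pass https://user_manager;
            proxy_ssl_certificate         /etc/nginx/certs/service.crt;
            proxy_ssl_certificate_key     /etc/nginx/certs/service.key;
            proxy_ssl_trusted_certificate /etc/nginx/certs/ca.crt;
            proxy_ssl_verify on;
            proxy_ssl_session_reuse on;
        }
    }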
And given everything that we've seen in 3.3, I think we're going to be revisiting that and updating the blog post, because 3.3 really delivers on the vision and allowed us to implement the router mesh and the fabric model, as well as the ingress controller. So expect to see more blog posts from us shortly.
Diane: Well, let's see if there's anything else. There's one question coming in: "I understand that NGINX Plus is the commercial offering from NGINX, which supports running NGINX within containers. Is the NGINX Plus implementation closed-source? What are the benefits over HAProxy together with Consul, or OpenShift's internal capabilities?"
Chris: The feature that really makes the fabric model work (excuse me) is the resolver feature. That's the ability for NGINX Plus to do that service discovery against the DNS and change the load-balancing pool of the instances that we're connecting to dynamically, so that it is regularly responding to changes within your environment.
Our resolver is much more robust than the one in HAProxy; honestly, it looks like they've deprecated its functionality. They never implemented the SRV record capability, which is something that we use extensively, and which is the reason for the port-naming recommendation that I provided. Also, HAProxy does not allow you to implement the circuit breaker pattern within the application. So that's the downside of using HAProxy.
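A rough sketch of what that SRV-based lookup looks like in NGINX Plus configuration (registry address and names are hypothetical): the service= parameter asks DNS for SRV records, which carry port numbers as well as addresses, which is why the ports need consistent names.

    # SRV-based discovery with NGINX Plus.
    resolver 10.0.0.2 valid=3s;

    upstream user_manager {
        zone user_manager 64k;
        # "service=http" looks up _http._tcp SRV records, so the
        # registry supplies ports and weights as well as IP addresses.
        server user-manager.default.svc.cluster.local service=http resolve;
    }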