From YouTube: Luke Bond - Node.js Live London
Description
Luke Bond of YLD presents on Node.js deployment patterns. What will you learn from this talk?
- Common Node.js architectures and how to deploy them.
- Process monitors (spoiler: use Linux)
- Scaling and service discovery considerations
- A little bit about containers along the way
So I'm going to talk about Node.js deployment patterns. I think on the schedule it was "Node.js in Production"; the talk hasn't changed, I've just changed the name, basically. I'm going to be talking about some of the types of applications that you might typically run in Node and how I might recommend you deploy them. And if you want to contact me, my details are there.
Okay, so first of all I'm going to break things up into three main application types, or architectures. I'm not trying to say that there are only three kinds of Node apps; it's just a spectrum from very simple to very complex, and I'm going to go through some tips. I'm also going to go into a bit of an aside about process monitors and Linux init systems along the way, some scaling and service discovery considerations, and, as always, I'll talk about containers at some point.
In the middle is probably a cache of some kind and a database or two, and then at the other end of the spectrum there's a really complex microservices-type thing with a message bus and all that kind of stuff. The main theme I'll keep coming back to is basically: don't overcomplicate things, because that will cause you operational pain, and in particular, don't overcomplicate things prematurely. Make it as complex as it needs to be and keep it as simple as you can.
A
Okay
diagrams
is
not
my
on
my
skill.
I
have
have
other
skills,
so
sorry
about
this.
So
basically
the
simple
one
is
where
you
just
have
one
thing
to
deploy:
it
probably
talked
to
some
kind
of
back
end
it
may
or
may
not,
but
from
the
point
of
view
of
the
person
like
listening
to
this
talk
and
pulling
some
you've
got
one
thing:
good
ploy
to
know
that,
and
it's
taken
one
service,
okay,
so
some
properties
of
that
one
is
that
they're
very
easily
very
easy
to
scale
horizontally.
You can scale both within one host and across hosts. You basically just run a bunch of processes on the host, scale them horizontally, and put it all behind a load balancer. It's pretty simple. One of the reasons this model is particularly simple is that there's no communication between the different components: you don't have, say, an API talking to a cache; you've just got one service that answers requests.
It sends information back, and basically all you need to do for that is run one Node process for each core on your host, so you fork a process of some kind per core, with some sort of load balancer on the host (the kind of thing PM2 gives you). Then you scale them across boxes for however much traffic you need, and put all of that behind a load balancer and DNS. This is pretty basic stuff, and it's a very basic job for the load balancer.
You might use something like NGINX, HAProxy, or even something as simple as Balance, which is a TCP load balancer of a few hundred lines of code, written in C, that's been around for maybe 15 years. It's fast and simple; you don't need much more than that. There are a bunch of things you can use, but that's a nice simple example.
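For the host-level balancing, a minimal NGINX configuration along these lines round-robins across the per-core Node processes. The upstream name and ports are assumptions for illustration:

```nginx
# Hypothetical upstream of four Node processes on one host.
upstream node_app {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;
    location / {
        proxy_pass http://node_app;
    }
}
```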
So my recommendation for applications like this is to use systemd; that would be my choice. But generally, I guess my advice is to use a Linux init system, because it is basically the ultimate process manager.
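A minimal systemd unit for a Node service might look like this. The file name, paths and user are assumptions, not from the talk:

```ini
# /etc/systemd/system/myapp.service (hypothetical name and paths)
[Unit]
Description=My Node.js app
After=network.target

[Service]
ExecStart=/usr/bin/node /srv/myapp/server.js
Restart=always
User=nodeapp
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target
```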
So in the Node world we probably have these four main ones, at least the ones that come to mind as the most common: mon, nodemon, forever, and PM2. The first three are pretty simple process restarters: when your process crashes, it gets restarted for you. Sure, they may do more than that, but for the purposes of this talk that's what's relevant. Whereas PM2 is different: it does a whole lot more, and it's very good.
It restarts processes, but it also does logging, gives you a nice interface for all the applications running under it, clusters processes together, has a built-in load balancer and monitoring, and you can also optionally hook it up to the Keymetrics service for monitoring. So it's very powerful, and I'm not trying to say anything negative about it at all, but nevertheless my personal choice would be systemd, and there's a reason.
If you want to know how to do that, I'm not going to go into it here, but basically the reason I think systemd is a good choice is, first, that the tooling is really good. Take journalctl: any sysadmin working on a systemd machine will use journalctl to look at kernel logs, system logs, and application logs. You can slice them in different ways: look at them across the whole system, per unit, per service.
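For example, some journalctl invocations in that spirit (the unit name is assumed; these only make sense on a systemd host):

```shell
journalctl -k                      # kernel log
journalctl -u myapp.service        # logs for one unit/service
journalctl -u myapp.service -f     # follow, like tail -f
journalctl --since "1 hour ago"    # slice by time across the system
```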
You can specify all the things you might expect; it's really powerful, and I think it's more powerful than anything a Node-specific process monitor will be able to give you. If you run things with systemd, I think you'll also have a lot more control, because you'll be able to choose your components: choose how you do logging, choose how you do monitoring, choose the load balancer, and tune those things as much as you like.

You'll also be able to run all your non-Node applications in the same way you run your Node ones. From a sysadmin's point of view, they might be thinking: okay, here's how we run our applications; then you give us a Node application and you want us to run it with PM2? They might be like:
"We already have this functionality." I think you're good to stick to proven things. Not that PM2 isn't proven (again, I really don't mean to say anything negative about it; it's very good), but this is my preference, and I think it's always good to learn more about the operating system on which our apps run. That's my bet, anyway. Okay, so on to the second type. First, just to recap that one:
What I'm saying is that for the simple web app case, use systemd and a load balancer per host, and put all the hosts behind a load balancer; and if you want, you can use PM2, which basically does the same thing. Cool, so for the middle one. I think by the time you have a website with these kinds of components and you get it to production,
you've probably done all sorts of load testing and performance testing and that kind of thing, and maybe it's been in production a while. It'll probably look something like this: you've got your website, which talks to WebSocket servers and APIs; you probably have some sort of cache that goes with that, maybe that's Redis; your WebSocket servers may also talk to Redis.
You probably have some sort of back-end database, that kind of thing, so it might end up looking something like this. The exact shape isn't important; it's just to depict the kind of complexity I'm talking about. These are also quite easy to scale horizontally, to some extent. Less easy than the simple case, because the components are a bit different, but you will be running multiple instances of your API service and multiple instances of your WebSocket service. One of the things that complicates it,
in fact probably the main thing, is that there is communication between components. Our API service needs to talk to Redis; the WebSocket service may as well; they need to talk to the database. So when you have multiple instances of any one component, and one service needs another, you need to configure one service to tell it how to talk to the other: how you specify that, and whether there's load balancing in between. This is the problem of service discovery.
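One low-complexity way to leave room for service discovery, as a sketch: have each service take its dependencies' addresses from the environment, so whatever does the discovery (DNS, a scheduler, a deploy script) just sets variables. The variable names here are assumptions, not from the talk:

```javascript
// Read dependency locations from the environment, with local defaults.
// REDIS_HOST / REDIS_PORT are illustrative names.
function serviceAddress(name, defaultHost, defaultPort) {
  return {
    host: process.env[`${name}_HOST`] || defaultHost,
    port: Number(process.env[`${name}_PORT`] || defaultPort),
  };
}

const redis = serviceAddress('REDIS', '127.0.0.1', 6379);
```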
So by having a more complex application, we introduce this problem that we need to solve, and we want to do it with as little complexity introduced as possible. The other thing that's important is that host affinity becomes relevant for your services. You will want to have, say, Redis running on the same host as your API service. I would say they don't have to be, but it will be faster that way, and you'll want to close it off to the outside world.
That kind of thing. So you can't simply hand your services to a scheduler and say "run them wherever you like"; you have some constraints that you want to specify. You'll also want your database to be on some host that has an SSD, and one that you're backing up, of course. So this also complicates things. How do we introduce service discovery without introducing much complexity? There are a bunch of options, and many ways you can do it.
So one is to use things like CloudFormation or Elastic Beanstalk on AWS, because there you'll be able to express the kinds of constraints you need in order to solve these problems. Or a platform as a service like Google App Engine. I'm glad you got an introduction to that earlier, so I don't have to go into it, but underneath, that's using Kubernetes, or a Kubernetes-based platform, which is quite complex; but that's all hidden from you.
At the level at which you're using it, it's actually quite simple, and you'll be able to express the kinds of constraints that you need in order to solve this problem without introducing too much complexity. The way I might do it, because I just like these tools, is probably something like CoreOS's fleet. Whereas systemd is the thing that starts, stops and
restarts your services, among other things, on one machine, fleet is basically an abstraction over that, to make your cluster appear like one machine, and you can express constraints about scheduling. It will introduce the complexity of etcd, which is something you may not want; something to keep in mind. But if you've been using systemd when your application was simpler, the transition to something like fleet is quite easy, because you're already expressing your services as unit files. So that's another option.
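A fleet unit is just a systemd unit with an extra [X-Fleet] section for scheduling constraints. A sketch of the kind of affinity described above, with assumed names and metadata:

```ini
# api.service (hypothetical) -- a systemd unit scheduled by fleet
[Unit]
Description=API service

[Service]
ExecStart=/usr/bin/node /srv/api/server.js
Restart=always

[X-Fleet]
# Run on the same host as the Redis unit, and only on SSD-tagged hosts.
MachineOf=redis.service
MachineMetadata=disk=ssd
```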
And you'll need a dynamically configured load balancer or reverse proxy of some kind, so that when things get scheduled across different hosts, or they move, they can be added and removed. So this is another item of complexity; you wouldn't want to do it for a really simple application, but for something moderately complex I think it's a good solution.
You could also use Docker Swarm, the new Swarm mode that's in Docker 1.12, which is very green at the moment, not particularly proven. But if you're starting a new project, it's probably a good time to try it out, because it is quite simple: you can again express, quite well, the kinds of constraints you have about what runs where, and it's really simple and easy to use. I think also, once you get to this stage of complexity, containers become quite helpful. You might have a heterogeneous stack,
you know, using different languages, different versions of Node, things like that. Even if it's all Node, it's nice to just say: I build an image, I deploy it around, and I run containers. It's the same artifact everywhere, which is nice; it simplifies things. I keep coming back to that.
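A minimal Dockerfile for a Node service, as a sketch; the base image tag, paths and port are assumptions:

```dockerfile
# Hypothetical image for one Node service. Every service, whatever its
# language or Node version, ships as the same kind of artifact.
FROM node:4
WORKDIR /srv/app
COPY package.json .
RUN npm install --production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```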
So, the final one. Well, I must be speaking really fast, because I've done this seven minutes faster than I did earlier today. Cool, so the final one is something more complex: a distributed, microservices-based system.
So this is arbitrary, of course; it doesn't matter what your architecture is, but it's something that's got basically a bunch of services. At this point you don't really want to think about individual services, because there are too many; you want something to handle things a little more for you. I'm just imagining you might have pub/sub and a bunch of microservices on the back of the same diagram I had before, something like that.
Okay, so you don't want to be thinking in any way about plumbing. You don't want to be adding iptables rules for particular ports; you don't want to be adding AWS EC2 security groups for certain services. You don't want to get down to that kind of level; it's just too much, because you have too many services. You don't want to be specifying exactly what runs where, because again, there are too many. Basically, what you need is a declarative platform with service discovery, and it will manage things for you.
It will scale things for you. If a few services go down, it'll restart them; if a host goes down, it'll wait for another one and reschedule things, always trying to get your application back to the state that you declared: you know, "I want five instances of my API, three instances of my WebSocket service", that kind of thing. So that's what we're looking for, and I would say this is the perfect use case for Kubernetes, because it's designed for exactly this kind of application.
All the things I said on the previous slide, it'll do really nicely. That implies you need to containerize your services, and I think by the time you get to this level, in many places that becomes really beneficial to do anyway.
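Declaring that desired state in Kubernetes looks something like this. A sketch only: the names, image and API version (the Deployment group of that era) are assumptions, with the replica count matching the "five instances of my API" example:

```yaml
apiVersion: extensions/v1beta1   # Deployment API group circa 2016
kind: Deployment
metadata:
  name: api
spec:
  replicas: 5                    # "five instances of my API"
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: myregistry/api:1.0.0   # hypothetical image
        ports:
        - containerPort: 3000
```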
I don't want to go too much into containers, but you can also use Kubernetes in hosted platforms such as Tectonic or Google Container Engine (on top of which Google App Engine is built), which will also do all this stuff and work quite well.
You can use other Kubernetes-based platforms as a service, such as Deis, which has a really nice developer workflow that's similar to Heroku; it's really nice to use. Maybe someday the new Docker Swarm will be ready for this, but I think it's just too new at this stage; we don't know how it's going to scale to this kind of level of complexity, but presumably that's what they're targeting. Okay, so just to wrap up.
I think, most importantly, the message is: keep it simple, and don't prematurely complicate your stack or your architecture; it will cause you some pain. You might think, "oh, this thing I'm building is going to have loads of users someday; I need to build that infrastructure; it needs to scale, it needs to be web-scale", or whatever you might want to say, and you might think, okay, it has to be Kubernetes, it has to be this, it has to be that. Until that happens, it's not true. You be the judge, of course, but I don't think so.
One of the biggest things about containers, for me, is that you can develop and debug your application, your whole system, on your laptop. There's a lot of talk about containers and security, isolation, cgroups and namespaces, but I think the big thing about containers, the revolution that's going on with Docker in particular, is the developer user experience: just being able to bring things up like this. We're not going to go off on a tangent about containers,
but this is very important. And, as I said before, having a homogeneous deployment artifact for an essentially heterogeneous stack is really useful; it makes things very simple. Of course, test everything, but since we're talking about deployment and CI, something I think is very important, in fact critical, is to smoke test your releases after they've gone out. So you want to do some non-destructive requests.