Description
Building the Node.js Global Distribution Network - Guillermo Rauch, ZEIT
"At ZEIT we’re building the first global distribution network for Node.js applications, websites and microservices. Our mission is to democratize access to the cloud by ensuring that services are fast from and to everywhere, not just Silicon Valley.
We’ll discuss some of the unique technical challenges this endeavor presents in the context of distributed systems."
Hello everyone, and welcome to Amsterdam. Just kidding, I just got here. My name is Guillermo Rauch, and I'm the creator of a few Node.js frameworks and Node.js libraries that you might have used, like Socket.IO, Mongoose and a few other smaller projects. If you're up to date with our latest work, you probably know about other stuff like HyperTerm or micro, and you can see all of that later on our new GitHub account.
So, I'm the CEO of a company called ZEIT. We're creating this product called Now, which is the easiest way to deploy anything to the cloud. We call it real-time global deployments, and I'm going to talk about specifically what we mean by each of these things.
First of all, let's start with deployment. What we mean by deployment is launching your application code and running it in the cloud.
So the joke says that the cloud is just a computer that's out there, and the people who built this crap are out there, and they're insane. And maybe so, but I think that the term cloud, as an ideal thing to strive for, is very fitting, because cloud doesn't mean a specific geographical location, and it doesn't mean a specific proprietary solution. Clouds can be everywhere, and ideally they're above every one of your users' heads, so they can access information and APIs and services with minimal latency.
So that idea of the cloud, a computer that's out there, is a very, very good idea. But when most people talk about the cloud nowadays, they actually mean a very specific thing: a lot of people in the US either deploy their servers in us-west or in us-east. So the cloud is a specific location. We tend to think the cloud is this cloud on the slide, because this is Northern California, but it's actually this little city called Fresno, which is not nearly as glamorous.
So this is the cloud for a lot of people nowadays. Coming back to that definition, I think it's actually a really great thing to strive for, and it's something we try to do at ZEIT: this idea that you can deploy without worrying about where your code goes. Specifically, we deploy your code, without you worrying about infrastructure, with just a single command. You go to your computer, you install it with npm, you go to any directory that holds your application code, you type "now", and the deployment is ready.
So, just to give you an idea of what this looks like in practice: I have a directory called my-app, I hit deploy, and it gives me a link right away. This is the reason we call them real-time deployments: you start getting the logs of what the deployment is doing right away. This is very important, because build processes in the Node.js community, and especially the front-end JavaScript community, keep getting bigger and bigger, and sometimes there's a lot of failure when it comes to building your application.
So the idea of immediately getting feedback about what's going on is a very, very important notion: we don't want to execute a black box in the sky. The next step that I run there is giving it a name, an alias, but we're going to get into that later on. Basically, this is all there is to it: you find your directory, you type "now" and press enter. The first really important notion here is what we call the immutable deployment.
For those of you who are used to immutable data structures, it's this idea that you can travel back in time and you're never modifying your data in place. This is also true for our deployments. You're never overriding a previous deployment, and you're never throwing it away unless you explicitly want to. So every time you type "now" you get a new, unique server, and this happens for you immediately. The URL looks like this: the prefix will be the project's name, and then there will be some UID.
That UID is unique to your project, because deployment is primarily about addressability. To give you a contrived example: you would never tell your boss, or maybe you wouldn't be here today, "oh, I just deployed, your localhost is ready." No, you probably want to deploy somewhere where other people can use it. So it's all about taking that code to production. And to take you to production, you also wouldn't give people these strange URLs that are immutable and have some random nonce in them.
So we have this simple command that we call alias, where you take one URL and give it an easy-to-use subdomain, or even a full domain name. This is what it looks like in practice: I take the URL that I just deployed and give it a custom domain, in this case. Notice that I also didn't worry about the other pieces of infrastructure that are usually kind of a pain to manage, for example SSL certificates and renewal, DNS and DNS records.
All of that really doesn't matter to us, because we only worry about your code: putting it in the cloud and giving it a name that's easy to use and friendly to users or customers. In terms of scale, this is actually one of the most interesting premises of this system, and the reason we started with Node: we care a lot about the elasticity of your deployments, and for your deployments to be elastic, they have to boot up and replicate very quickly.
So Node was our initial platform because it satisfied this condition for a large variety of processes. When you deploy and run something, Node is actually very, very quick to boot up. Compare that to initializing the Java virtual machine: you wouldn't get this sort of really easy scale and elasticity. So a request comes in and we match it with one of your Now applications. If more requests come in, we match them with more containers and more Now applications.
These are actually hosted on infrastructure that's already scalable and already proven, but that you don't have to worry about, which gives the system a property of censorship resistance. So if Amazon decides not to work with you or with us, or there's a DMCA complaint and they shut down your website automatically or algorithmically, like some of these cloud providers sometimes do, we have a fallback. Some of these are well-known and well-understood.
Some others are less tested, but we care mostly about geographical reach and latency. The other problem this solves is that when traffic slows down, a lot of your cloud deployments end up being overfitted. So you end up either under-provisioning, which means you crash under load, or you over-provision and pay a cloud provider way more than you actually need to, because your traffic fluctuates, and sometimes you have a spike in a certain geographical area that you didn't even know existed.
True story. So: truly elastic. You're not limited to very simple functions; we support any service that talks over the HTTP protocol. No matter what it does, we can launch more of them, provided that you worry a little bit about statelessness. Boot-up time is also a concern, specifically for the benefit of your users.
Geographically distributed means that you're not going to have to worry about what location or what region to pick, and there are some unique benefits to this. First of all, we give you built-in HTTP/2 support, so you also don't have to worry about the capabilities of your edge proxy, as well as SSL, which is a requirement for most HTTP/2 deployments. This gives you an out-of-the-box, dramatic performance improvement across many different verticals. One that's specifically interesting is mobile connections.
On the way here, in my Uber, I had a 2G network. Imagine if I had to establish TCP connections to 20 different hostnames: I would probably struggle, or if I had to start many connections for HTTP/1. It's also not ideal for situations where even one connection is very hard to establish, especially reliable connections like TCP. So certain mobile and real-time applications will benefit greatly from this capability and HTTP/2 support. The other thing that we focused on is making this platform universal.
A lot of our larger customers use Node, but not every team within their company will over time; maybe a certain API that they need is written in some other language, or maybe they want to deploy a modified version of Node. So we worry about universality. The first kind of manifest, or code, that we support is JavaScript: provided there's a package.json, you can deploy it. The second one is Dockerfiles.
So if you can encode, or create a recipe for, your application as a Dockerfile, you just type "now" again, and this gives you the ability to deploy a wider variety of programming languages, scripts, or static websites. For a lot of people, like when you run a marketing campaign or create a documentation website for your project, you don't really need a full programming language behind it.
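A recipe along these lines is all a deployment needs. This sketch is not from the talk; the base image, port and file names are placeholder assumptions, and any Dockerfile whose container serves HTTP would work the same way:

```dockerfile
# Illustrative only: any container that serves HTTP can be deployed.
# node:8-alpine, port 3000 and server.js are placeholder choices.
FROM node:8-alpine
WORKDIR /app
COPY . .
RUN npm install --production
EXPOSE 3000
CMD ["node", "server.js"]
```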
Real-time, because you get that feedback in real time. When you deploy a JavaScript application, we invested a lot of time making sure that you're not waiting forever for npm install to complete, or waiting forever to run tests or integration systems that are quite CPU- or memory-intensive. So we have a smart cache: if you're making a lot of changes and deploying a lot, you don't really need to reinstall every package every time.
For us it's much better if we just share the delta of your source code, put it in the cloud, and run the build process there. And finally, something I want to emphasize: we invested a lot of time thinking about what the best models are for elasticity and for you to become more productive, and we found that microservices are uniquely fitted to this deployment process and idea. Imagine that instead of versioning your entire API, you could deploy method updates all the time.
Imagine that each API method that you have, or each schema that you have (for example, if you're using a GraphQL back-end), had its own little service that you can update on a one-by-one basis. Essentially you can deploy updates all the time, and you're not constrained by a single monolithic process.
So we think that this idea of monoliths versus microservices is, at the end of the day, somewhat of a false dichotomy. The reason for this is that, to the consumer of your services or websites, it has to look and feel like a monolith. Just to give you an example: you want to have a consistent interface, a consistent URL and, for example, a single authentication mechanism for your entire API. You don't want to have 20 different ways of accessing your data.
So imagine that your end goal is to have an API server that has three different paths. We think that the ideal approach to solving this problem is to split up the code into discrete microservices that run discrete processes that can be individually tested, then deploy them individually as independently scaled services. And then all you have to do is provide the mapping.
So, to the consumer, the API looks like a monolith: it's api.something.com. Or, when you're working on a website, you don't want to, for example, deploy to two different subdomains and then make users bounce between them. There are a lot of scenarios where the user experience improves a lot
if you do everything on a single domain, for example. So you could be tempted to conflate this idea that the interface looks like a monolith with "I should build a monolith," but with Now we're working hard to make sure that you can still have the benefits of microservices, where it's a very smooth experience to develop, deploy and scale, while when we proxy, we make it really easy for your app and consumer-facing interface to look like a monolith; the development experience would break otherwise.
But, more importantly, you could have performance problems. Like I mentioned earlier, if you're taking advantage of HTTP/2 connection reuse, for example, you want all the data flow to happen over a single domain and over a single connection. This model allows for that very, very elegantly.
To this end, we developed a little library that we call micro, which makes it very easy to focus on a single concern at a time. Notice that I don't even deal with routing here; all I do is export, from my module, a function that is defined as async, and within it I can await a bunch of promises.
This solves two things from our perspective. One of them is that, again, you don't need to start with a large server in which you configure every single type of middleware that any endpoint within it might require. Here, you await the specific concerns that this specific microservice has. For example, if I need to do body parsing, I only need to include the body-parsing function that I can await. So it increases performance by only evaluating and executing the things that are relevant to the specific endpoint or server.
That's one thing, and the other thing is that it really solves the callback hell problem, because you just run micro index.js, for example, for this file, and within it we transpile on the fly. We do this very quickly; we don't have to invoke Babel or anything like that. And within it you can manage asynchrony very, very easily: any function that returns a promise can be awaited, and you can express your business logic very, very nicely. So I'll show you the GitHub once again: zeit/micro.
When you run the micro program, it listens on a server, and that server can obviously be rerouted and reverse-proxied to a specific pathname within your API, for example. That way you don't really have to worry about that monolith-versus-microservice dichotomy, and at the same time you don't have to worry about writing code that's brittle. Perhaps the most interesting benefit of this architecture is that when you boot more of your microservices, they boot up extremely quickly. Perhaps one of the most overlooked aspects of performance
is that you just need to load less code, because loading code takes time in the form of interpretation and parsing, and it takes up a lot of memory once it's fully loaded by the Node process. That means that if you have a request coming in and we're scaling very quickly, you really want to make that happen very, very fast. So micro is a project that we open-sourced to try to contribute to this end goal for the community: write small chunks of code that have one primary concern.
Yeah, he's asking about databases, and that's a really, really great question, because our vision is that we really want to be the best at creating this glue for the cloud. Databases are emerging every single day in ways that are not very obvious. For example, you can use the Google Spreadsheets API to host some data, provided your data volume is not that big.
With that, for example, you would get a lot of benefits, like collaboration, introspection, the ability to see your data, and immediate querying of your data, which a lot of databases don't actually provide very easily. So we think that over time databases are going to be hosted not by you, but by people who are consistently upgrading them, monitoring them and scaling them on your behalf. Think about applying the same model of "you want to change your code, so you deploy a new version":
if you wanted to upgrade your database in that way, you would probably have a lot of trouble, because you would have to deal with all this state, you would have to deal with all these migrations and, at the same time, you would have to deal with the operational problems that come with databases. So our customers use specific databases to solve specific problems. For example, if you're building a chat application, you can use Firebase; if you're building a CRM, you can interface with another API that's more suited for that.
It depends on the kind of queries that you're going to make. One thing that we've seen in the marketplace is that databases become more and more specific to each use case. So you use Elasticsearch for search engines, or you use InfluxDB for time-series data, or you use specific services that don't look like databases. There's a great blog post by Joel Spolsky saying that the best kinds of applications are actually data structures.
So if you store your data in Trello, you boot up a service on Now that makes API calls to Trello, and you use them as your storage back-end, because maybe the data structure that they deal with fits your application really well. So we're sort of trying to rethink how to best serve our customers.
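As a toy illustration of that Trello-as-a-backend idea: a Now-deployed service would simply call Trello's public REST API to read its data. The endpoint shape below follows Trello's documented boards/cards route; the board ID, key and token are placeholders:

```javascript
// Build the Trello REST URL a deployed service could call to read the
// cards on a board, treating Trello as its storage back-end.
// boardId, key and token are placeholder credentials from the caller.
const cardsUrl = (boardId, key, token) =>
  `https://api.trello.com/1/boards/${boardId}/cards?key=${key}&token=${token}`;
```

The service then fetches that URL over HTTPS and maps the returned cards onto whatever shape its own API exposes.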
Along these lines, we recently launched our secret storage solution that lets you provision API keys for your deployments, and we're working with partners on making that process of obtaining and supplying the API key for a certain database to your deployment very, very quick. So at the end of the day it feels like you just deployed a database, but all you're doing is orchestrating its deployment through a third party. We think that's really great, because it relieves you from the burdens of the operational difficulties of databases, which are many.
Yeah, so for Dockerfiles: we took a very hard look at the container spectrum, and one thing we found is that containers are smaller, but when you're actually building them all the time, and downloading and re-uploading them, in practice they often take hundreds of megabytes. If you remember Amazon AMIs or ISO images, those seemed to be in the 600 to 700 megabyte ballpark, and in practice containers can get to that point as well.
So what we do is synchronize the contents of the Dockerfile, and we build the image for you in the cloud. You really don't have to deal with registries, and you don't have to deal with uploading and downloading images. Within it you can basically write any orchestration code that you want. As an example, we purposefully made a really funny one: on our blog we wrote a microservice in COBOL, which is probably the oldest programming language that has an HTTP,
let's call it a framework, to be polite. But essentially we deployed a microservice in a sixty-year-old programming language, just to make the point that we're creating a universal platform, and that we can give you that versatility without adding operational complexity. So you should check out our blog, zeit.co/blog, and you'll find examples of Dockerfiles. When we announced the feature recently, we did a PHP example; if you really think about it, that's the way most people have used PHP.
It fits this model really well, because most people never cared about writing HTTP code in their PHP applications. They just wrote a bunch of PHP files that were primarily concerned with their business logic, and this is what made PHP people really productive. Facebook, Box and a lot of really big companies are powered in some big way by PHP. So when we announced Dockerfile support, we were embracing the fact that there are really great alternatives.
We found that the npm node_modules directory can be hundreds and hundreds of megabytes of Babel, React, Angular and jQuery dependencies. So that's another really interesting use case, because with us you just upload the source code and the build happens in the cloud. If it fails, you can even share that URL with your co-workers and say, "look, there was a transitive dependency in npm that just broke my life, can you help me?", and you show them the log of the build.