From YouTube: Where Did My Servers Go? - Trevor Bodz, IBM
Description
Where Did My Servers Go? - Trevor Bodz, IBM
Does provisioning and maintaining servers bore you? Have you ever wished that you spent more time writing application code? Find out how others have ditched their servers with this introduction to Serverless. Trevor Bodz introduces the concepts behind serverless, debunks some of the hype, and, more importantly, explains how it is different from the past. We'll cover best practices and the lessons learned from building serverless Node.js applications. Apply these concepts to quickly build, assemble, and scale your applications through reusable components, leveraging the power of Node.js and the open-source serverless platform OpenWhisk.
A: So, my name is Trevor Bodz. I worked at IBM as a developer for about 15 years and have now moved into the product management side of things. When I was in development, my goal for myself, and later for my team when I moved into management, was really: how can I make teams more productive?
So I'm curious: how many people have actually heard of serverless programming or architectures before? Okay, so a little under half the room. Today, hopefully you'll learn what serverless is, some of the ways it's good, some of the things that aren't good, and then we'll talk a little bit about OpenWhisk, the offering that we have from IBM. Hopefully that will be the shorter part of the talk; I know how much everyone loves vendor pitches.
I like contrasting this with the traditional way of doing things. You have an idea: you want to come up with the next billion-dollar startup. You're going to be the next Facebook or something, so you're going to spend weeks or months writing your code. You have code and microservices, and you need a bunch of servers to run those things for you. You want to get that out the door as fast as you can, so you can get feedback from your users, iterate, and keep repeating that cycle as fast as you can. But servers always let us down; something always goes wrong.
A hard drive fails. We find out a couple of weeks in that we have a Linux OS problem that needs fixing. There are middleware updates, your site goes down, and clients have nothing to look at. A bunch of you are probably thinking, "Hey, we're doing microservices, so we've already figured that out. We had a monolith that we started with, we've broken it down into microservices, and we've taken the next step and made them all HA, so they're all highly available."
So if one microservice goes down, no problem: we've got another one there backing it up. A few of you have probably even gone further, like Netflix: "We've even got that figured out. If the whole data center goes down, we've got another region we can switch over to, and everything's going to be great." But by doing that, you add two new problems for yourself. Yes, your app is going to be there.
A
Your
services
are
going
to
be
there
the
entire
time
for
your
users,
but
now
your
infrastructure
costs
are
probably
going
to
start
shooting
up
through
the
roof.
The
more
and
more
microservices
you
have
the
more
servers.
You
probably
need
it's
kind
of
interesting
to
see
how
that
trade-off
balances
out.
Obviously,
you
want
to
keep
costs
down
as
you
try
and
cram
more
things
onto
a
single
server,
but
now
you're
kind
of
getting
back
to
that
monolith
and
then
operational
complexity.
You have a pile more things running, and you need a much bigger ops team to handle that for you. In July of 2013, I think it was, Google put out a report, and one of the numbers in it was that they looked at over 20,000 clusters that they ran over three months, and those were 20 to 40 percent utilized. That means 60 to 80 percent of the time those servers were sitting there idle, which means you're paying for resources that you're not using.
Basically, it's like brushing your teeth and leaving the tap running: it's just water going down the drain. Depending on who you talk to, I think everyone agrees that's one of the big draws of serverless: you don't have to worry about that. Over the last 10 years or so, we've seen an evolution where, I believe, life for developers has gotten easier when you look at servers. We started off with bare metal, and for a lot of people now, we no longer have pets, we have cattle for our machines: they should be interchangeable and replaceable. A container dies? Just throw another one up there. And now, as we move to platform as a service and functions, a lot of people are thinking of this more as a herd of cattle. You don't even really care about the cattle that you have; there's just a whole herd of cattle out there that you can start using. It's more of a utility, like an electricity company.
Electricity is always on there for you; it's up to you whether you want to use it or not. So really, depending on who you talk to, it's serverless, or functions as a service; there's actually an interesting blog post circulating around that says to just call it Jeff. In the end, it's really about the value and not about the name.
A lot of people don't really care for the name serverless, because the big punchline is that there are servers; they're just not your servers anymore, and you don't have to worry about managing and upgrading them. So now your microservices come down to a much smaller unit of deployment: you're now basically deploying a function, a discrete little piece of code that usually does one thing and does one thing very well for you.
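(A minimal sketch of what such a unit of deployment can look like, assuming a Node.js-style handler that takes a parameters object and returns a result; the function name and fields here are illustrative, not from the talk:)

```javascript
// A single-purpose function: the whole deployable unit.
// It receives event parameters and returns a result object.
function greet(params) {
  const name = params.name || 'stranger';
  return { message: `Hello, ${name}!` };
}

// Export so a platform (or a local test) can invoke it.
module.exports = greet;
```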
And so, in its essence, that's what serverless gives you. It comes with its own problems, but also a lot of benefits, and there are two benefits I want to talk about. One is scaling. For those not familiar with it: basically, what most of the platforms do is let you deploy your function in some way or form, either via a CLI or a web UI, or via the REST APIs they have.
Your API now just routes to the single function. So basically, when an event happens, we'll grab your code, put it into a container with your runtime, and execute it. You're probably thinking, "Well, great, I have one instance running now. What happens if another call comes in? I need to do another one." That's the beauty of serverless: you get infinite scale.
Whether you have one or a thousand of these requests coming in, that's again abstracted away from you. That's the "less" part of serverless: you don't have to worry about it. When you're looking at platforms, there are two pieces to look at. Say you have a request that comes in and finishes processing, and then another request comes in five or six seconds later. These serverless companies are pretty smart:
they'll actually keep that container around for a little while so they can reuse it. It's your container with your code, so if they need to run the same thing, it's there. These usually start in the order of milliseconds, definitely less than 10 milliseconds, for your code to start executing again. Everyone calls that a warm start. On the opposite side, there's a cold start.
That's when you haven't had something run for an hour; maybe it's a scheduled job that runs every night.
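(Warm starts are why a common serverless pattern is to do expensive initialization in module-level state, outside the handler, so a reused container can skip it. A sketch, where the cache and counter names are illustrative:)

```javascript
// Module-level state survives between invocations that land
// in the same (warm) container, so setup can be reused.
let connectionCache = null; // stand-in for a DB client or loaded config
let invocations = 0;

function handler(params) {
  const coldStart = connectionCache === null;
  if (coldStart) {
    // Expensive setup runs only once per container lifetime.
    connectionCache = { createdAt: Date.now() };
  }
  invocations += 1;
  return { coldStart, invocations };
}
```

Calling `handler` twice in the same process simulates a cold start followed by a warm one.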
If your code's not executing, you don't pay for it, and that's the end of it. When your code does execute, that's what you pay for. So if you write efficient, performant code that runs really fast, you pay for the amount of time it runs. It could be milliseconds, it could be seconds, and almost all platforms charge in increments of 100 milliseconds.
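(As a sketch of how that billing model works out arithmetically: round each invocation up to the next 100 ms, then charge per unit of memory-time. The per-GB-second price below is a made-up placeholder, not any provider's real rate:)

```javascript
// Round each invocation's duration UP to the next 100 ms,
// then charge per GB-second of configured memory.
const PRICE_PER_GB_SECOND = 0.000017; // placeholder rate, not a real quote

function invocationCost(durationMs, memoryMb) {
  const billedMs = Math.ceil(durationMs / 100) * 100; // 100 ms increments
  const gbSeconds = (billedMs / 1000) * (memoryMb / 1024);
  return gbSeconds * PRICE_PER_GB_SECOND;
}

// A 42 ms run is billed as 100 ms; a 101 ms run as 200 ms.
```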
If you think back to 10 years ago, when you probably had to build your own data centers, that's huge savings. I think the piece that's missing here is that everyone always talks about compute costs; no one really talks about the manpower, or the upside there. One customer that we've been working with decided that some of their stuff doesn't make sense for serverless right now.
A
They're
running
at
about
2000
api
calls
a
second
by
the
end
of
the
year,
they're
expecting
10
000
calls
per
second
for
this
one
api
and
their
plan
from
two
to
five
years
from
now
is
50.
000
calls
per
second.
So
when
you
start
doing
that
math
they
said
we
already
have
an
ops
team.
We
have
a
bunch
of
vms.
A
It's
not
that
it's
not
going
to
kill
us
to
add
a
few
more
vms
to
run
this
load,
and
so
we're
fine
with
not
doing
serverless,
and
so
I
think
they
didn't
really
factor
in
that
yeah.
They
have
the
ops
team.
Ops,
guys
are
hard
to
find
great
ones
they're,
not
that
cheap,
but
then
also
what
does
the
productivity
cost
to
your
developers?
There are a few startups out there as well; iron.io seems to be one of the bigger, more common ones. So again, as you pick your providers, there are lessons to be learned. Everyone's always afraid of vendor lock-in. These are execution platforms; they're not really the full application suite that you need to deliver. That means you probably need a database, you probably need object storage, and the list goes on for services that you need.
A
So
once
you
start
implementing
you're
dependent
on
someone's
set
of
apis
amazon's
are
different
than
ibms
are
different
than
google's,
and
so,
depending
on
who
you
choose
and
the
path
you
go
down,
there
is
potentially
that
lock
for
event
or
vendor
lock
in
analysts
are
starting
to
get
in
on
this
as
well,
and
so
gartner
released
their
hype,
curb
of
2016
and
so
coming
up
kind
of
right
in
the
beginning
now
is
serverless
and
their
projection
is
pretty
soon
that
people
are
going
to
run
into
struggles
and
kind
of
hit
the
trough
of
disillusion,
and
so
I
think
right
now,
it's
just
that
these
platforms
are
so
immature.
A
I
think
lambda
started
off
in
2014
and
everyone
else
quickly
caught
up
shortly
after
that,
but
it
is
still,
it
is
very
brand
new,
there's
still
a
very
long
way
to
go
with
them.
So
some
of
the
struggles
and
other
things
you
should
worry
about
is
compute
limits.
So
these
aren't
great
for
things
that
are
long
running.
They give it 128 megs to save money, because it's a lot quicker, but you really need to watch that. Another one is concurrent processes. As a safety net, a lot of platforms have some limit where you can run only, say, a thousand of these functions or actions at a time; after that, they start erroring out. There have been times where people have actually done a denial of service on themselves without realizing that the limit is per account.
So if you have an Amazon account or an IBM account, you can run a thousand at a time, and usually an account will have both your dev environment and your production environment. If you screw something up in dev and all of a sudden you're spinning up a bunch of these by accident, then obviously that's going to impact production too. So it gets into interesting account management problems as well.
I think for developers the biggest struggle right now is the set of tooling around serverless. Because serverless is so new, the tools just haven't caught up. The three big ones for me are debugging, testing, and monitoring. With debugging, you don't have an environment that you can log into. There is no box you can SSH into to go and test your code, which means there are no remote processes.
Local development is also hard on most platforms, because it's a cloud platform: you upload your code and away you go. Testing gets a little interesting. There are actually a lot of frameworks popping up for this now, so unit testing is getting pretty decent. There are some interesting ways people have done it, like: "I'm going to write a function as a service, and then I'm going to write a function to go test my other functions," which to me kind of defeats the purpose of pay-for-what-you-use, because obviously you're now running more and more functions against that.
Integration testing starts getting really hard, though. At this point you've offloaded everything to your platform, so how do you now start testing with all these third-party APIs, your online database, and everything else? You have nothing in your data center, nothing on premises, and no one's really figured out how to do a good job of integration testing. And monitoring is just starting to catch up.
I think it was last week, or two weeks ago, that Datadog became the first third party I actually saw that will pull data in and let you do some monitoring and alerting on it. And I think it was in July: at Amazon in Sydney there was a huge storm, the backup generators didn't kick in, and the entire data center went down. So if you think about serverless, everything is event-driven: your code is waiting for an event to occur so that it will execute.
There are a couple of frameworks, but there's still a very long way to go for deployment. A few people have stated that there is no ops in the DevOps portion of serverless. There definitely is: things will still go down, and you still need people to monitor, alert, debug, and troubleshoot. But the CI/CD part, I still think, has a good way to go.
If you think about building your microservices, there's a bunch of code all put together and deployed out, whereas here people are now getting into the tens, hundreds, and thousands of functions, because they are discrete little blocks that you use. So when you try to deploy all of that, you might have a function that's used by three, four, or five different applications. How do you version that, and then make sure you deploy the right version to the right set of applications?
So I highly recommend that if you do start your serverless journey, you pick a framework. The first one out there was Serverless, the Serverless Framework; then Apex followed; and then, not for this room, but Zappa if you are a Python developer or have Python in your shop. They do make life a lot easier: setting up API gateways, doing deployments and rollbacks. They have a lot of great skeletons and scaffolding out there as well.
So definitely, as a developer, I highly recommend you look into frameworks and pick one of those. Some of you who have been doing cloud native might be wondering: isn't this just platform as a service by now? They sound pretty similar. This is my favorite tweet from Adrian on that note: if you can spin your things up in 20 milliseconds, have them run, and shut down, then call your platform serverless. But I haven't seen anything out there, Cloud Foundry or otherwise, that does.
This brings me to one of the platforms, OpenWhisk, which is IBM's open-source platform and which, as I've stated a few times, executes code in response to events. One of the things that's interesting and unique about OpenWhisk is that it is open source; it's the only major cloud platform that is 100 percent open-sourced. IBM, and we're actually here with the OpenTech team, is very big on open for the enterprise, and that means you can actually go and grab the code and run it locally.
If you want: we have a Slack channel out there, and we've actually had people pick up OpenWhisk and run it on Amazon. The first time someone asked that question, "Hey, we're trying to run this on Amazon, can you guys help us?", we said, "Sure, it's open source; as long as you can help us work through some of the problems, we'll gladly get things up and running." And so we do have a UI, and there's a CLI.
So if you have some languages, say Scala or Go, which aren't supported by any of the platforms out there right now, you can actually package up your code into a Docker container and give it to us, and every time you want to execute it, as an event happens, we'll actually run your Docker container for you, removing some of that orchestration problem. So here's just a simple example. What all the platforms obviously need is a main and a return.
Triggers are the other fundamental piece. As events come in, a trigger is the thing that calls your code: an event happens, and I need my code to execute. We built OpenWhisk in a way that makes things reusable, so basically you can build a library of actions and a library of triggers and keep them independent of each other. But then we have the concept of rules, and so you can actually start stitching these libraries of things together for reuse.
Maybe you have an action that will post something to Slack, and now you have 10 different GitHub repos, so those are all different triggers, one linked to each repo. Instead of having to rewrite that post-to-Slack code for each one, you can just create 10 rules and reuse that same piece of code.
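(A sketch of what such a reusable action might look like; the payload fields and the webhook handling are illustrative, and the trigger/rule wiring itself happens outside the code, for example via the `wsk` CLI:)

```javascript
// One reusable "post to Slack" action. Ten different triggers
// (one per repo) can each be bound to it with their own rule,
// so this code is written exactly once.
function main(params) {
  // The trigger's event payload tells us which repo fired.
  const repo = params.repository || 'unknown-repo';
  const text = `New activity in ${repo}: ${params.message || 'no details'}`;

  // In a real action this would POST `text` to a Slack webhook URL
  // passed in via params; here we just return the composed message.
  return { posted: text };
}
```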
And so, as I mentioned, I was a developer, and being at a Node event, I'm guilty of having been a Java developer for 10 years, so all I knew was servers. DevOps was kind of new at that time, so servers, the headaches with them, and all the problems they brought me were just my normal life; this picture kind of sums it up. I was like, "Yeah, that's fine," because that's what I knew; I didn't know anything better. But I challenge all of you to go figure out what use case works for you.
I challenge you to burn your servers. Take that idea that you have and really upgrade your productivity as developers. There's definitely some use case in every company. A lot of people I've seen have started with scheduled or batch jobs that don't run that frequently and that they just want to offload to a system; that might be a good place for you to start as well. So that is my 20 minutes.
B: I've been working with Lambda a bit, and I was struggling with having a web app calling a server calling an API, for example with long polling. You want updates from your server to propagate to your app or to your website. So how do you do that?
A: It's supposed to solve that. Instead of polling and waiting for data, with Amazon, Microsoft, or us, you can create your own custom event sources. So basically, you should be able to create an event source that says, "Hey, a new event has happened, go and run this code." It should remove the polling portion of it.
B: So you need some way to subscribe to events from the browser, and then, of course, on the server side you can initiate a function, but that needs to propagate some notification or something like that, and then be integrated in an API.