Description
If you run your software in the cloud, you might have already done some serverless programming, be it as glue code that connects existing services or for your entire web API. But can we run serverless workloads in our favorite programming language as well?
We can! In this talk, we are going to look at how to run serverless workloads in Rust on Azure Functions and AWS Lambda. In doing so, we will see the fundamental differences between the two serverless providers and what effect they have on your applications!
Stefan on Twitter: https://twitter.com/ddprrt
Rust Linz on Twitter: https://twitter.com/rustlinz
I want to talk today about serverless Rust. I'm Stefan, for the folks who don't know me. I have the worst Twitter handle in the entire world, but I'm also one of the co-founders and co-organizers of Rust Linz, and over the last, let's say, one and a half to two years, I've done a lot with serverless. Serverless, serverless platforms, and the cloud providers who provide serverless offerings are what I work with on basically a day-to-day basis. That's why I want to talk to you about serverless.

I promise there will be some Rust, but there will also be a lot of cloud internals, things that you might not know even if you're using one of those serverless offerings. So, who out here has not heard about serverless? You can raise your hands. Okay, so everybody has heard about serverless, that's great!
Serverless comes in a couple of different flavors, and this would usually be the point where I make that particular joke that, well, serverless is not actually serverless, there are still servers, you know. That's old, everybody does that. The thing that I think nobody talks about is that there are multiple flavors of serverless, and depending on who you talk to, everybody understands something different. One thing that serverless is, is the whole auto-scaling part.

Whichever provider it is makes sure that all your infrastructure problems are handled, and usually, and this is actually one of the most important points, this comes with auto-scaling and consumption-based pricing. This is, I guess, the most important distinction from all the other things: you just pay for what you're actually using, be that an entire server that is provisioned, or just, you know, a little bit of CPU and a little bit of memory, and there you go.
The other part is on the application developer side, and this is what is usually referred to as functions as a service. Instead of writing your entire HTTP server, or whatnot, you don't write the entire server logic. You are just writing the little piece of logic that actually does something. How it gets called, how the payloads are handled, how the response is handled is not your part.

You just focus on this teeny, tiny piece that actually does something, and for some cloud platforms this is actually often seen as some sort of glue code. Especially if you think about AWS: AWS has 270 services, or something in that ballpark.
You could basically run your entire application on AWS services, but you need a little bit of glue code to connect the bits and pieces and to make sure that all those services work together. For the auto-scaling side there are lots of examples, and one that I found out about recently is Google Cloud Run, which is really "here's my Dockerfile, do whatever you need to do with it", and AWS Fargate. And then there's the functions-as-a-service side.

The two most well-known offerings are AWS Lambda and Azure Functions, and we are going to look at the internals of AWS Lambda and Azure Functions today. The thing is: consumption-based pricing, and just writing business logic, just writing the functions.
Those two things can be seen as totally separate, but usually they work really well together. Azure Functions has offerings where you don't have serverless scaling: you just say "100 servers, please" and write your functions, but they are running all the time, they're provisioned the entire time. This is possible, but usually you use them with consumption-based pricing.

You say: okay, I'm just going to pay for what I need, and AWS or Azure makes sure that just what I need is provisioned. Which brings me to the billing. If you use the two of them together, most of the serverless providers say: we charge you for the time it takes to execute your stuff and the amount of memory that you need. That's very much simplified, but that's usually the way to calculate it. Calculating serverless pricing is hell.
There are so many factors involved, and I'm going to get to that at the end of the presentation. There are companies specializing in calculating that stuff for you, so we are not going down that road. We are going down the road of why we want to use Rust with serverless. If you think about the entire pricing model of serverless, where you pay for the amount of time and for the RAM that you use, you have to say that Rust is really good at memory and speed.

So you have really fast applications, and, you know, memory is everything that Rust is about: using just the right amount of memory, not having a garbage collector running, not over-provisioning memory, just having what you actually use and what you actually need. That's why I think Rust is a really good fit for serverless.
Okay, for the folks in the stream: AWS Lambda. Well, no, it's the Half-Life logo, and not everyone is aware of that. If you see AWS Lambda blog posts out there, they are usually using the Half-Life logo; even AWS sometimes uses the Half-Life logo in their blog posts. This is the actual AWS Lambda logo. So get this into your brain: if you see this, that's AWS Lambda; the rest is Half-Life. And yeah, of course, we want to talk about AWS Lambda.

I couldn't say if it's the first one, but it's definitely a pioneer of functions as a service, the poster child, one of the services that everybody has in their minds when talking about serverless. I know it from making web APIs, and the more time I spent with AWS Lambda, the more I found out that, I guess, web APIs are just a side effect.
I guess this wasn't planned; it just works as well. What it actually should be is just this little bit of glue code between AWS services: you have something like the API Gateway, which exposes HTTP endpoints that you can call and make requests to, and you can attach a Lambda to that. That actually just works, but I guess it's not the main use case.

Lambda was not designed exclusively for that. Lambda is very much unaware of triggers. It just takes an event, and that can be an HTTP trigger event, but what it does is: it takes an event, it processes some workload, and then it produces a result. It's like the classical function: you have some input, you have some output. In the best case it's stateless, which is not entirely true, but it's a good mental model.
Lambda runs in very, very lightweight micro VMs: Firecracker micro virtual machines. Firecracker is nice because it's written in Rust, and it's a fantastic piece of technology. It's open source, so you can check it out; there are some very clever things in there. And when you think about AWS Lambda, you shouldn't think about servers that are provisioned for you where you just write the glue code. They are actually workers, and this is a very important detail that we are coming to in a minute.

So, if you're doing AWS Lambda... and I have to make my notes a little bit bigger, so I can see what I've written there, otherwise this is becoming some sort of karaoke: well, look at that screen. So, AWS Lambda. The execution life cycle comes in a couple of steps, and what you see here in blue are all the bits and pieces that always need to run, and all the pink parts are those pieces that we call the cold start.
First, it checks if you're even allowed to do that. If you're allowed, it checks if you are within the range of concurrent workers that Amazon can provide for you, and if that is all okay, AWS Lambda requests a worker. You can think of a fleet of micro VMs, and you can request one of those workers to process your workload.

If you get one of those workers, and if the worker is what we call cold, which means the worker doesn't know about what you want to process, the worker creates a sandbox, downloads the code, your stuff that you've written, and bootstraps your application.

If you think about Node.js, this means downloading your JavaScript files, downloading Node.js, downloading the node modules, packing them in a folder, and then starting Node with the application and booting it up, and only then are your workloads processed. So this is the typical AWS Lambda execution life cycle.
A couple of the things always need to be done, but they're very fast, except those bits and pieces in the middle, the pink column. Those can take some time, and this is the cold start time. Once the workload is processed and you get another request, only the blue parts run, which means AWS checks if you are allowed to do that, and if there's already a virtual machine, it processes that code.

Well, Lambda is processing the code. If you look at an invocation diagram, this is how the invocation could look: you have a trigger, whatever the trigger is, let's say it's an HTTP trigger, and you're getting a... yes?
This is exactly what you're going to see here on the screen now. So, yeah, a question. I'm going to repeat the question for the chat: how does cold starting micro VMs work if there are many processes running? Can I put it like that? Yeah, this is what we're going to see right now: what happens when one of those workers is provisioned, what happens if you're already processing a workload, and what happens if another one comes in.

So, let's say you have a trigger. This can be an HTTP trigger or whatever trigger you like, and this is the very first trigger that you have, like an HTTP request that needs to process something from a backend. AWS does its cold start thing, which means requesting a worker, downloading the application, blah blah blah, the entire thing.
Once your runtime is bootstrapped, the runtime does an HTTP call to the AWS REST API. This is the /next that you can see here on the screen: it does a call to the REST API asking for the payload. It asks: which event should I process? Can you give me all the information that I need to process my event? Once it has fetched this information, it processes it. This is the code that you are writing, whatever that is; this is the one thing that you are doing. Once the result is there, it does another HTTP request to the AWS REST API, with a POST message to /response, delivering the result. And during that time this worker is working exclusively on this particular workload. Nothing else.
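The /next and /response loop described above can be sketched in plain Rust. The endpoint paths follow AWS's documented Lambda runtime API; `handle`, `http_get`, and `http_post` in the comments are hypothetical placeholders for your own logic and whatever HTTP client you choose:

```rust
use std::env;

/// URL the custom runtime polls for the next event (long-polling GET).
fn next_invocation_url(api: &str) -> String {
    format!("http://{}/2018-06-01/runtime/invocation/next", api)
}

/// URL the custom runtime posts the result to (POST).
fn response_url(api: &str, request_id: &str) -> String {
    format!("http://{}/2018-06-01/runtime/invocation/{}/response", api, request_id)
}

fn main() {
    // Lambda injects the API host:port via this environment variable;
    // the fallback here is only for running the sketch locally.
    let api = env::var("AWS_LAMBDA_RUNTIME_API").unwrap_or_else(|_| "127.0.0.1:9001".into());
    println!("would poll {}", next_invocation_url(&api));
    // Loop sketch (HTTP calls elided, use any HTTP client here):
    //   loop {
    //       let event = http_get(&next_invocation_url(&api));  // blocks until an event arrives
    //       let result = handle(&event);                       // your code
    //       http_post(&response_url(&api, &event.request_id), &result);
    //   }
}
```

This is exactly why the worker is exclusive: the runtime only asks for the next event after it has posted the response for the current one.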
There's a certain hibernation period. AWS isn't open about the amount of time that needs to pass, but after that period the worker is frozen. When the next request comes in, it thaws again, which means it's not a cold start; it's still a warm container, but it gets all the resources back: it gets CPU back, it gets RAM back, and then it can process another workload. Now, what happens while you're already processing a workload?

This is actually the one big difference between AWS Lambda and all the other serverless offerings out there: for every workload that you're processing, you have a dedicated worker doing that. There are no parallel processes within the worker or anything. Not the thing that Node.js is really good at, which means doing I/O and being able to handle lots of requests, because it doesn't actually do anything; it just has open connections.
Nope: you have a dedicated worker for just this particular process, and after that it works on the next process. Which also means: if you get another workload and it panics for whatever reason (I don't know: stack overflow, heap overflow, any panic that can happen), it just kills that particular worker, or the particular runtime within the worker, then bootstraps the worker again, and then you can run the next workload.

So this is how AWS Lambda scales out various workloads across the system. With the typical consumption plan you get about 1,000 workers that you can use. If you need more, either call AWS or the events queue up, which means that, yeah, you need to wait for them. Does this answer your question? Cool, great. I already said that if those processes hibernate for a certain amount of time, they get disposed and the resources get freed.
So this is how AWS Lambda works. There are a couple of things that are good to know, and that got more and more interesting to me the longer I worked with AWS Lambda. You are paying for RAM, so the amount of dollars that you are paying scales with the amount of RAM that you use. If you're using 128 megabytes of RAM, you pay $0.00000021.

If you have twice as much RAM, you're going to pay twice as many dollars, but, and this is interesting, you're also getting twice as much CPU, which means that most of the workloads that you are running just cost the same. So if you don't have any workloads that need a particular amount of RAM, workloads that can work with a low amount of RAM...
If speed is a problem for your application and not RAM, just go for the full gigabyte, or for 10 gigabytes, or whatever. It doesn't matter: it's the same amount of money that you're paying, because if you have a smaller VM, or less RAM, it just takes twice as long. I couldn't say that for every workload that exists, but it happens for a lot of programs that we have been running.
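The "twice the RAM, twice the price, but twice the CPU" trade-off can be sketched with a toy cost model. The rate constant here is an illustrative assumption, not a current AWS price:

```rust
/// Toy Lambda cost model: the price scales linearly with allocated RAM,
/// but so does CPU, so CPU-bound work finishes proportionally faster.
/// Assumed rate for illustration only, in dollars per MB per millisecond.
const RATE_PER_MB_MS: f64 = 0.00000021 / 128.0 / 100.0;

fn cost(ram_mb: u32, duration_ms: f64) -> f64 {
    ram_mb as f64 * duration_ms * RATE_PER_MB_MS
}

fn main() {
    // A CPU-bound job taking 100 ms at 128 MB takes roughly 50 ms at 256 MB,
    // so both invocations end up costing about the same.
    println!("128 MB for 100 ms: {:.10} $", cost(128, 100.0));
    println!("256 MB for  50 ms: {:.10} $", cost(256, 50.0));
}
```

The model only holds for CPU-bound workloads that scale with the extra CPU; I/O-bound work just gets more expensive with more RAM.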
So if RAM is not an issue, you can control the speed of the execution with that. All right, how are we going to write an AWS Lambda in Rust? Of course, we are bootstrapping the Firecracker VM, then we are running a Node process, because AWS Lambda is Node everywhere, and Node is going to load a WASM file, a Rust program that is compiled to WASM. So it loads the WASM file and starts the WASM virtual machine, and... no, please don't do that. Please don't do that. That's way too many virtual machines.

That's way too many turtles. That's, sadly, what you see in the blog posts on the internet: if you type in "using Rust with AWS Lambda", you're getting stuff like that. Not cool. There's a more direct way to do that, which is: well, just compile a binary and let it run natively on AWS Lambda. You don't need to have any other abstraction, and it's actually quite easy.
The handler function is the actual function that you are going to implement. This is the actual logic, the actual code that you're running. Then you compile it to x86_64-unknown-linux-gnu. The documentation says that you need to compile it to musl Linux; it doesn't matter, gnu works as well. It's just that with some glibc versions you run into issues, because the Amazon Linux version doesn't support a couple of symbols.

That's why they say in the documentation: use musl, because it's the least painful Linux target to compile to. It's also the slowest one, so go for the target that works. You might need to install a linker if you are on macOS or something, but usually it works. I even have some GitHub Actions that compile that stuff for you; I'm going to show a link to the repo where all that stuff is later on, so you can check it out. And name the binary "bootstrap".
That's the name. Put it in a zip file and upload it to AWS Lambda, and there you go. All right.
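As a rough sketch, the build and packaging could look like this. The crate name `your-crate` is a placeholder, and the build commands are guarded so the snippet is safe to run outside a real project:

```shell
# Lambda's custom-runtime convention: the deployed binary must be named "bootstrap".
TARGET=x86_64-unknown-linux-musl
BINARY=bootstrap

# Guarded so the sketch does nothing outside an actual Rust project:
if command -v cargo >/dev/null 2>&1 && [ -f Cargo.toml ]; then
    rustup target add "$TARGET"
    cargo build --release --target "$TARGET"
    # "your-crate" is a placeholder for your actual crate name:
    cp "target/$TARGET/release/your-crate" "$BINARY"
    zip lambda.zip "$BINARY"
fi
```

The resulting `lambda.zip` is what gets uploaded as the function package.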
Let's look at some results, because, you know, Rust is really good at speed and memory.

If I have a Lambda with 128 megabytes and I run a typical hello world, which is just: take the Node runtime from Lambda, print out "hello world" and give the result back, then a cold start takes about 200 milliseconds on a 128 MB Lambda VM. A re-run, so if the Lambda is hot, if the runtime is hot, takes about two milliseconds. In Rust, cold starts take less than 20 milliseconds.
I couldn't tell if this is for the entire process that I have shown you or just for the one particular piece that you have in control. It doesn't matter: this is what you're billed for, so this is the number that counts, and it's less than 20 milliseconds. I had some cold starts in the ballpark of 60 milliseconds with the hello world. That's great, and re-runs usually are less than a millisecond, which is fun. I had a couple of re-runs where Lambda told me 0.5 milliseconds. Great.

It also billed me one millisecond, because that's the smallest amount that Lambda can bill. All right. This is hello world; this is not fun, this is not something that's in any way interesting. That's why I created a little benchmark program, which is called palindrome products.
What it does is: you take a range of numbers, let's say from 100 to 999, you create the combinations, like multiply 100 with 101, multiply 100 with 102, and you do that for all possible combinations that exist, and then you see if the product is the same forwards and backwards. So 1001 would be a palindrome product, because it reads the same from both sides.
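A minimal sketch of the idea in plain Rust. This variant searches for the largest palindrome product in a range, while the talk's benchmark enumerates all combinations, but the CPU-heavy hot loop is the same:

```rust
/// Check whether a number reads the same forwards and backwards.
fn is_palindrome(n: u64) -> bool {
    let s = n.to_string();
    s.chars().rev().collect::<String>() == s
}

/// Largest palindrome product of two factors taken from `min..=max`.
fn largest_palindrome_product(min: u64, max: u64) -> Option<u64> {
    let mut best = None;
    for a in min..=max {
        for b in a..=max {
            let p = a * b;
            if is_palindrome(p) && best.map_or(true, |m| p > m) {
                best = Some(p);
            }
        }
    }
    best
}

fn main() {
    // For two-digit factors (10..=99), the answer is 9009 = 91 * 99.
    println!("{:?}", largest_palindrome_product(10, 99));
}
```

The number of products grows quadratically with the range, which is what makes this a handy CPU benchmark.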
The nice thing about it is that this takes a lot of CPU if you're working with really big numbers, so you can benchmark really well how long it will take, because the bigger the range is, the longer it takes.

If I'm calculating palindrome products from 10 to 99: I'm leaving out the cold starts, because they're usually in the same ballpark as the hello world; it's a very small program, so you don't have to do anything, and re-runs are way more interesting. A re-run from 10 to 99 in Node takes two milliseconds, and, of course, in Rust it takes less than one millisecond. Surprise: small numbers are fast, and surprise: Rust is fast. This is not something that you're interested in.
What's more interesting is if you crank up the numbers. Let's say from 100 to 999: Node takes about 500 milliseconds, and in Rust it just takes 45 milliseconds. So this is really, really fast; this is really, really nice. And if you're working with really big numbers, like calculating from 1,000 to 9,999, Node takes about 70 seconds, and this is something where you hit a timeout very fast. You have to crank up those timeout numbers, and also with Rust, because it takes about eight seconds.

This is impressive. Of course you can control it: you know, give it twice as much memory, the CPU gets twice as much, and those times go down because it's twice as fast. But hey, the same goes for the row down below; the same goes for Rust. So, those are the benefits of Rust in Lambda. One of the biggest benefits actually is that you have very, very small binaries.
Instead of deploying the entirety of Node and your Node scripts, and maybe a couple of node modules or something, you don't have to deploy so much. You just have to run a three or four megabyte binary. You can unzip it fast, and you don't have any overhead in bootstrapping it. This is actually the biggest benefit; this is how you get those nice cold starts, because you don't have any overhead from anything else. And it works great on low vCPU.

For the 128 megabyte VM, you are getting provisioned something between a 12th and a 13th of a vCPU, which is not that much, and Rust, since it's natively compiled, also works really well with this kind of CPU unit. We have low RAM usage: what we see is that in Rust you are just allocating the RAM that you use. Great.
A
It
always
tries
to
be
ahead
as
much
as
you
allocate
so,
which
means
that
that,
if
I
don't
know
I
use
20
megs
of
ram
in
rust,
node
allocates
40
mixer
from
that's
how
the
v8
works
but
yeah,
and
also
what
I
found
out
that
you
have
less
variation
at
execution.
So
if
you,
if
you're
running
benchmarks,
which
means
I
don't
know,
do
10
000
requests
per
second
or
something
see
see
how
how
lambda
behaves
there,
there
are
a
couple
of
spikes
there
there.
A
I
don't
know
you
see
that
vm
is
getting
freed
and
or
something
huge
or
something
like
that,
and
then
you
can
have
spikes
that,
even
if
your
payload
takes,
I
don't
know
45
milliseconds
to
to
run
on
node.
Sometimes
you
have
to
600
milliseconds
or
one.
One
second
runs
because
lambda
can't
handle
it.
If
your
execution
times
are
very,
very
slow
and
very
very
fast.
Lambda is better able to handle that, so you have less variation in execution. Plus, it's super fun. I love writing Rust, and I love writing serverless functions with Rust. Alrighty, cool. Let's go to the next provider, which is Azure Functions, and Azure Functions is very different in basically everything compared to AWS Lambda. There are not that many overlaps, even if it looks like that.

When you're writing, I don't know, a hello-world application in AWS Lambda or Azure Functions, yeah, well, you're just writing this handler body and you're done with it. But there's so much nuance in basically everything that happens. First of all, the triggers are part of the function definition. You define what the function needs to be triggered; you are defining the input and output bindings, so you define the input, the trigger, and the sink, the target. Whereas writing the function in AWS Lambda, you're just providing the function, and then you configure the inputs.
This might be the same in the end, okay, of course, but it also means that you can use the same Lambda for various input sources and output sources. Not so much for Azure Functions: you have to define the trigger with the function. You can have multiple output targets, yes, but they are very much tied to what your function looks like. It builds up on Azure WebJobs; if you ever run across Azure WebJobs and think "hey, that sounds fun, it also sounds like serverless": that's what runs underneath.

The most important difference, I think, is that the unit of deployment is not just a single function, but an entire function app. So you're rolling out the function app, and you can put as many functions in there as you like. Let's say you put 10 functions in there, that's great: the function app gets deployed and scaled out together, which means that if Azure Functions requests another server, it comes to a server. Not a worker: a server.
It bootstraps the entire app, all the functions that you have in there. If there are 10 functions, all 10 functions are bootstrapped. This is a big, big difference. This also results in big cold start times, but it also results in very low start times for, I don't know, all the other functions in the app, because the entire app is booted up. Also, those are servers, which means they can take more than just one request at a time.

You don't have them exclusively, so it might be, just like with any other server, with a Node server or whatnot, that you need to process multiple requests in parallel. And it needs Azure Storage to store the functions, so you can't deploy a function without Azure Storage. Hidden costs, just so you see it. So what does the Azure Functions execution life cycle look like?
The scale controller checks if any events come in, and if those events come in and there's not a worker, a server, already working on that, it allocates a new server.

The server is unspecialized. You can think of Azure having tons of Azure Functions servers in their cloud, but they're all unspecialized, which means they run the function host, but the function host doesn't know what to do. The function host doesn't know which functions to run; it is just a dedicated function server. Once it's getting specialized, the files are mounted into the server's virtual file system, so the stuff from your Azure Storage is mounted into this particular server.

The app settings are applied, which means, yeah, telling it which version to use, feature flags, what the runtime is, blah blah, all those kinds of things. Then the function host (and I'm sorry, this is very small) restarts, the functions runtime reads the function.json, and the function.json defines the input and output bindings.
And then it's going to execute it, so re-runs are very, very fast. This is how the Azure Functions host works: you get the trigger, the trigger is defined by the input binding, the function host takes the input, and then it either calls an in-process task, which is written in C# or F#, because the Azure Functions host is written in C# and .NET.

That also means that if you're writing functions in C# or F#, they're just getting linked together (I don't know if that's the correct word, because I haven't written any C# in over 15 years), but since it's the same technology, it can just run C# and F# stuff. If you're using something else that Azure provides for you, they have dedicated runtimes for that. You can see all those runtimes on GitHub.
They're open source. And it can call an out-of-process runtime, which is either Node, or Python, or whatever, and it creates a gRPC connection with it. So it's sending events over gRPC to the function runtime, the runtime processes those events in its gRPC-capable server, and it gives the results back. And there's a third option.

Those are not running in parallel, even though the image might suggest it; they're not running in parallel, it's an either/or. You can either have in-process function apps, or out-of-process function apps, or the third part: custom handler functions, which means: I give you an HTTP connection, do whatever you need to do. This is where you can define whatever you like, be it a Deno process, a similar language runtime for JavaScript, or Rust.
So this is where we are going to implement our Rust Azure Functions runtime. The results are going to be sent back to the function host, and it sends them to whatever output binding you need. Stuff that's good to know: RAM scales to your needs. This is very important. In AWS Lambda you are going to pre-provision a certain amount of RAM that you are paying for; an Azure Function takes as much RAM as you need, not more, and you are paying for the amount of RAM that it takes. Also, the vCPU scales to your needs, up to one entire virtual CPU, because the Azure Functions host can do that. So it's not tied to the amount of RAM that you allocate or that you provision: just as much as you need. With the consumption-based plan it scales up to one vCPU.
If you have premium plans, you can have up to four or five or something. It's hidden in a number that's called the Azure Compute Unit, and what I found out by googling a lot is that 100 ACUs are roughly equivalent to one virtual CPU.

Cold starts can be slow, but then again, since you have as much CPU as you need, those re-runs can be very, very fast, and cold starts in Node suffer from the entire function app being booted. All right, let's define the function host. This is how it would look if we are going to define a function host for a Rust application. There are two things that are important. I'm not saying: please run a Node.js application, please run a C# application...
Or whatever. Now I'm saying: I have a handler executable. This is an actual executable, like "handler", or "handler.exe" if you're on Windows, and I enable forwarding HTTP requests, which means that I want to communicate via HTTP with my application. Then there is a function binding where I say: okay, for the endpoint "palindromes" (I'm talking here about HTTP endpoints, because they're the easiest to do), please create this binding where I accept GET and POST requests as input, and I want to see a result on the output over HTTP. Those are the two bindings that you can do there.
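In Azure's custom-handler convention, the two pieces described here are a `host.json` that points at the executable and enables HTTP forwarding, and a per-function `function.json` with the bindings. A minimal sketch for a "palindromes" endpoint could look like this, with `handler` standing in for your binary name:

`host.json`, at the root of the function app:

```json
{
  "version": "2.0",
  "customHandler": {
    "description": { "defaultExecutablePath": "handler" },
    "enableForwardingHttpRequest": true
  }
}
```

`palindromes/function.json`, one folder per function, where the folder name becomes the route:

```json
{
  "bindings": [
    { "type": "httpTrigger", "direction": "in", "name": "req", "methods": ["get", "post"] },
    { "type": "http", "direction": "out", "name": "res" }
  ]
}
```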
This
is
actually
something
that
I
really
really
like
in
azure
functions,
because
you
can,
for
example,
write
services
that
send
an
email
but
also
give
a
response
back
to
the
user
that
calls
it.
So
if
you
have,
I
don't
know,
send
email
with
a
body
or
whatever
you
do,
an
http
request.
A
A
The
nice
thing
is
any
server
will
do
so
if
you're
using
a
rocket,
if
you're,
using
what
I
have
here
or
whatever
doesn't
matter.
If
you
have
a
server
that
a
server
framework
can
rust
to
you're
writing
your
own
server
all
together,
it
will
do
that's
perfect.
Any
server
runs
funny.
It's
called
serverless,
but
you
are
going
to
write
the
server
because
you
know
infrastructure
server,
let's
not
application,
serverless,
two
things
that
are
interesting.
First
of
all,
I
need
to
have
a
path,
mapping
that
this
equivalent
to
what
azure
function
provides
to
me.
So if I have, I don't know, a palindromes.js in Node.js that is mounted at /api/palindromes, then in Rust I need to provide that name in my app. It's just forwarding HTTP requests, it's rerouting the path, but what comes in at my server is a call to /api/palindromes, and: please process this event.

So you have to take care of that mapping, which is a little bit of extra work, because you have to create those folders with the function.jsons, but you also have to create the same mapping again in your server. There's room for failures, for errors, I don't know, a typo somewhere, because it's all strings; you all know that. And the other part that's actually quite interesting is that you listen to a particular port, the custom handler port. This is the port that Azure Functions needs to spin up your server.
This is an environment variable that gets forwarded to your application, and you listen to that port. This is also nice, because if you want to try it out without Azure Functions, you just let it listen to any other port, 3000 in this example. I can develop the server in its entirety on its own, and then wire up those three lines. Perfect: it runs on serverless, it runs on Azure Functions. This is sweet; this is wonderful. All right.
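A minimal sketch of such a custom handler using only the standard library. The environment variable name is Azure's documented `FUNCTIONS_CUSTOMHANDLER_PORT`; everything else here is illustrative:

```rust
use std::env;
use std::io::{Read, Write};
use std::net::TcpListener;

/// Parse the port value the Functions host hands over; fall back to 3000
/// so the same binary also runs standalone during development.
fn parse_port(var: Option<&str>) -> u16 {
    var.and_then(|p| p.parse().ok()).unwrap_or(3000)
}

fn handler_port() -> u16 {
    parse_port(env::var("FUNCTIONS_CUSTOMHANDLER_PORT").ok().as_deref())
}

fn main() -> std::io::Result<()> {
    let port = handler_port();
    let listener = TcpListener::bind(("127.0.0.1", port))?;
    println!("listening on {}", port);
    // Only enter the serving loop when launched by the Functions host,
    // so this sketch terminates cleanly when run standalone.
    if env::var("FUNCTIONS_CUSTOMHANDLER_PORT").is_ok() {
        for stream in listener.incoming() {
            let mut stream = stream?;
            let mut buf = [0u8; 1024];
            let _ = stream.read(&mut buf)?; // request details ignored in this sketch
            // The host forwards e.g. /api/palindromes here; a real server would
            // route on the path that matches the function folder name.
            stream.write_all(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")?;
        }
    }
    Ok(())
}
```

In practice you would swap the raw TCP loop for a proper Rust HTTP framework; the only Azure-specific parts are the port variable and the path mapping.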
A couple of results. Cold starts in Node, hello world, on Azure Functions: around 700 milliseconds, give or take; it can be 500 milliseconds, it can be 1.5 seconds, but usually it's about 700 milliseconds. Re-runs, though: one millisecond, because then, again, it's super, super fast, and this is great. In Rust, those cold starts take less than 100 milliseconds. This is fantastic, because there's just so little to do.

For Azure Functions I had cold starts in the ballpark between 30 milliseconds and 70 milliseconds, but 100 milliseconds is a good estimate, a good measure, to say: this is the cold start for my Rust application. Re-runs: less than one millisecond. Palindrome products again: for very small numbers, Node takes about nine milliseconds...
A
Rust takes less than five milliseconds. The bigger the numbers get: I have about 80 milliseconds for Node, and this was 500 milliseconds with the small AWS Lambda VM. And I have about 50 milliseconds in Rust; this was 45 milliseconds in AWS Lambda. The great thing is: if you have big numbers, it takes Node about 10 seconds, and Rust less than one second. So this is fantastic, because, you know, CPU scales with it then.
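The palindrome-product workload being timed here is, in spirit, a CPU-bound loop like the following (my own reconstruction, not the exact benchmark code from the talk, which is in the examples repository linked at the end):

```rust
/// A number is a palindrome if its decimal digits read the same
/// forwards and backwards.
fn is_palindrome(n: u64) -> bool {
    let s = n.to_string();
    s.chars().eq(s.chars().rev())
}

/// Largest palindrome that is a product of two factors up to `max`.
/// Brute force on purpose: the point of the benchmark is raw CPU
/// work, and the cost grows quadratically with `max` -- which is why
/// "bigger numbers" separate the runtimes so clearly.
fn largest_palindrome_product(max: u64) -> u64 {
    let mut best = 0;
    for a in 1..=max {
        for b in a..=max {
            let p = a * b;
            if p > best && is_palindrome(p) {
                best = p;
            }
        }
    }
    best
}

fn main() {
    // Two-digit factors: the largest palindrome product is 91 * 99.
    println!("{}", largest_palindrome_product(99));
}
```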
A
But what you can see here as well, with AWS Lambda: if you are doing stuff in Node or in Rust, a factor of 10 is a good estimate, so 10x faster. What benefits do you have from Rust in Azure Functions? And we have a little lag here. Alrighty: first of all, significantly lower cold starts. Cold starts were always a very big problem in Azure Functions, and they put a huge amount of effort into getting that right.
A
I can remember presentations where I showed Azure Functions and it took me one to two minutes to have a cold start of a Node.js application. Now, with 700 milliseconds, it's really, really great, especially since reruns are so fast. But you can cut those cold starts down tremendously if you're doing Rust. And what I found great: it's just a server, so any server will do. It doesn't matter which server you use, even servers that you already have.
A
So I love writing Rust, and it's equally fun with Azure Functions, and this is actually my preferred way of doing servers right now: just having Azure Functions calling out to my Rust servers, because, hey, it just works. Alrighty, that's it for the stats, that's it for the milliseconds, that's it for my tests. Summary: first of all, Rust should definitely be considered if you want to write serverless functions. It's a great tool if you like writing Rust.
A
For
some
cases,
you
can
benefit
a
lot,
especially
if
you
have
processes
that
that
maybe
need
to
run
in
the
background
that
there's
a
cpu
heavy
that
really
need
to
compute
something.
This
might
be
your
number
one
choice,
because
it
works
great
on
low
vcpu.
It's
fast,
it
needs
just
a
little
amount
of
memory.
A
Rust,
in
all
cases,
can
help
significantly
with
cold
start
times.
So
if,
if
cold
starts,
are
your
problem,
rust
might
be
the
choice
for
you
execution
times
are
focused
a
lot
on
execution
times.
Keep
in
mind,
we
are
not
talking
about
hidden
costs.
We
are
not
talking
about
azure
storage
costs.
A
We
are
not
talking
about
aws
api
gateway
costs,
because
this
is
you
know
you
get
you
get
the
http
bindings
in
aws
lambda
for
free
big
quotes
as
well
in
hr
functions
for
free,
but
in
aws
lambda
you
have
to
activate
the
aws
api
gateway
and
there
you're
paying
per
traffic
again.
So
it's
calculating
the
entire
thing
is,
is
a
nightmare
seriously.
I
I
couldn't
give
you
an
estimate
which
one
is
cheaper.
I
honestly
don't
know
they
are
just
two
different
and
and
I'm
just
running
in
three
plants
since
the
beginning.
A
First of all, if you read up on Azure Functions, I can highly recommend the entirety of the Azure Functions documentation on docs.microsoft.com. Excellently written! It's amazing how much information you get about everything in there, and if you don't find the information in the Azure Functions docs, you find it on GitHub, because most of the things are open source. So you can read up on the entire function host, how it works on all the runtimes, how they work, and there are also lots of examples for other programming languages.
A
Actually,
so,
if,
if
go,
is
your
thing?
Do
it
and
go
you
have
a
tutorial
for
that,
so
this
this
is
great.
One
of
my
colleagues
at
dinner
has
written
a
blog
about
what
happens
behind
the
scenes
when
aws
lambda
calls
starts.
This
is
where
I
grabbed
all
the
graphics
from
it's
a
great
read,
check
it
out
and
also
check
out
the
blog
from
aws
lambda
on
the
aws
on
the
rust
runtime
for
aws
lambda.
This
is
where
going
to
see
a
nice
half-life
logo,
but
also
some
great
content
for
the
aws
lambda
runtime.
A
Last
but
not
least,
this
github
link
is
all
the
examples
from
today,
so
you
can
check
them
out,
like
literally
check
them
out
like
check
them
out
and
try
them
and
that's
about
it.
Thank
you
very
much.