Description
Discover the unbeatable synergy of Rust with serverless components such as AWS Lambda or GCP Cloud Run. Why is Rust the perfect match for serverless architecture? Uncover the unique features of Rust that make it a top choice for building ultra-efficient and robust serverless apps. Don't miss this chance to revolutionize your cloud development game and enjoy a 100% serverless pizza tracker demo! 🍕
--
Maxime David is a life-long open source aficionado and Senior Software Engineer advancing serverless topics at Datadog. He’s keen on code profiling, and believes that performance in software matters. When he’s not optimizing software, he likes to try every banana bread he can find.
Welcome everyone. I can't see you, but I'm sure you're all great. My name is Max, and today we are going to talk about Rust and serverless, and why they make such a perfect match. Can I change slides? Yeah, cool.

So, who am I? My name is Maxime David. I'm French, but I moved to Canada about 10 years ago, so I'm a Canadian citizen. I work on serverless at Datadog, and I've been an AWS Community Builder since 2021.
If you don't know about this program, it's a great program led by AWS where you share and become part of a growing network of people passionate about technology, and about AWS technology in particular. I also recently started a YouTube channel; I have almost 600 followers, so I'm definitely into the YouTube game. And I'm a big pizza lover, which is really important for the demo, as you will see in a minute. I'm also the creator of Lambda Perf, and we are going to talk about that in a moment. So, without further ado, let's start.
First, I want this talk to be a bit interactive, and I want to show you the power of serverless first-hand. If you can, scan this QR code, order a fake pizza, and stay on the page. I'll leave you about ten seconds to make sure everyone can scan it. Let me know when you're done; I don't know how you can let me know, but I'll leave the QR code up for a few seconds just to make sure you have time to scan it.
If you scanned it, you should see this webpage where you can order a pizza. I'll go for a deluxe pizza, and I can see that my order has been placed. That's the client side; once I've ordered a pizza, I want to track its progress. Now that the order is placed, let's see if we have some current orders. Wow, this is really cool. Thank you so much for playing the game; I can see a lot of different orders.
So I will move the pizza to the baked stage, and you should be able to see the status change directly on your phone; as you can see, mine has just been baked. I'm not going to spend ten minutes on that, but it shows that with serverless (and I'll come back to the architecture in a bit) we can do some really cool stuff, such as websockets.
This demo is using websockets. We create a pizza, then there's a manager interface where you can change the status. That sends a message over a websocket API, and the UI responds to that event and displays the status of the pizza. This is all done in Rust, and we are going to talk about that in a moment. I hope you're not waiting too long for your pizza; it might never come, but yeah.
Thank you so much for playing with the demo. So now, what is serverless? Serverless is very hard to define, and everyone has their own definition of it. I won't try to be more clever than everyone else, so I'll stick with what AWS and GCP say about serverless. Both definitions are mostly the same: the point is that you're not focusing on the infrastructure, you're focusing on building stuff.
It's a great developer experience, at least for me, and if you're not using serverless yet and want to try it, I'm sure you will be amazed by the developer experience it provides. There are four main things about serverless. First, you deploy code, not infrastructure. As we will see in a moment, you deploy components such as a Lambda function, which reacts to events. You don't need to bring your own server, and you don't need to bring your own API gateway.
AWS has something for you that is fully serverless, so you really focus on the business logic and not on keeping your server up to date, managing certificates, and so on. AWS and other cloud providers such as GCP have serverless solutions, so you can really focus on your CRUD. Another cool aspect is that it automatically scales.
Say you have a pizza shop that is closed on Mondays. You don't want to pay for your servers on Monday, because obviously there won't be any orders while you're closed. But maybe on Tuesday around midday there's a big spike of traffic, a lot of pizzas to be delivered, and now you need to spawn a lot of different servers. This can be difficult in a non-serverless world, but with serverless it's built in: you don't have to do anything, it scales for you.
If you use it correctly, you also get built-in high availability. It means that if you deploy a new version, the traffic shifts from v1 to v2. You have to follow some best practices in the code, but it's generally built in. The fourth thing, and not the least, is that you pay for what you use.
Billing is usually by the millisecond, so the faster your function runs, or the faster your container spawns, the lower the bill will be at the end of the month. Those are four pillars of serverless; of course there are more features, but let's focus on those today.
You may already know some serverless services. We talked about AWS Lambda, and we are going to talk about Cloud Run from Google as well, but it's way more than that: DynamoDB, S3, there are a lot of different serverless services. Today, though, we are going to focus on AWS Lambda and Google Cloud Run.
Let's start with AWS Lambda. You can see it as function-as-a-service: you deploy only a function, and it becomes a service. It's event-driven, so it's woken up by an event. That can be an HTTP API Gateway event, a websocket message as we just saw, or putting an object into an S3 bucket, which can wake up a Lambda function. And it supports almost any language out of the box.
That would be Java, Go, Node, Python, and so on, and of course you can bring your own runtime. And guess what: we are at a Rust meetup, so we are going to talk about Rust.
As I said before, since it's not always on, if you don't have any traffic your Lambda function won't be up; you get what we call a cold start. It means that on the first request, or when you have a big spike of requests and your current instances cannot keep up with the pace of the incoming requests, another Lambda function will spawn in another sandbox. This is what we call a cold start, and the cold start itself is actually free.
You don't pay for it, but you pay in terms of latency, because if your function takes five seconds to start, someone at some point will wait five seconds. As you saw during the demo (I hope), there was no latency; everything was super smooth. We went from maybe 30 to 100 users, which is not a big spike of traffic, but the point is: even if we had 1,000 orders, the website would still have been super responsive, trust me.
So, since the function is not always on, we have what we call cold starts and warm starts. Both start with an event, because an AWS Lambda function takes an event: say a websocket event or an API Gateway event, an HTTP request for instance. On a cold start, AWS will first download the code, then start the environment (underneath it's a Firecracker VM), and then there is code initialization. Those phases, the yellow ones on the slide, are the cold start, and then finally your function executes.
Say it's a hello world, or putting a pizza into DynamoDB: that's when your code executes. On the second request, or when you have more traffic, every request other than a cold start is a warm start. We don't need to download the code, we don't need to start the environment because it's already there, and the code has already been initialized; you just execute your code. So that's a warm start versus a cold start.
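This split is also why Lambda handlers typically build expensive state once, at initialization time, and reuse it on every warm invocation. Here is a minimal std-only sketch of that init-once shape; the names and the string standing in for a real SDK client are illustrative, not from the talk's code:

```rust
use std::sync::OnceLock;
use std::sync::atomic::{AtomicU32, Ordering};

// Counts how many times the expensive init actually runs.
static INIT_CALLS: AtomicU32 = AtomicU32::new(0);

// Stands in for an expensive resource (e.g. an SDK client) that should be
// built once per sandbox, during the cold start, and reused on warm starts.
static CLIENT: OnceLock<String> = OnceLock::new();

fn client() -> &'static String {
    CLIENT.get_or_init(|| {
        INIT_CALLS.fetch_add(1, Ordering::SeqCst);
        "connected-client".to_string() // placeholder for real client setup
    })
}

// One invocation: reuses the shared client instead of rebuilding it.
fn handle_invocation(event: &str) -> String {
    format!("{} handled by {}", event, client())
}
```

Calling `handle_invocation` repeatedly pays the initialization cost only once, which is exactly the cost that shows up as "init duration" in the Lambda logs discussed later.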
And of course, if you want the lowest possible latency, you need to make sure that, for instance, code initialization is really, really small.
So I've developed a small tool called Lambda Perf. I don't know if you've seen it before, but it's an always-up-to-date benchmark of cold starts across runtimes. I'm going to zoom in a bit; I don't know how big the screen is, but each yellow dot is one cold start. If I refresh in real time, you see what it takes to wake up a Lambda function written in Rust, in Python, in Ruby, in Node.js, in Java, and so on. It's always up to date because it auto-updates: every day, every single Lambda function is destroyed, recreated, and invoked ten times, making sure those are cold starts. We take the average duration and print it on the website, and as you can see, Rust is actually the fastest. We are going to see why, and why Rust is such a great candidate for serverless.
If you want to contribute a new runtime, feel free. The aim is to have a very open project; it's not biased in any way, it's just a hello world in every runtime. And yeah, Rust is winning.
A
There
is
nothing
you
can
really
do
about
that.
I
guess:
you'll
love
that
function.
It
will
not
be
like
one
gigabyte.
So
AWS
has
some
internal
mechanism
to
Cache
your
dependency
and
so
on.
So
it
should
be
really
really
fast
to
the
load.
This
is
a
guess:
it's
not
officially
documented,
but
I
guess
there's
some
some
good
optimization
start
the
environment.
Once
again
you
you
can
do
in
something
about
that
fire
career
cam
fire,
firecracker
VM,
will
spawn
its
node.js,
go
it's
the
exact
same
underlying
in
front.
A
But
then
you
have
code
initialization,
and
this
is
where
rust
could
play
a
huge
role,
and
this
is
what
we
saw
in
the
Benchmark
if
you're
using
rest,
the
code
initialization
will
be
faster.
Let's
see
why
so
yeah
I
spoiled
it
but
yeah.
Let's
talk
about
rest
for
that,
because
we
saw
that
rest
was
this
candidate
for
that.
So why does Rust have faster cold starts? First, Rust binaries can be extremely small. As you may know, there is no garbage collector in Rust, so the runtime footprint, the minimal code needed to run a Rust binary, is extremely small, and it's hard to beat. For instance (I have nothing against Go), if you build a hello world in Go, there is a Go runtime inside the binary, so the binary will be a bit bigger; we'll see some numbers a bit later.
The other thing is that Rust is extremely good at memory management: you can have very tight control over the number of allocations, the size of the allocations, and so on. If you do a lot of allocations on the heap or the stack, the time to start the binary will be longer, so Rust is a really nice candidate because you have control over that.
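As a small illustration of that control (illustrative code, not from the talk): a fixed-size array lives on the stack with no allocator call, while a `Vec` allocates on the heap, and `with_capacity` lets you pay for one allocation up front instead of several growth steps:

```rust
// Two ways to hold a small, fixed-size batch of order ids. The array lives
// on the stack (no allocator call); the Vec allocates on the heap.
fn stack_batch() -> [u32; 4] {
    [1, 2, 3, 4]
}

fn heap_batch() -> Vec<u32> {
    // with_capacity makes one allocation up front instead of growing piecemeal.
    let mut v = Vec::with_capacity(4);
    v.extend([1, 2, 3, 4]);
    v
}
```

Both return the same data; the difference is purely in how much allocator work happens before the function returns.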
Rust also has some very interesting build flags. Say you have your Lambda function written in Rust; you can play with build flags to make the binary extremely small and really well optimized. The first flag is the optimization level.
A
If
you
set
it
to
Z,
it
means
it's
optimized
for
size,
so
maybe
not
for
a
performance
when
it
runs,
but
the
binary
will
be
extremely
small
and
it
means
that
it
would
be
fast
to
run
so
the
course
that
will
be
extremely
fast.
Maybe
the
runtime
duration
will
be
a
bit
slower
if
you
optimize
for
size
and
not
performance,
but
that's
intuitive,
so
you
can
play
with
that
and
see
how
it
affects
cold
starts.
Second,
flag
is
lto
means
linked
time
optimization.
A
So
if
you
enable
it,
the
optimization
overall
will
be
a
bit
better,
but
at
the
cost
of
a
longer
linking
time.
So,
if
you're
not
in
a
hurry,
it's
really
for
a
hello
world.
It's
a
couple
of
milliseconds,
maybe
for
a
bigger
application,
might
be
a
bit
different,
but
for
serverless
once
again,
I
strongly
suggest
you
to
to
try
this
this
slide.
Then
you
have
the
code
generation
unit
by
default.
Rest
is
doing
some
local
in
local,
optimization
of
the
code
for
each
unit.
So
let's
say
you
have
10
unit
of
compilation.
A
you get local optimization within each of those ten blocks, which tends to compile faster, but an optimization that spans two blocks won't be done. If you specify that you only want one codegen unit, you're sure that all the local optimizations are done in the same place.
Then you get a faster binary at a longer compile time, but sometimes that's fine. The last flag: I'm sure you already know what a panic is in Rust. You get this nice stack trace, which is really useful for debugging, but for a Lambda function maybe you don't need that. You can strip it: instead of outputting the full stack trace, you just abort, and that's it. It removes the code used to unwind the error up to the very top of the stack; if you don't need it, you can strip it. So those are the build flags.
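Put together, the four flags described above correspond to a Cargo release profile along these lines (a sketch; the `strip` entry is a common extra for shrinking binaries, not one of the four flags from the talk):

```toml
# Cargo.toml — release profile tuned for small, fast-starting binaries
[profile.release]
opt-level = "z"     # optimize for binary size rather than raw speed
lto = true          # link-time optimization: better optimization, longer link time
codegen-units = 1   # one codegen unit so local optimizations see the whole crate
panic = "abort"     # abort on panic; drops the unwinding machinery from the binary
strip = true        # also strip symbols from the final binary
```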
So, show me numbers. For a Rust hello-world Lambda function, the size with the four flags I've just shown is about 722 kilobytes, so extremely small. For reference (once again, nothing against Go, I just needed another runtime for comparison), Go is about 4.7 megabytes. Maybe there are some flags I didn't use, but I did use -w and -s to strip the symbol table and so on, to get it as small as possible. So this is a no-brainer: it's almost five times smaller.
That's it for the Lambda build; now I want to show you something. For the connection handling, this is a Lambda function written in Node.js. When someone goes to the website, it receives a connection event from the websocket saying "I want to order a pizza". To keep track of that, we insert something into DynamoDB, which is a database service from AWS, and what we want is to match the current connection ID to the order ID that we want to track.
If we go back here, for instance, I have an ID. So if I change the status, I can match the current connection to the current order ID to update the UI. What I need to do is read the event (once again, Lambda works with incoming events), check the query string, check if there is an order ID, and then put an item into DynamoDB. Let's have a look: there is no one on the website currently, so if I refresh and go back to my website, I should have an item here. Yes, and as you can see it's fully real-time; nothing is hard-coded.
We have a connection ID, I mark it as connected, and then the order ID gets written. If we look at the connection logs, we can see that the init duration, the cold-start duration, of this function is about 294 milliseconds (I can zoom in a bit if you want). So it's pretty good.
It's quite fast, but if we want to go a bit further, maybe we can rewrite it in Rust, and I'm going to show you the code in a moment. The Rust version does the exact same thing, and if you look at the report log line there, the init duration is now 43 milliseconds.
Of course we saw with the hello world that Rust beats Go, for instance. But as soon as we have a bigger use case (it's not super big, but it takes an event, reads the query string parameter, and puts an item into DynamoDB), the init duration, meaning loading the AWS SDK into memory and so on, is only 43 milliseconds, which to me is mind-blowing; it's super fast. And then the actual runtime to put an item is 99 milliseconds against 572, so the runtime duration is quite a bit faster with Rust as well.
Now I want to show you a bit of code. This is the Rust code that builds and deploys the Lambda function. The main function receives an event, in this case a websocket proxy request; these event types are provided by AWS, so you just need to import the library and you get the proxy request. Then, instead of hard-coding the table name, you can read it from an environment variable, for instance. Then I take the order ID, and finally I spawn a new DynamoDB client, put an item, and that's it. It's the exact same code I showed you in Node.js, written in Rust.
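The handler's logic can be sketched with the standard library alone. In the real function this sits inside a `lambda_runtime` handler and the item is written with the `aws_sdk_dynamodb` client; both crate names, and the field names below, are assumptions for illustration rather than the talk's actual code:

```rust
use std::collections::HashMap;
use std::env;

/// Pull the order id out of the websocket request's query string parameters.
fn order_id(query: &HashMap<String, String>) -> Option<&str> {
    query.get("orderId").map(String::as_str)
}

/// Build the item: the connection id mapped to the order being tracked.
/// (In the real code the values would be DynamoDB AttributeValues.)
fn build_item(connection_id: &str, order_id: &str) -> HashMap<String, String> {
    let mut item = HashMap::new();
    item.insert("connectionId".to_string(), connection_id.to_string());
    item.insert("orderId".to_string(), order_id.to_string());
    item.insert("status".to_string(), "connected".to_string());
    item
}

/// The table name is read from the environment instead of being hard-coded.
fn table_name() -> String {
    env::var("TABLE_NAME").unwrap_or_else(|_| "orders".to_string())
}
```

The real handler simply glues these steps together: parse the event, bail out if there is no order id, then put the item into the table named by `TABLE_NAME`.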
First, it's super fun to write, super easy to test, and super easy to deploy, like any other runtime. The code will be available open source if you want to have a look. That's it for the Lambda function: really simple, add tracing if you want (that's always better), put an item into DynamoDB, and that's it. And as you can see, the init duration goes from more than 200 milliseconds to below 50 milliseconds, which is pretty neat.
Serverless is obviously not just Lambda; there are other services as well. So let's have a look at Cloud Run, for instance. Cloud Run is a service from GCP, Google Cloud Platform, which you can see as a Lambda function that is a bit bigger: instead of having just one entry point, your Lambda function, you can have a full container. Say you want an Express server in Node.js, or an actix-web server in Rust.
It's really just a regular container: no constraints, no runtime API, nothing. You ship it to Cloud Run and it will auto-scale depending on your needs, and it can scale to zero. And once again, if it can scale to zero, it means you are going to experience cold starts as well.
So I've done a small experiment that I'm going to show you. On Cloud Run I have, for instance, a Go container; as you can see it's very simple Go, it listens and serves on 8080 and returns "hello from go". I have the exact same thing for Node.js, "hello from node.js", just using Express, and for Rust as well.
For Rust I even created a test, which shows you how big an enthusiast I am about Rust, but you should write tests for everything, of course. So you have an actix-web service that just spawns a hello route and returns "hello from rust"; very simple code, and I've deployed it. If we go to the Go one (I'll zoom in), it says "hello from go", then "hello from rust" and "hello from node.js": the exact same Cloud Run setup for three different runtimes.
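The shape of each service can be sketched with the standard library alone. The real demo uses actix-web (and Express and net/http); everything below, including the response body, is an illustrative stand-in for what each container does:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

/// Build a minimal plain-text HTTP/1.1 response for the given body.
fn http_response(body: &str) -> String {
    format!(
        "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: {}\r\n\r\n{}",
        body.len(),
        body
    )
}

fn handle(mut stream: TcpStream) {
    let mut buf = [0u8; 512];
    let _ = stream.read(&mut buf); // consume the request; contents don't matter here
    let _ = stream.write_all(http_response("hello from rust").as_bytes());
}

/// Accept loop; on Cloud Run the listener would be bound to the port in $PORT
/// (8080 in the demo), one thread per connection.
fn serve(listener: TcpListener) {
    for stream in listener.incoming().flatten() {
        thread::spawn(move || handle(stream));
    }
}
```

A real `main` would bind `0.0.0.0` to the `PORT` environment variable Cloud Run provides and call `serve` on the listener.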
Now let's benchmark that. I prepared a dashboard for you; it's directly available in the Google Cloud console, so there's no instrumentation library that could affect cold starts, nothing, just plain metrics. Let me zoom in a bit. First, I'll show you what I've done: I load-tested the three services, sending a lot of requests. It's not huge, but 150 requests per second is enough to spawn new containers continuously, because one container cannot handle that much load.
The first thing to note is the request latency. Once the container is warm, we can check how long the hello world takes, and Go and Rust are almost the same, about 1.6 milliseconds. I'm not sure why, but Node.js is a bit longer; one extra millisecond just to print hello world, though, is not a big deal. If you look at the 99th percentile, so the outliers, the longest requests, Rust is winning as well, by about half a millisecond.
I'm not sure that's really interesting, but that's the request latency; no big deal. What is really interesting is the container startup latency. Once the container is full of requests, we need to spawn a new one, because otherwise you are going to wait too long. Spawning a new container means, among other things, downloading the image, and in this test I used the same Debian-based base image for all three runtimes.
The code will be available if you want to reproduce that at home. As you can see, spawning a new Rust container takes about 119 milliseconds, which is once again absolutely mind-blowing. Go takes a bit longer, 144, but is still okay; but as soon as you hit a non-compiled language it takes longer, maybe a bit more than half a second for Node.js. And if we look at the 50th and 99th percentiles, Rust is still winning.
The point is, when you need to scale out to handle more traffic, you first need to ask yourself: do I need this to be extremely fast? Maybe it's a nightly job and I don't really care. But if you really need it, if it's client-facing, maybe you need it to be really, really performant, and then Rust could be a good option.
That was the demo, and that's it for me. I hope I convinced you to at least have a look at Rust for serverless, if you actually need it. I cannot overstate how fun it is to write Rust for serverless: great tooling, and for AWS Lambda, AWS is doing a really great job on the SDK for writing Rust Lambda functions. It's really, really fun; every incoming event type is already supported: DynamoDB, websockets, S3.