From YouTube: RustLatam 2019 - Without Boats: Zero-Cost Async IO
Description
Boats is a member of the language design, library, and Cargo teams of the Rust Project. They are a Senior Research Engineer working on Rust for Mozilla Research. Contributing to Rust since 2014, they've done design work for significant language extensions like generic associated types and const generics. For the last year, Boats has been focused on adding async/await syntax for non-blocking I/O to Rust.
Follow us on Twitter: https://twitter.com/rustlatamconf
I think someone may have already pointed this out, but I believe this is the first Rust conference outside of the United States or Europe, and so as someone who works on Rust, I'm really excited and glad to see our global community thriving and growing. It's really cool to see all the conferences that are happening this year. OK, on to the technical stuff.
So the feature that I've been working on is this thing called async/await. It's probably going to be the biggest thing that we do in the language this year. We're planning to ship it sometime in the next few months, and it's the solution to a problem that we've been struggling with for a really long time: how can we have a zero-cost abstraction for asynchronous I/O in Rust? I'm going to explain what zero-cost abstraction means in a moment, but first, just to give an overview of the feature.
When you mark a function as async, instead of running all the way through and returning when you call it, it returns immediately, and what it returns is a future that will eventually resolve to whatever the function would have returned. Inside of an async function, you can take the await operator and apply it to other futures, which will pause the function until those futures are ready. So it's a way of handling asynchronous, concurrent operations using these annotations that makes them much easier to write. Here's a little code sample, just to highlight and explain the feature.
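It might look something like this sketch, written with today's stabilized `.await` syntax; the `Database`, `Row`, and `User` types and the `query` API are stand-ins I've made up for illustration, not the actual code from the talk:

```rust
use std::io;

// Hypothetical database handle; stands in for whatever ORM/driver the talk used.
struct Database;
struct Row(String);
struct User { name: String }

impl Database {
    // query performs async I/O, so it returns a future.
    async fn query(&self, _sql: &str) -> io::Result<Row> {
        // A real driver would send the query over the network here.
        Ok(Row(String::from("boats")))
    }
}

impl User {
    fn from_row(row: Row) -> User {
        User { name: row.0 }
    }
}

// Marking the function `async` makes it return a future that resolves to
// io::Result<User>, instead of running to completion when called.
async fn get_user(db: &Database, username: &str) -> io::Result<User> {
    // Build the SQL query, interpolating the username.
    let sql = format!("SELECT * FROM users WHERE username = '{}'", username);
    // The actual I/O point: `query` returns a future, and `.await`
    // pauses this function until that future is ready.
    let row = db.query(&sql).await?;
    // Parse the User domain object out of the response.
    Ok(User::from_row(row))
}
```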
This is basically just an adapter on a kind of ORM type of thing. It's handwritten, and you have this get_user method, which takes a string for a username and returns a User domain object by querying the database for that user's record. It does that using async I/O, which means that it's an async function instead of just a normal function, and so when you call it, you can await it. Now let's walk through the body of this method.
The first thing it does is create the SQL query, interpolating the username into a SELECT on the users table. Then we query the database, and this is where we're actually performing some I/O. Query also returns a future, because it's doing async I/O, and so when you query the database, you just add this await in order to wait for the response. Then, once you get the response, you can parse a user out of it.
That User domain object is, you know, part of your application. This method is just a toy example for the talk, but what I wanted to highlight in it is that the only difference between this and using blocking I/O is these little annotations: you just mark the functions as being async, and when you call them, you add this await. So there's relatively little overhead to switching from blocking I/O to non-blocking I/O.
In Rust in particular, zero-cost abstractions are sort of the defining feature. It's one of the things that differentiates us from a lot of other languages: we really care, when we add new features, that they are zero cost. We didn't actually come up with the idea; it's a big thing in C++ also, and so I think the best explanation is this quote from Bjarne Stroustrup, which is that a zero-cost abstraction means that what you don't use, you don't pay for, and further, what you do use, you couldn't hand-code any better.
So the specific problem that we're trying to solve is async I/O. Normally, I/O is blocking: when you do I/O, you block the thread, which stops your program, and the thread then has to be rescheduled by the OS. The problem with blocking I/O is that it just doesn't scale when you're trying to serve a lot of connections from the same program, and so for these kinds of really high-scale network services, you need some form of non-blocking or asynchronous I/O.
Rust, in particular, is designed for programs that have these really high performance requirements. It's a systems programming language for people who really care about the computing resources they're using, and so for us to be successful in the network space, we really need some sort of solution to this asynchronous I/O problem.
But the big problem with async I/O is the way it works: when you make the I/O system call, it returns immediately, and you can continue doing other work. But it's your program's responsibility to figure out how to schedule getting back to the tasks that you paused while doing the asynchronous I/O, and this makes writing an async I/O program much more complex than writing a blocking one.
One solution to this is green threads. They look like spawning threads and then just blocking on I/O; everything looks exactly the same as if you were using the native OS primitives, but they've been designed as part of the language runtime to be optimized for this use case: network services running thousands, tens of thousands, maybe millions of green threads at the same time.
A language that I think is having a lot of success with this model right now is Go, where they're called goroutines, and it's very normal for a Go program to have tens of thousands of these running at the same time, because they're very cheap to spawn, unlike OS threads. One advantage of green threads is that the memory overhead is much lower: when you spawn an OS thread, you create this huge stack for each thread, whereas with green threads,
the way they normally work is that you spawn a thread that starts with a very small stack that can grow over time, so spawning a bunch of new threads that aren't using a lot of memory yet is much cheaper. Another problem with using the operating system primitives is that you depend on the operating system's scheduling, which means you have to switch from your program's memory space into the kernel's space, and that context switching adds a lot of overhead
once you start having tens of thousands of threads that are all being switched between really quickly. By keeping that scheduling in the same program, you avoid those context switches, which really reduces the overhead. So green threading is a pretty good model that works for a lot of languages.
Both Go and Java, I believe, have used this model, and Rust had it for a long time but removed it shortly before 1.0. The reason we removed it is that it ultimately was not a zero-cost abstraction, specifically because of the first issue that I talked about: it was imposing costs on people who didn't need it. If you just wanted to write a Rust program that didn't need to use green threads, that wasn't a network service, you still had to have this language
runtime that was responsible for scheduling all of your green threads. This was especially a problem for people who were trying to embed Rust inside of a larger C application. One of the ways that we've seen a lot of success in Rust adoption is that people have some big C program, they want to start using Rust, and so they start integrating a bit of Rust into their program.
So we removed green threads from the language, we removed that language runtime, and we now have a runtime that is essentially the same as C's. That makes it very easy and very cheap to call between Rust and C, which is one of the key things that makes Rust really successful. But having removed green threads, we still needed some sort of solution to async I/O, and what we realized was that it needed to be a library-based solution: we needed a library providing a good abstraction for asynchronous I/O, and that abstraction is the future.
The idea of a future is that it represents a value that may not have been evaluated yet, and so you can manipulate it before you actually have it. The future eventually resolves to something, but you can start building things with it before it's resolved. There's been a lot of work done on futures in a lot of different languages, and they're a great foundation for supporting a lot of combinators, and especially this async/await syntax,
which makes it much more ergonomic to build on top of this concurrency primitive. Futures can represent a lot of different things. Async I/O is the biggest, most prominent one: you make a network request and you immediately get a future back, which, once the network request is finished, will resolve into whatever the network request returns.
But you can also represent things like timeouts, where a timeout is just a future that will resolve once that amount of time has passed, and even things that aren't doing any I/O at all, just CPU-intensive work: you can run that on a thread pool and hold on to a future, and once the thread pool has finished doing that work, the future will resolve. The problem with futures was the way they've been represented in most languages,
which is this callback-based approach: you have a future, and you can schedule a callback to run once the future resolves. The future is responsible for figuring out when it resolves, and when it does, it runs whatever your callback was, and all the abstractions are built on top of this model. This just really didn't work for us, because people experimented with it a lot and found that it was forcing way too many allocations.
So instead we arrived at this model, and I really want to give credit to Alex, who's here, and to Aaron Turon, who came up with this idea: instead of futures scheduling a callback, you poll them. There's this other component of the program called an executor, which is responsible for actually running the futures. What the executor does is poll the future, and the future either returns Pending, meaning it's not ready yet, or, once it is ready, it returns Ready. This model has a lot of advantages.
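For reference, the poll-based Future trait as it was ultimately stabilized in std looks like this; the Pin in the receiver is the part of the story explained later in the talk:

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

pub trait Future {
    /// The type of value the future eventually resolves to.
    type Output;

    /// Called by the executor to drive the future forward. Returns
    /// Poll::Pending if the future isn't ready yet (after arranging to
    /// be woken via the Waker inside `cx`), or Poll::Ready(value) once
    /// it has resolved.
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}
```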
One advantage is that the future compiles into a state machine. Say you perform two I/O events: the state machine has these different states, and each state holds the amount of space it needs to store everything you'll need to resume from that state. The entire future is then just a single heap allocation of that size: you allocate the whole state machine into one place in the heap,
and there's no additional overhead. You don't have all these boxed callbacks and things like that; you just have this perfect, truly zero-cost model. I feel like this is usually a bit confusing to people, so I tried, to the best of my Keynote abilities, to visually represent what's going on. You spawn a future, and that puts the future in the heap, in this one location, and then there's a handle to it that's stored in the executor.
When the I/O it's waiting on is ready, the future gets woken up using the Waker argument that you passed in when you polled it, and waking the future up passes it back to the executor. The executor then polls it again, and it just goes back and forth like this until eventually the future resolves. When the future finally resolves, and evaluates to its final result, the executor knows that it's done, so it drops the handle, drops the future, and the whole thing is finished. So it forms this sort of cycle: poll the future, wait for I/O, get woken up again, poll it again, on and on in a loop until eventually the whole thing is finished.
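To make that cycle concrete, here's a toy single-future executor written against today's std task APIs (which postdate this talk); a real executor juggles many tasks, and a reactor delivers the wakeups:

```rust
use std::future::Future;
use std::sync::{Arc, Condvar, Mutex};
use std::task::{Context, Poll, Wake, Waker};

/// Wakeup flag shared between the waker and the executor loop.
struct Notify {
    woken: Mutex<bool>,
    cv: Condvar,
}

impl Wake for Notify {
    fn wake(self: Arc<Self>) {
        *self.woken.lock().unwrap() = true;
        self.cv.notify_one();
    }
}

/// Drive a single future to completion: the poll/wake cycle from the talk.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut); // one stable location in the heap
    let notify = Arc::new(Notify { woken: Mutex::new(false), cv: Condvar::new() });
    let waker = Waker::from(notify.clone());
    let mut cx = Context::from_waker(&waker);
    loop {
        // Poll the future; Pending means it stashed the waker somewhere
        // (e.g. with the reactor) and will wake us when I/O is ready.
        if let Poll::Ready(value) = fut.as_mut().poll(&mut cx) {
            return value;
        }
        // Sleep until wake() is called, then loop around and poll again.
        let mut woken = notify.woken.lock().unwrap();
        while !*woken {
            woken = notify.cv.wait(woken).unwrap();
        }
        *woken = false;
    }
}
```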
This model ended up being quite fast. This is the benchmark that was posted in the first blog post about futures, where futures was benchmarked against a lot of different async I/O implementations from other languages.
Higher is better, and futures is the one on the far left. So we had this really great zero-cost abstraction that was competitive with the fastest implementations of async I/O in a lot of other languages. But of course the problem is that you don't want to write these state machines by hand; having your entire application state written out as a state machine is not very pleasant. That's where the future abstraction is really helpful: instead, you can build the state machine up out of combinators, chaining closures onto futures.
And this works, but it has some downsides, especially these nested callbacks, which can be really difficult to read sometimes. Because of those downsides, we also started trying to implement async/await. The first version of async/await was not part of the language; instead, it was a library that provided the syntax through a syntax plugin. This example does the same thing the previous function did: it fetches rust-lang.org
and turns the response into a string, but it does so using a single async function. It's much more straight-line; it looks much more like the way normal blocking I/O code reads, just like in the example I showed originally: the only real difference is the annotations. The async annotation turns the function into a future, instead of it running immediately, and the await annotations wait on the futures that you construct inside of the function.
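The library-era version looked roughly like this; I'm reconstructing it from the futures-await crate's examples of the time, so treat the exact types and calls as approximate rather than as the talk's literal slide:

```rust
// `#[async]` and `await!` came from the futures-await syntax plugin,
// before async/await was part of the language.
#[async]
fn fetch_rust_lang(client: hyper::Client) -> io::Result<String> {
    // `await!` was a macro back then, not a keyword.
    let response = await!(client.get("https://www.rust-lang.org"))?;
    if !response.status().is_success() {
        return Err(io::Error::new(io::ErrorKind::Other, "request failed"));
    }
    let body = await!(response.body().concat())?;
    let string = String::from_utf8(body)?;
    Ok(string)
}
```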
Under this poll model, await desugars to this sort of loop: you poll in a loop, and every time you get Pending back, you yield all the way back up to the executor that you're pending, and it waits until it gets woken up again. When, finally, the future that you're awaiting finishes with a value, you break out of the loop with that value, and that's what the await expression evaluates to.
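Here's a hand-written sketch of what a single await point amounts to; the real lowering goes through the compiler's generator machinery and doesn't box the inner future, so take this as a conceptual model only:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

/// Hand-written equivalent of `async { inner.await }`: roughly the
/// state machine the compiler generates for a single await point.
struct Await<F: Future>(Pin<Box<F>>);

impl<F: Future> Future for Await<F> {
    type Output = F::Output;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<F::Output> {
        // Each time we're polled, poll the awaited future (the "loop").
        match self.0.as_mut().poll(cx) {
            // Not ready yet: yield Pending all the way up to the
            // executor; we'll be polled again once the waker fires.
            Poll::Pending => Poll::Pending,
            // Ready: the await expression evaluates to this value.
            Poll::Ready(value) => Poll::Ready(value),
        }
    }
}
```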
With the combinators, though, you'd get these enormous error messages, and you'd have to dig through them to try to figure out what the actual error you encountered was. I found this quote on Reddit which I think really beautifully sums up all of the complaints about futures: "When using futures, the error messages are inscrutable. Having to use RefCell or clone everything for each future leads to overly complicated code, and it makes me wish that Rust just had garbage collection." So yeah, not great feedback.
The first lesson was that we needed language support, so that you can have really good error messages for async/await. But the second was that most of the errors people were running into were actually them bouncing off a rather obscure problem, which we call the borrowing problem: there was a fundamental limitation in the way futures was designed that made a really common pattern impossible to express. The problem was that, in the original design of futures, you could not borrow across an await point.
These kinds of borrows across awaits are extremely common, because the natural API surface of Rust is to have references in your APIs. But the problem is that when you compile the future, which has to store all of that state, and you have a reference to something else in the same stack frame, what you end up with is a sort of self-referential future. Here's some code from the original
get_user method, where we have this SQL string, and when we call query with it, we pass a reference to the SQL string. The problem here is that this reference to the SQL string is a reference to something that's being stored in the same future's state, and so the future becomes a sort of self-referential struct: if you wrote the future's fields out as a real struct, one field would be borrowing from another. And in Rust, moving a struct just copies its memory to a new location.
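In struct form, the problem looks something like this illustrative sketch (not real generated code; note that an ordinary borrowed reference can't even express the self-borrow, hence the raw pointer):

```rust
/// Roughly the state of get_user's future while it is suspended at
/// the `query(&sql).await` point.
struct GetUserFuture {
    sql: String,            // the SQL string, owned by the future itself
    sql_ref: *const String, // points at the `sql` field above: a self-reference
    // ...plus the state of the in-flight query future
}
```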
But when you make that copy, the reference that was self-referential is still pointing into the old copy, and it becomes a dangling pointer, which is exactly the kind of memory unsafety that Rust has to prevent. So we can't have self-referential structs, because if you move them around, they become invalidated.
What made this really frustrating in the futures case was that we don't actually need to move these futures around. If you remember the model, where the future sits in the heap and a handle to it gets passed back and forth between the reactor and the executor, the future itself never actually moves. And it's totally fine for a future to contain self-references as long as you never move it, and you don't need to move it.
So we needed to solve this problem with some way to express, in the futures API, that while you're polling a future you're not allowed to move it. If we could just express that somehow, then we could allow these kinds of self-references in the body of the future, and we could have these really natural borrows across await points. The solution we ended up with is called pinning.
If you have something in your API which says it has to be taken by Pin, then you know that it will never be moved again, and you can have these kinds of self-referential structs. So we changed the way futures work: instead of just being a boxed future, now it's a boxed future behind a Pin. Once we've boxed it up and put it in the heap, it's guaranteed, as part of the Pin API, that it will never move again.
Then, when you poll the future, instead of passing a normal reference to it, we pass a pinned reference, and so the future knows that it can't be moved. The trick that makes this all work is that you can only get an unpinned reference out of a pinned reference in unsafe code; we made it an unsafe function to do that. So the API looks roughly like this.
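Here's a simplified sketch of that shape (the real std::pin::Pin has more methods, bounds, and Unpin-based escape hatches than shown here):

```rust
/// Just a wrapper around another pointer type; it adds no runtime
/// overhead, it only marks the pointee as pinned.
pub struct Pin<P> {
    pointer: P,
}

impl<P> Pin<P> {
    // Unsafe: the caller promises the pointee will never move again.
    pub unsafe fn new_unchecked(pointer: P) -> Pin<P> {
        Pin { pointer }
    }
}

impl<T> Pin<Box<T>> {
    // A pinned box can be reborrowed as a pinned mutable reference.
    pub fn as_mut(&mut self) -> Pin<&mut T> {
        Pin { pointer: &mut *self.pointer }
    }
}

impl<'a, T> Pin<&'a mut T> {
    // The only way back to an ordinary &mut T is an unsafe function:
    // the caller must not move the value out of the reference.
    pub unsafe fn get_unchecked_mut(self) -> &'a mut T {
        self.pointer
    }
}
```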
You have Pin, which is just a wrapper around another pointer type. It doesn't add any runtime overhead or anything; it just marks the pointee as being pinned. A pinned box can be converted into a pinned reference, but the only way to convert a pinned reference into an unpinned reference is to use an unsafe function.
Otherwise, the API is the same as it was before, and this is essentially the API that we're going to be stabilizing. And with that change, the code from the first example just works the way it's written. You can write code exactly the way you would write it with blocking I/O, add these async and await annotations, and what you get is async I/O with this really awesome zero-cost abstraction, where it's basically as cheap as if you had hand-written the state machine yourself.
So, the situation today: pinning was stabilized in the last release, about a month ago. We're in the process of stabilizing the future API, probably in 1.35, though maybe it will slip to 1.36, which is to say in about two or three months. And then we're hoping that sometime this year, hopefully even by the end of summer, we're going to have async/await itself stabilized, so that people will be able to write non-blocking I/O network services using this syntax that makes it very similar to writing with blocking I/O.
Looking beyond that stabilization, we're also already starting to work on more long-term features. Streams, I think, are probably the next big one: a future is just one value, but a stream is many values being yielded asynchronously.
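The Stream trait in the futures crate at the time had roughly this shape (it has since continued to evolve toward async iterators):

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

pub trait Stream {
    /// The type of the values the stream yields.
    type Item;

    /// Like Future::poll, but resolves many times: Poll::Ready(Some(item))
    /// for each value yielded, and Poll::Ready(None) once the stream
    /// is exhausted.
    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>)
        -> Poll<Option<Self::Item>>;
}
```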
A stream is essentially an asynchronous iterator, and so you'll be able to loop asynchronously over a stream, and things like that. It's very important for a lot of use cases: streaming HTTP, WebSockets, HTTP/2 push requests, that kind of thing, where instead of an RPC model, where you make a network request and get a single response, you have streams of requests and responses going back and forth.
The real critical insights that led to this zero-cost async I/O model were, first, this poll-based design of futures, which lets us compile these futures into really tight state machines, and second, this way of doing async/await syntax where we're able to have references across await points because of pinning. That slide was supposed to say thank you; I don't know how it became an X.